NGDP targeting has been gaining popularity recently. But targeting market-based inflation forecasts will be about as good under most conditions [1], and we have good markets that forecast the U.S. inflation rate [2].

Those forecasts have a track record that starts in 2003. The track record seems quite consistent with my impressions about when the Fed should have adopted a more inflationary policy (to promote growth and to get inflation expectations up to 2% [3]) and when it should have adopted a less inflationary policy (to avoid fueling the housing bubble). It’s probably a bit controversial to say that the Fed should have had a less inflationary policy from February through July or August of 2008. But my impression (from reading the stock market) is that NGDP futures would have said roughly the same thing. The inflation forecasts sent a clear signal starting in very early September 2008 that Fed policy was too tight, and that’s about when other forms of hindsight switch from muddled to saying clearly that Fed policy was dangerously tight.

Why do I mention this now? The inflation forecast dropped below 1 percent two weeks ago for the first time since May 2008. So the Fed’s stated policies conflict with what a more reputable source of information says the Fed will accomplish. This looks like what we’d see if the Fed were in the process of causing a mild recession to prevent an imaginary increase in inflation.

What does the Fed think it’s doing?

  • It might be relying on interest rates to estimate what its policies will produce. Interest rates this low after 6.5 years of economic expansion resemble historical examples of loose monetary policy more than they resemble the stereotype of tight monetary policy [4].
  • The Fed could be following a version of the Taylor Rule (see the sketch after this list). Given standard guesses about the output gap and equilibrium real interest rate [5], the Taylor Rule says interest rates ought to be rising now. The Taylor Rule has usually been at least as good as actual Fed policy at targeting inflation indirectly through targeting interest rates. But that doesn’t explain why the Fed targets interest rates when that conflicts with targeting market forecasts of inflation.
  • The Fed could be influenced by status quo bias: interest rates and unemployment are familiar types of evidence to use, whereas unbiased inflation forecasts are slightly novel.
  • Could the Fed be reacting to money supply growth? Not in any obvious way: the monetary base stopped growing about two years ago, M1 and MZM growth are slowing slightly, and M2 accelerated recently (but only after much of the Fed’s tightening).
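
For concreteness, here is a minimal sketch of the standard (Taylor 1993) version of the rule. The 2% equilibrium real rate and the output gap value are exactly the error-prone guesses that note [5] complains about, so treat the numbers as illustrative assumptions rather than measurements:

```python
def taylor_rule(inflation, target_inflation=2.0, real_rate=2.0, output_gap=0.0):
    """Suggested nominal federal funds rate, in percent (Taylor 1993 coefficients)."""
    return (real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# Even with inflation running at 1% and an assumed output gap of zero,
# the rule prescribes a 2.5% funds rate, i.e. that rates ought to be
# rising from near zero:
print(taylor_rule(inflation=1.0))  # 2.5
```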

Scott Sumner’s rants against reasoning from interest rates explain why the Fed ought to be embarrassed to use interest rates to figure out whether Fed policy is loose or tight.

Yet some institutional incentives encourage the Fed to target interest rates rather than predicted inflation. It feels like an appropriate use of high-status labor to set interest rates once every few weeks based on new discussion of expert wisdom. Switching to more or less mechanical responses to routine bond price changes would undercut much of the reason for believing that the Fed’s leaders are doing high-status work.

The news media storytellers would have trouble finding entertaining ways of reporting adjustments that consisted of small hourly responses to bond market changes, whereas decisions made a few times per year are uncommon enough to be genuinely newsworthy. And meetings where hawks struggle against doves fit our instinctive stereotype for important news better than following a rule does. So I see little hope that storytellers will want to abandon their focus on interest rates. Do the Fed governors follow the storytellers closely enough that the storytellers’ attention strongly affects the Fed’s attention? Would we be better off if we could ban the Fed from seeing any source of daily stories?

Do any other interest groups prefer stable interest rates over stable inflation rates? I expect a wide range of preferences among Wall Street firms, but I’m unaware which preferences are dominant there.

Consumers presumably prefer that their banks, credit cards, etc. have predictable interest rates. But I’m skeptical that the Fed feels much pressure to satisfy those preferences.

We need to fight those pressures by laughing at people who claim that the Fed is easing when markets predict below-target inflation (as in the fall of 2008) or that the Fed is tightening when markets predict above-target inflation (e.g. much of 2004).

P.S. – The risk-reward ratio for the stock market today is much worse than normal. I’m not as bearish as I was in October 2008, but I’ve positioned myself much more cautiously than normal.

Notes:

[1] – They appear to produce nearly identical advice under most conditions that the U.S. has experienced recently.

I expect inflation targeting to be modestly safer than NGDP targeting. I may get around to explaining my reasons for that in a separate post.

[2] – The link above gives daily forecasts of the 5 year CPI inflation rate. See here for some longer time periods.

The markets used to calculate these forecasts have enough liquidity that it would be hard for critics to claim that they could be manipulated by entities less powerful than the Fed. I expect some critics to claim that anyway.
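
For readers who wonder how these forecasts are derived: the simplest market-based measure is the breakeven inflation rate, the spread between a nominal Treasury yield and the real (TIPS) yield at the same maturity. The linked forecasts may include further adjustments (e.g. for risk premia), so this is only a sketch of the basic calculation, with made-up yields:

```python
# Hypothetical 5-year yields, for illustration only.
nominal_yield = 0.0165  # 5-year nominal Treasury yield (assumed)
real_yield = 0.0070     # 5-year TIPS yield (assumed)

# Simple approximation: expected inflation is roughly nominal minus real.
breakeven_approx = nominal_yield - real_yield

# Compounding-adjusted version of the same spread.
breakeven = (1 + nominal_yield) / (1 + real_yield) - 1

print(f"approximate breakeven: {breakeven_approx:.2%}")  # 0.95%
print(f"exact breakeven:       {breakeven:.2%}")         # ~0.94%
```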

[3] – I’m accepting the standard assumption that 2% inflation is desirable, in order to keep this post simple. Figuring out the optimal inflation rate is too hard for me to tackle any time soon. A predictable inflation rate is clearly desirable, which creates some benefits to following a standard that many experts agree on.

[4] – provided that you don’t pay much attention to Japan since 1990.

[5] – guesses which are error-prone and, if a more direct way of targeting inflation is feasible, unnecessary. The conflict between the markets’ inflation forecast and the Taylor Rule’s implication that near-zero interest rates would cause inflation to rise suggests that we should doubt those guesses. I’m pretty sure that equilibrium interest rates are lower than the standard assumptions. I don’t know what to believe about the output gap.

Book review: The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century, by Steven Pinker.

Pinker provides great examples of readable writing, and insights about what styles are easy to read.

But the book is more forgettable than Sense of Structure (which covers similar subjects). Sense of Structure is more valuable because it’s more oriented toward training its readers.

Sense of Structure focuses on how to improve mediocre sentences that I might have been tempted to write. Pinker devotes a bit too much attention to making fun of bad sentences that don’t hold my attention because they don’t look similar enough to mediocre sentences which I might write.

The difference in style between the two books is modest, but modest differences matter for tasks such as this, which take a good deal of willpower to master.

My first year of eating no factory farmed vertebrates went fairly well.

When eating at home, it took no extra cost or effort to stick to the diet.

I’ve become less comfortable eating at restaurants, because I find few acceptable choices at most restaurants, and because poor labeling has caused me to mistakenly get food that wasn’t on my diet.

The constraints were strict enough that I lost about 4 pounds during 8 days away from home over the holidays. That may have been healthier than the weight gain I succumbed to during similar travels in prior years, but that weight loss is close to the limit of what I find comfortable.

In theory, my rule allowing 120 calories per month of unethical animal products should have given me enough flexibility to be mostly comfortable with restaurant food. In practice, I found it psychologically easier to adopt an identity of someone who doesn’t eat any factory farmed vertebrates than it would have been to feel comfortable using up the 120 calorie quota. That made me reluctant to use any flexibility.

The quota may have been valuable for avoiding a feeling of failure when I made mistakes.

Berkeley is a relatively easy place to adopt this diet, thanks to Marin Sun Farms and Mission Heirloom. Pasture-raised eggs are fairly easy to find in the Bay Area (Berkeley Bowl, Whole Foods, etc.).

I still have some unresolved doubts about how much to trust labels. Pasture-raised eggs are available in Colorado in winter, but chicken meat is reportedly unavailable due to weather-related limits on keeping chickens outdoors. Why doesn’t that reasoning also apply to eggs?

I’m still looking for a good substitute for Questbars. These come closest:

For most people, following my diet strictly would be hard enough that I recommend starting with an easier version. One option would be to avoid factory farmed chicken/eggs (i.e. focus on avoiding the cruelest choices). And please discriminate against restaurants that don’t label their food informatively.

I plan to continue my diet essentially unchanged, with maybe slightly less worry about what I eat when traveling or at parties.

Connectomes are not sufficient by themselves to model brain behavior. Brain modeling has been limited more by the need for good information about the dynamic behavior of individual neurons.

The paper Whole-brain calcium imaging with cellular resolution in freely behaving Caenorhabditis elegans looks like an important step toward overcoming this limitation. The authors observed the behavior of many individual neurons in a moving nematode.

They still can’t reliably map the neurons they observed to standard C. elegans neuron names:

The neural position validation experiments presented here, however, have led us to conclude that worm-to-worm variability in neuronal position in the head is large enough to pose a formidable challenge for neuron identification.

But there are enough hints about which neurons do what that I’m confident this problem can be solved if enough effort is devoted to it.

My biggest uncertainty concerns applying this approach to mammalian brains. Mammalian brains aren’t transparent enough to be imaged this way. Are C. elegans neurons similar enough to mammalian neurons that we can just apply the same models to both? I suspect not.

Book review: Hive Mind: How your nation’s IQ matters so much more than your own, by Garett Jones.

Hive Mind is a solid and easy-to-read discussion of why high IQ nations are more successful than low IQ nations.

There’s a pretty clear correlation between national IQ and important results such as income. It’s harder to tell how much of the correlation is caused by IQ differences. The Flynn Effect hints that high IQ could instead be a symptom of increased wealth.

The best evidence for IQ causing wealth (more than being caused by wealth) is that Hong Kong and Taiwan had high IQs back in the 1960s, before becoming rich.

Another piece of similar evidence (which Hive Mind doesn’t point to) is that Saudi Arabia is the most conspicuous case of a country that became wealthy via luck. Its IQ is lower than that of countries of comparable wealth, and lower than that of neighbors with similar culture/genes.

Much of the book is devoted to speculations about how IQ could affect a nation’s success.

High IQ is associated with more patience, probably due to better ability to imagine the future:

Imagine two societies: one in which the future feels like a dim shadow, the other in which the future seems as real as now. Which society will have more restaurants that care about repeat customers? Which society will have more politicians who turn down bribes because they worry about eventually getting caught?

Hive Mind describes many possible causes of the Flynn Effect, without expressing much of a preference between them. Flynn’s explanation still seems strongest to me. The most plausible alternative that Hive Mind mentions is anxiety and stress from poverty-related problems distracting people during tests (and possibly also from developing abstract cognitive skills). But anxiety / stress explanations seem less likely to produce the Hong Kong/Taiwan/Saudi Arabia results.

Hive Mind talks about the importance of raising national IQ, especially in less-developed countries. That goal would be feasible if differences in IQ were mainly caused by stress or nutrition. Flynn’s cultural explanation points to causes that are harder for governments or charities to influence (how do you legislate an increased desire to think abstractly?).

What about the genetic differences that contribute to IQ differences? The technology needed to fix that contributing factor to low IQs is not ready today, but looks near enough that we should pay attention. Hive Mind implies [but avoids saying] that potentially large harm from leaving IQ unchanged could outweigh the risks of genetic engineering. Fears about genetic engineering of IQ often involve fears of competition, but Hive Mind shows that higher IQ means more cooperation. More cooperation suggests less war, less risk of dangerous nanotech arms races, etc.

It shouldn’t sound paradoxical to say that aggregate IQ matters more than individual IQ. It should start to seem ordinary if more people follow the example of Hive Mind and focus more attention on group success than on individual success as they relate to IQ.

Book review: The Eureka Factor: Aha Moments, Creative Insight, and the Brain, by John Kounios and Mark Beeman.

This book shows that insight and analysis are different modes of thought, and that small interventions can influence how insightful we are. It’s done in a clearly analytical (not insightful) style.

They devote a good deal of effort to demonstrating that the two modes of thought differ in more ways than simply how people report them. It’s unclear why that would surprise anyone now that behaviorism is unpopular. Nor is it clear what use we can make of evidence that different parts of the brain are involved in the two modes.

I’m mildly impressed that researchers are able to objectively measure insight at all. They mostly study word problems that can be solved in something like 30 seconds. They provide some hints that those experiments study the same patterns of thought that are used to solve big tasks that simmer in our subconscious for days. But there’s some risk that the research is overlooking something unique to those harder problems.

The “creativity crisis” could have been an important part of the book. But their brief explanation is to blame the obvious suspects: environments of constant stimulation due to social media, cellphones, games, etc.

One problem with that explanation is that the decline in creativity scores since 1990 is strongest in kindergartners through 3rd graders. I don’t find it very plausible that they’ve experienced a larger increase in those hyper-stimuli than older kids have.

It’s almost as if the authors got their understanding of the alleged crisis from a blog post rather than from the peer reviewed article that they cite.

The peer reviewed article suggests a better explanation: less time for free play.

Outdoor activity is valuable, according to the book, at least for short-term changes in whether our mood is creative. The “crisis” could be due to less recess time at school and a decline in free-range parenting. Were the tests taken shortly after a recess up through 1990, and taken after hours of lectures more recently? If so, the decline in measured creativity would reflect mostly short-term mood changes, leaving me uncertain whether I should worry about longer lasting effects.

The book provides some advice for being more insightful. It has caused me to schedule tasks that might require creativity after moderate hikes, or earlier in the day than I previously did.

The book has made me more likely to try applying ideas from the CFAR Againstness class to inducing creative moods.

The book hints at lots of room for computer games to promote a more insightful mood than the typical game does (e.g. via requiring players to expand their attention to fill the screen). But the authors aren’t very helpful at suggesting ways to identify games that are more insight-compatible. The closest I’ve come to practical ideas about games is that I ought to replace them when possible with fiction that promotes far-mode thinking (i.e. fantasy and science fiction).

My intuition says that insight research is still in its infancy, and that we should hope for better books in this category before long.

This post is partly a response to arguments for only donating to one charity and to an 80,000 Hours post arguing against diminishing returns. But I’ll focus mostly on AGI-risk charities.

Diversifying Donations?

The rule that I should only donate to one charity is a good presumption to start with. Most objections to it are due to motivations that diverge from pure utilitarian altruism. I don’t pretend that altruism is my only motive for donating, so I’m not too concerned that I only do a rough approximation of following that rule.

Still, I want to follow the rule more closely than most people do. So when I direct less than 90% of my donations to tax-deductible nonprofits, I feel a need to point to diminishing returns [1] to donations to justify that.

With AGI risk organizations, I expect the value of diversity to sometimes override the normal presumption even for purely altruistic utilitarians (with caveats about having the time needed to evaluate multiple organizations, and having more than a few thousand dollars to donate; those caveats will exclude many people from this advice, so this post is mainly oriented toward EAs who are earning to give or wealthier people).

Diminishing Returns?

Before explaining that, I’ll reply to the 80,000 Hours post about diminishing returns.

The 80,000 Hours post focuses on charities that mostly market causes to a wide audience. The economies of scale associated with brand recognition and social proof seem more plausible than any economies of scale available to research organizations.

The shortage of existential risk research seems more dangerous than any shortage of charities which are devoted to marketing causes, so I’m focusing on the most important existential risk.

I expect diminishing returns to be common after an organization grows beyond two or three people. One reason is that the founders of most organizations exert more influence than subsequent employees over important policy decisions [2], so at productive organizations founders are more valuable.

For research organizations that need the smartest people, the limited number of such people implies that only small organizations can have a large fraction of employees be highly qualified.

I expect donations to very young organizations to be more valuable than other donations (which implies diminishing returns to size on average):

  • It takes time to produce evidence that the organization is accomplishing something valuable, and donors quite sensibly prefer organizations that have provided such evidence.
  • Even when donors try to compensate for that by evaluating the charity’s mission statement or leader’s competence, it takes some time to adequately communicate those features (e.g. it’s rare for a charity to set up an impressive web site on day one).
  • It’s common for a charity to have suboptimal competence at fundraising until it grows large enough to hire someone with fundraising expertise.
  • Some charities are mainly funded by a few grants in the millions of dollars, and I’ve heard reports that those often take many months between being awarded and reaching the charities’ bank accounts (not to mention delays in awarding the grants). This sometimes means months when a charity has trouble hiring anyone who demands an immediate salary.
  • Donors could in principle overcome these causes of bias, but as far as I can tell, few care about doing so. EAs come a little closer to doing this than others, but my observations suggest that EAs are almost as lazy about analyzing new charities as non-EAs.
  • Therefore, I expect young charities to be underfunded.

Why AGI risk research needs diversity

I see more danger of researchers pursuing useless approaches with existential risks in general, and AGI risks in particular (due partly to the inherent lack of feedback), than with other causes.

The most obvious way to reduce that danger is to encourage a wide variety of people and organizations to independently research risk mitigation strategies.

I worry about AGI-risk researchers focusing all their effort on a class of scenarios which rely on a false assumption.

The AI foom debate seems superficially like the main area where a false assumption might cause AGI research to end up mostly wasted. But there are enough influential people on both sides of this issue that I expect research to not ignore one side of that debate for long.

I worry more about assumptions that no prominent people question.

I’ll describe how such an assumption might look in hindsight via an analogy to some leading developers of software intended to accomplish what the web ended up accomplishing [3].

Xanadu stood out as the leading developer of global hypertext software in the 1980s to about the same extent that MIRI stands out as the leading AGI-risk research organization. One reason [4] that Xanadu accomplished little was the assumption that they needed to make money. Part of why that seemed obvious in the 1980s was that there were no ISPs delivering an internet-like platform to ordinary people, and hardware costs were a big obstacle to anyone who wanted to provide that functionality. The hardware costs declined at a predictable enough rate that Drexler was able to predict in Engines of Creation (published in 1986) that ordinary people would get web-like functionality within a decade.

A more disturbing reason for assuming that web functionality needed to make a profit was the ideology surrounding private property. People who opposed private ownership of homes, farms, factories, etc. were causing major problems. Most of us automatically treated ownership of software as working the same way as physical property.

People who are too young to remember attitudes toward free / open source software before about 1997 will have some trouble believing how reluctant people were to imagine valuable software being free. [5] Attitudes changed unusually fast due to the demise of communism and the availability of affordable internet access.

A few people (such as RMS) overcame the focus on cold war issues, but were too eccentric to convert many followers. We should pay attention to people with similarly eccentric AGI-risk views.

If I had to guess what faulty assumption AGI-risk researchers are making, I’d say something like faulty guesses about the nature of intelligence or the architecture of feasible AGIs. But the assumptions that look suspicious to me are ones that some moderately prominent people have questioned.

Vague intuitions along these lines have led me to delay some of my potential existential-risk donations in hopes that I’ll discover (or help create?) some newly created existential-risk projects which produce more value per dollar.

Conclusions

How does this affect my current giving pattern?

My favorite charity is CFAR (around 75 or 80% of my donations), which improves the effectiveness of people who might start new AGI-risk organizations or AGI-development organizations. I’ve had varied impressions about whether additional donations to CFAR have had diminishing returns. They seem to have been getting just barely enough money to hire employees they consider important.

FLI is a decent example of a possibly valuable organization that CFAR played some hard-to-quantify role in starting. It bears a superficial resemblance to an optimal incubator for additional AGI-risk research groups. But FLI seems too focused on mainstream researchers to have much hope of finding the eccentric ideas that I’m most concerned about AGI-researchers overlooking.

Ideally I’d be donating to one or two new AGI-risk startups per year. Conditions seem almost right for this. New AGI-risk organizations are being created at a good rate, mostly getting a few large grants that are probably encouraging them to focus on relatively mainstream views [6].

CSER and FLI sort of fit this category briefly last year before getting large grants, and I donated moderate amounts to them. I presume I didn’t give enough to them for diminishing returns to be important, but their windows of unusual need were short enough that I might well have come close to that.

I’m a little surprised that the increasing interest in this area doesn’t seem to be catalyzing the formation of more low-budget groups pursuing more unusual strategies. Please let me know of any that I’m overlooking.

See my favorite charities web page (recently updated) for more thoughts about specific charities.

[1] – Diminishing returns are the main way that donating to multiple charities at one time can be reconciled with utilitarian altruism.

[2] – I don’t know whether it ought to work this way, but I expect this pattern to continue.

[3] – they intended to accomplish a much more ambitious set of goals.

[4] – probably not the main reason.

[5] – presumably the people who were sympathetic to communism weren’t attracted to small software projects (too busy with politics?) or rejected working on software due to the expectation that it required working for evil capitalists.

[6] – The short-term effects are probably good, increasing the diversity of approaches compared to what would be the case if MIRI were the only AGI-risk organization, and reducing the risk that AGI researchers would become polarized into tribes that disagree about whether AGI is dangerous. But a field dominated by a few funders tends to focus on fewer ideas than one with many funders.

I’d like to see more discussion of uploaded ape risks.

There is substantial disagreement over how fast an uploaded mind (em) would improve its abilities or the abilities of its progeny. I’d like to start by analyzing a scenario where it takes between one and ten years for an uploaded bonobo to achieve human-level cognitive abilities. This scenario seems plausible, although I’ve selected it more to illustrate a risk that can be mitigated than because of arguments about how likely it is.

I claim we should anticipate at least a 20% chance that a human-level bonobo-derived em would improve at least as quickly as a human that uploaded later.

Considerations that weigh in favor of this: bonobo minds seem to be about as general-purpose as human minds, including near-human language ability; and the ease with which ems could interface with other software will likely enable them to learn new skills faster than biological minds can.

The most concrete evidence that weighs against this is the modest correlation between IQ and brain size. It’s somewhat plausible that it’s hard to usefully add many neurons to an existing mind, and that bonobo brain size represents an important cognitive constraint.

I’m not happy about analyzing what happens when another species develops more powerful cognitive abilities than humans, so I’d prefer to have some humans upload before the bonobos become superhuman.

A few people worry that uploading a mouse brain will generate enough understanding of intelligence to quickly produce human-level AGI. I doubt that biological intelligence is simple / intelligible enough for that to work. So I focus more on small tweaks: the kind of social pressures which caused the Flynn Effect in humans, selective breeding (in the sense of making many copies of the smartest ems, with small changes to some copies), and faster software/hardware.

The risks seem dependent on the environment in which the ems live and on the incentives that might drive their owners to improve em abilities. The most obvious motives for uploading bonobos (research into problems affecting humans, and into human uploading) create only weak incentives to improve the ems. But there are many other possibilities: military use, interesting NPCs, or financial companies looking for interesting patterns in large databases. No single one of those looks especially likely, but with many ways for things to go wrong, the risks add up.

What could cause a long window between bonobo uploading and human uploading? Ethical and legal barriers to human uploading, motivated by risks to the humans being uploaded and by concerns about human ems driving human wages down.

What could we do about this risk?

Political activism may mitigate the risks of hostility to human uploading, but if done carelessly it could create a backlash which worsens the problem.

Conceivably safety regulations could restrict em ownership/use to people with little incentive to improve the ems, but rules that looked promising would still leave me worried about risks such as irresponsible people hacking into computers that run ems and stealing copies.

A more sophisticated approach is to improve the incentives to upload humans. I expect the timing of the first human uploads to be fairly sensitive to whether we have legal rules which enable us to predict who will own em labor. But just writing clear rules isn’t enough – how can we ensure political support for them at a time when we should expect disputes over whether ems are people?

We could also find ways to delay ape uploading. But most ways of doing that would also delay human uploading, which creates tradeoffs that I’m not too happy with (partly due to my desire to upload before aging damages me too much).

If a delay between bonobo and human uploading is dangerous, then we should also ask about dangers from other uploaded species. My intuition says the risks are much lower, since it seems like there are few technical obstacles to uploading a bonobo brain shortly after uploading mice or other small vertebrates.

But I get the impression that many people associated with MIRI worry about risks of uploaded mice, and I don’t have strong evidence that I’m wiser than they are. I encourage people to develop better analyses of this issue.

Book review: The Myth of Mirror Neurons: The Real Neuroscience of Communication and Cognition, by Gregory Hickok.

This book criticizes hype from scientists and the media about embodied cognition, mirror neurons, and the differences between the left and right brain hemispheres. Popular accounts of these ideas contain a little bit of truth, but most versions either explain very little or provide misleading explanations.

A good deal of our cognition is embodied in the sense that it’s heavily dependent on sensory and motor activity. But we have many high-level thoughts that don’t fit this model well, such as those we generate when we don’t have sensory or motor interactions that are worth our attention (often misleadingly called a “resting state”).

Humans probably have mirror neurons. They have some value in helping us imitate others. But that doesn’t mean they have much effect on our ability to understand what we’re imitating. Our ability to understand a dog wagging its tail isn’t impaired by our inability to wag our tails. Parrots’ ability to imitate our speech isn’t very effective at helping them understand it.

Mirror neurons have also been used to promote the “broken mirror theory” of autism (with the suggestion that a malfunction related to mirror neurons impairs empathy). Hickok shows that the intense world hypothesis (which I’ve blogged about before) is more consistent with the available evidence.

The book clarified my understanding of the brain a bit. But most of it seems unimportant. I had sort of accepted mild versions of the mirror neuron and left-brain/right-brain hype, but doing so didn’t have any obvious effects on my other beliefs or my actions. It was only at the book’s end (discussing autism) that I could see how the hype might matter.

Most of the ideas that he criticizes don’t do much harm, because they wouldn’t pay much rent if true. Identifying which neurons do what has negligible effect on how I model a person’s mind unless I’m doing something unusual like brain surgery.

One small part of the recent (June 2015) CFAR workshop caused a significant improvement in how I interact with people: I’ve become more spontaneous.

For several years I’ve suspected that I ought to learn how to do improv-style exercises, but standard improv classes felt ineffective. I’ve since figured out that their implied obligation for me to come up with something to say caused some sort of negative association with attempts at spontaneity when I failed to think of anything to say. That negative reaction was a large obstacle to learning new habits.

Deeply ingrained habits seem to cause some part of my subconscious mind that searches for ideas or generates words to decide that it can’t come up with anything worthy of conscious attention. That leaves me in a state that I roughly describe as a blank mind (i.e. either no verbal content at the conscious level, or I generate not-very-useful meta-thoughts reacting to the lack of appropriate words).

Since I much more frequently regret failing to say something than I regret mistakenly saying something hastily that I should have known not to say, it seems like I’ve got one or more subconscious filters that have consistently erred in being too cautious about generating speech. I tried introspecting for ways to simply tell those filters to be less cautious, but I accomplished nothing that way.

I also tried paying attention to signs that I’d filtered something out (pauses in my flow of words seem to be reliable indicators) in hopes that I could sometimes identify the discarded thoughts. I hoped to reward myself for noticing the ideas as the filter started to discard them, and train the filter to learn that I value conscious access to those ideas. Yet I never seem to detect those ideas, so that strategy failed.

What finally worked was that I practiced informal versions of improv exercises in which I rewarded myself [*] for saying silly things (alone or in a practice session with Robert) without putting myself in a situation where I felt an immediate obligation to say anything unusual.

In a few weeks I could tell that I was more confident in social contexts and more able to come up with things to say.

I feel less introverted, in the sense that a given amount of conversation tires me less than it used to. Blogging also seems to require a bit less energy.

I feel somewhat less anxiety (and relatedly, less distraction from background noise), maybe due to my increased social confidence.

I may have become slightly more creative in a variety of contexts.

I hypothesize that the filtering module was rather attached to a feeling of identity along the lines of “Peter is a person who is cautious about what he says” long after the consciously accessible parts of my mind decided I should weaken that identity. Actually trying out a different identity was more important to altering some beliefs that were deeply buried in my subconscious than was conscious choice about what to believe.

I wonder what other subconscious attachments to an identity are constraining me?

Something still seems missing from my social interactions: I still tend to feel passive and become just a spectator. That seems like a promising candidate for an area where I ought to alter some subconscious beliefs. But I find it harder to focus on a comfortable vision for an alternative identity: aiming to be a leader in a group conversation feels uncomfortable in a way that aiming to be spontaneous/creative never felt.

Thanks to John Salvatier and Anna Salamon for the advice that helped me accomplish this.

[*] – I only know how to do very weak self-rewards (telling myself to be happy), but that was all I needed.