I’d like to see more discussion of uploaded ape risks.

There is substantial disagreement over how fast an uploaded mind (em) would improve its abilities or the abilities of its progeny. I’d like to start by analyzing a scenario where it takes between one and ten years for an uploaded bonobo to achieve human-level cognitive abilities. This scenario seems plausible, although I’ve selected it more to illustrate a risk that can be mitigated than because of arguments about how likely it is.

I claim we should anticipate at least a 20% chance that a human-level bonobo-derived em would improve at least as quickly as a human who uploaded later.

Considerations that weigh in favor of this: bonobo minds seem to be about as general-purpose as human minds, including near-human language ability; and the ease with which ems can interface with other software will likely enable them to learn new skills faster than biological minds do.

The most concrete evidence that weighs against this is the modest correlation between IQ and brain size. It’s somewhat plausible that it’s hard to usefully add many neurons to an existing mind, and that bonobo brain size represents an important cognitive constraint.

I’m not happy about analyzing what happens when another species develops more powerful cognitive abilities than humans, so I’d prefer to have some humans upload before the bonobos become superhuman.

A few people worry that uploading a mouse brain will generate enough understanding of intelligence to quickly produce human-level AGI. I doubt that biological intelligence is simple / intelligible enough for that to work. So I focus more on small tweaks: the kind of social pressures which caused the Flynn Effect in humans, selective breeding (in the sense of making many copies of the smartest ems, with small changes to some copies), and faster software/hardware.

The risks seem dependent on the environment in which the ems live and on the incentives that might drive their owners to improve em abilities. The most obvious motives for uploading bonobos (research into problems affecting humans, and into human uploading) create only weak incentives to improve the ems. But there are many other possibilities: military use, interesting NPCs, or financial companies looking for interesting patterns in large databases. No single one of those looks especially likely, but with many ways for things to go wrong, the risks add up.

What could cause a long window between bonobo uploading and human uploading? Ethical and legal barriers to human uploading, motivated by risks to the humans being uploaded and by concerns about human ems driving human wages down.

What could we do about this risk?

Political activism may mitigate the risks of hostility to human uploading, but if done carelessly it could create a backlash which worsens the problem.

Conceivably safety regulations could restrict em ownership/use to people with little incentive to improve the ems, but rules that looked promising would still leave me worried about risks such as irresponsible people hacking into computers that run ems and stealing copies.

A more sophisticated approach is to improve the incentives to upload humans. I expect the timing of the first human uploads to be fairly sensitive to whether we have legal rules which enable us to predict who will own em labor. But just writing clear rules isn’t enough – how can we ensure political support for them at a time when we should expect disputes over whether they’re people?

We could also find ways to delay ape uploading. But most ways of doing that would also delay human uploading, which creates tradeoffs that I’m not too happy with (partly due to my desire to upload before aging damages me too much).

If a delay between bonobo and human uploading is dangerous, then we should also ask about dangers from other uploaded species. My intuition says the risks are much lower, since it seems like there are few technical obstacles to uploading a bonobo brain shortly after uploading mice or other small vertebrates.

But I get the impression that many people associated with MIRI worry about risks of uploaded mice, and I don’t have strong evidence that I’m wiser than they are. I encourage people to develop better analyses of this issue.

Book review: The Myth of Mirror Neurons: The Real Neuroscience of Communication and Cognition, by Gregory Hickok.

This book criticizes hype from scientists and the media about embodied cognition, mirror neurons, and the differences between the left and right brain hemispheres. Popular accounts of these ideas contain a little bit of truth, but most versions either explain very little or provide misleading explanations.

A good deal of our cognition is embodied in the sense that it’s heavily dependent on sensory and motor activity. But we have many high-level thoughts that don’t fit this model well, such as those we generate when we don’t have sensory or motor interactions that are worth our attention (often misleadingly called a “resting state”).

Humans probably have mirror neurons. They have some value in helping us imitate others. But that doesn’t mean they have much effect on our ability to understand what we’re imitating. Our ability to understand a dog wagging its tail isn’t impaired by our inability to wag our tails. Parrots’ ability to imitate our speech isn’t very effective at helping them understand it.

Mirror neurons have also been used to promote the “broken mirror theory” of autism (with the suggestion that a malfunction related to mirror neurons impairs empathy). Hickok shows that the intense world hypothesis (which I’ve blogged about before) is more consistent with the available evidence.

The book clarified my understanding of the brain a bit. But most of it seems unimportant. I had sort of accepted mild versions of the mirror neuron and left-brain, right-brain hype, but doing so didn’t have any obvious effects on my other beliefs or my actions. It was only at the book’s end (discussing autism) that I could see how the hype might matter.

Most of the ideas that he criticizes don’t do much harm, because they wouldn’t pay much rent if true. Identifying which neurons do what has negligible effect on how I model a person’s mind unless I’m doing something unusual like brain surgery.

One small part of the recent (June 2015) CFAR workshop caused a significant improvement in how I interact with people: I’ve become more spontaneous.

For several years I’ve suspected that I ought to learn how to do improv-style exercises, but standard improv classes felt ineffective. I’ve since figured out that their implied obligation for me to come up with something to say caused some sort of negative association with attempts at spontaneity when I failed to think of anything to say. That negative reaction was a large obstacle to learning new habits.

Deeply ingrained habits seem to cause some part of my subconscious mind that searches for ideas or generates words to decide that it can’t come up with anything worthy of conscious attention. That leaves me in a state that I roughly describe as a blank mind (i.e. either no verbal content at the conscious level, or I generate not-very-useful meta-thoughts reacting to the lack of appropriate words).

Since I much more frequently regret failing to say something than I regret hastily saying something I should have known not to say, it seems like I’ve got one or more subconscious filters that have consistently erred on the side of being too cautious about generating speech. I tried introspecting for ways to simply tell that filter to be less cautious, but I accomplished nothing that way.

I also tried paying attention to signs that I’d filtered something out (pauses in my flow of words seem to be reliable indicators) in hopes that I could sometimes identify the discarded thoughts. I hoped to reward myself for noticing the ideas as the filter started to discard them, and train the filter to learn that I value conscious access to those ideas. Yet I never seem to detect those ideas, so that strategy failed.

What finally worked was that I practiced informal versions of improv exercises in which I rewarded myself [*] for saying silly things (alone or in a practice session with Robert) without putting myself in a situation where I felt an immediate obligation to say anything unusual.

In a few weeks I could tell that I was more confident in social contexts and more able to come up with things to say.

I feel less introverted, in the sense that a given amount of conversation tires me less than it used to. Blogging also seems to require a bit less energy.

I feel somewhat less anxiety (and relatedly, less distraction from background noise), maybe due to my increased social confidence.

I may have become slightly more creative in a variety of contexts.

I hypothesize that the filtering module was rather attached to a feeling of identity along the lines of “Peter is a person who is cautious about what he says” long after the consciously accessible parts of my mind decided I should weaken that identity. Actually trying out a different identity was more important to altering some beliefs that were deeply buried in my subconscious than was conscious choice about what to believe.

I wonder what other subconscious attachments to an identity are constraining me?

Something still seems missing from my social interactions: I still tend to feel passive and become just a spectator. That seems like a promising candidate for an area where I ought to alter some subconscious beliefs. But I find it harder to focus on a comfortable vision for an alternative identity: aiming to be a leader in a group conversation feels uncomfortable in a way that aiming to be spontaneous/creative never felt.

Thanks to John Salvatier and Anna Salamon for the advice that helped me accomplish this.

[*] – I only know how to do very weak self-rewards (telling myself to be happy), but that was all I needed.

I was quite surprised by a paper (The Surprising Alpha From Malkiel’s Monkey and Upside-Down Strategies [PDF] by Robert D. Arnott, Jason Hsu, Vitali Kalesnik, and Phil Tindall) about “inverted” or upside-down[*] versions of some good-looking strategies for better-than-market-cap weighting of index funds.

They show that the inverses of low-volatility and fundamental weighting strategies do about as well as, or outperform, the original strategies. Low-volatility index funds still have better Sharpe ratios (risk-adjusted returns) than their inverses.

Their explanation is that most deviations from weighting by market capitalization will benefit from the size effect (small caps outperform large caps), and will also have some tendency to benefit from value effects. Weighting by market capitalization causes an index to have lots of Exxon and Apple stock. Fundamental weighting replaces some of that Apple stock with small companies. Weighting by anything that has little connection to company size (such as volatility) reduces the Exxon and Apple holdings by more than an order of magnitude. Both of those shifts exploit the benefits of investing in small-cap stocks.

Fundamental weighting outperforms most strategies. But inverting those weights adds slightly more than 1% per year to those already good returns. The only way that makes sense to me is if an inverse of market-cap weighting would also outperform fundamental weighting, by investing mostly in the smallest stocks.

They also show you can beat market-capitalization weighted indices by choosing stocks at random (i.e. simulating monkeys throwing darts at the list of companies). This highlights the perversity of weighting by market-caps, as the monkeys can’t beat the simple alternative of investing equal dollar amounts in each company.
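
To make the monkey comparison concrete, here is a minimal sketch of how such a test could be run. This is my own illustration, not the paper’s code: the returns and market caps below are synthetic and contain no built-in size or value premium, so the sketch only shows the mechanics of comparing dart-throwing portfolios against cap weighting and equal weighting, not the paper’s result.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic universe: 1000 stocks, one year of monthly returns.
    # Purely illustrative numbers, not data from the paper.
    n_stocks, n_months = 1000, 12
    returns = rng.normal(0.01, 0.08, size=(n_stocks, n_months))
    market_caps = rng.lognormal(mean=10, sigma=2, size=n_stocks)

    def portfolio_return(weights, returns):
        """Buy-and-hold return of a weighted portfolio over the whole period."""
        growth = np.prod(1 + returns, axis=1)  # per-stock cumulative growth
        return weights @ growth - 1

    # Market-capitalization weighting.
    cap_weights = market_caps / market_caps.sum()
    cap_ret = portfolio_return(cap_weights, returns)

    # Equal dollar amounts in each company.
    equal_ret = portfolio_return(np.full(n_stocks, 1 / n_stocks), returns)

    # "Monkeys": many portfolios of 30 randomly picked stocks, equally weighted.
    monkey_rets = []
    for _ in range(1000):
        picks = rng.choice(n_stocks, size=30, replace=False)
        w = np.zeros(n_stocks)
        w[picks] = 1 / 30
        monkey_rets.append(portfolio_return(w, returns))

    print(f"cap-weighted:   {cap_ret:.2%}")
    print(f"equal-weighted: {equal_ret:.2%}")
    print(f"median monkey:  {np.median(monkey_rets):.2%}")

With real data, the size effect is what would give the monkeys and the equal-weight portfolio their edge over the cap-weighted index; with these synthetic returns all three should come out roughly the same.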

This increases my respect for the size effect. I’ve reduced my respect for the benefits of low-volatility investments, although the reduced risk they produce is still worth something. That hasn’t much changed my advice for investing in existing ETFs, but it does alter what I hope for in ETFs that will become available in the future.

[*] – They examine two different inverses:

  1. Taking the reciprocal of each stock’s original weight
  2. Subtracting each stock’s original weight from the maximum original weight

In each case the resulting weights are then normalized to sum to 1; a rough sketch of both schemes follows.
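
As a sketch of what those two inversions amount to (my own illustration, not code from the paper), applied to a hypothetical vector of original strategy weights:

    import numpy as np

    def invert_reciprocal(weights):
        """Inverse #1: weight each stock by the reciprocal of its original weight."""
        inv = 1.0 / weights
        return inv / inv.sum()  # renormalize to sum to 1

    def invert_max_minus(weights):
        """Inverse #2: weight each stock by (max original weight - its own weight)."""
        inv = weights.max() - weights
        return inv / inv.sum()  # renormalize to sum to 1

    # Hypothetical original weights from some strategy.
    w = np.array([0.40, 0.30, 0.20, 0.10])
    print(invert_reciprocal(w))  # roughly [0.12, 0.16, 0.24, 0.48]: smallest original weights dominate
    print(invert_max_minus(w))   # roughly [0.0, 0.17, 0.33, 0.5]: the largest-weight stock drops to zero

Both schemes tilt the portfolio heavily toward whatever the original strategy weighted least, which in practice often means smaller companies; the second scheme goes as far as dropping the largest-weight holding entirely.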

Book review: War! What Is It Good For?: Conflict and the Progress of Civilization from Primates to Robots, by Ian Morris.

This book’s main argument can be broken down into two ideas:

  1. War creates powerful leviathans and occasionally globocops.
  2. The resulting monopoly on the use of violence is important for (or necessary to) creating low-violence societies.

(2) overlaps a lot with Pinker’s The Better Angels of Our Nature. Pinker’s version is sufficiently better that reading Morris’ version adds little value.

(1) is an old idea (“war is the health of the state”) that seems mildly controversial in its stronger versions. But Morris is relatively cautious here, admitting that many wars were destructive.

He goes around labeling many wars as productive or not, in a way that had me wondering whether he thought that was observable while the wars were in progress. When he got to World War II, it became clear that he considered that at least sometimes impossible: World War I initially looked harmful (ruining Britain’s globocop status), but when seen in combination with World War II he is able to classify it as productive (enabling the US to become a globocop).

Morris sometimes hints at a stronger version of (1) that would say leviathans or equivalent civilizing institutions couldn’t have been created without war. Morris never attempts to make much of an argument for such a strong claim. He does provide some arguments for the hypothesis that wars sped up the creation of peace-keeping leviathans. Whether that makes some wars good depends heavily on what would have happened without those wars, and Morris provides little insight about that.

If Morris were interested in testing his claims, wouldn’t he have discussed Switzerland? Swiss involvement in war over the past 200 years seems to consist of just a civil war in November 1847 with fewer than 100 deaths. Morris’ beliefs seem to imply Switzerland has lots of violence, yet Swiss homicide rates are unusually low (lower than the rest of western Europe). Maybe responding sensibly to the threat of war provides the benefits that Morris talks about, with few of the costs?

Many of the book’s claims seem reasonable: wars did have some tendency to create stronger leviathans, and those leviathans did have some peace-keeping benefits. Yet those claims don’t come close to demonstrating the existence of “productive war”.

Book review: Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy.

This strange book has some entertainment value, and might even enlighten you a bit about the risks of AI. It presents many ideas, with occasional attempts to distinguish the important ones from the jokes.

I had hoped for an analysis that reflected a strong understanding of which software approaches were most likely to work. Yampolskiy knows something about computer science, but doesn’t strike me as someone with experience at writing useful code. His claim that “to increase their speed [AIs] will attempt to minimize the size of their source code” sounds like a misconception that wouldn’t occur to an experienced programmer. And his chapter “How to Prove You Invented Superintelligence So No One Else Can Steal It” seems like a cute game that someone might play with if he cared more about passing a theoretical computer science class than about, say, making money on the stock market, or making sure the superintelligence didn’t destroy the world.

I’m still puzzling over some of his novel suggestions for reducing AI risks. How would “convincing robots to worship humans as gods” differ from the proposed Friendly AI? Would such robots notice (and resolve in possibly undesirable ways) contradictions in their models of human nature?

Other suggestions are easy to reject, such as hoping AIs will need us for our psychokinetic abilities (abilities that Yampolskiy says are shown by peer-reviewed experiments associated with the Global Consciousness Project).

The style is also weird. Some chapters were previously published as separate papers, and weren’t adapted to fit together. It was annoying to occasionally see sentences that seemed identical to ones in a prior chapter.

The author even has strange ideas about what needs footnoting. E.g. when discussing the physical limits to intelligence, he cites (Einstein 1905).

Only read this if you’ve read other authors on this subject first.

Book review: Foragers, Farmers, and Fossil Fuels: How Human Values Evolve, by Ian Morris.

This book gives the impression that Morris had a halfway decent book in mind, but forgot to write down important parts of it.

He devotes large (possibly excessive) parts of the book to describing worldwide changes in what people value that correlate with the shifts to farming and then industry.

He convinces me that there’s some sort of connection between those values and how much energy per capita each society is able to use. He probably has a clue or two what that connection is, but the book failed to enlighten me about the connection.

He repeatedly claims that each age gets the thought that it needs. I find that about as reasonable as claiming that the widespread malnutrition associated with farming was what farming cultures needed. Indeed, his description of how farming caused gender inequality focuses on increased ability of men to inflict pain on women, and on increased incentives to do so. That sounds like a society made worse off, not getting what it needs.

He mentions (almost as an afterthought) some moderately interesting models of what caused specific changes in values as a result of the agricultural revolution.

He does an ok job of explaining the increased support for hierarchy in farming societies as an effect of the community size increasing past the Dunbar Number.

He attributes the reduced support for hierarchy in the industrial world to a need for interchangeable citizens. But he doesn’t document that increased need for interchangeability, and I’m skeptical that any such effect was strong. See The Institutional Revolution for a well thought out alternative model.

I had hoped to find some ideas about how to predict value changes that will result from the next big revolution. But I can’t figure out how to usefully apply his ideas to novel situations.

See also Robin Hanson’s review.

I use Beeminder occasionally. The site’s emails normally suffice to bug me into accomplishing whatever I’ve committed to doing. But I only use it for a few tasks for which my motivation is marginal. Most of the times that I consider using Beeminder, I either figure out how to motivate myself properly, or (more often) decide that my goal isn’t important.

The real value of Beeminder is that if I want to compel future-me to do something, I can’t give up by using the excuse that future-me is lazy or unreliable. Instead, I find myself wondering why I’m unwilling to risk $X to make myself likely to complete the task. That typically causes me to notice legitimate doubts about how highly I value the result.

Book review: The Sense of Structure: Writing from the Reader’s Perspective, by George D. Gopen.

The most important goal of this book is to teach writers how to analyze and influence which words in a sentence (or which sentences in a paragraph) readers will treat as most important.

Most of the advice is specific to writing. The kind of confusion the book helps with matters much less in speech, where tone (to show emphasis) and pauses carry much of the burden.

A secondary goal of the book is to explain how to organize sentences to minimize the reader’s need to hold information in working memory. For example, putting lots of words before the main subject and verb as this sentence does (unless you really want to slow the reader down, such as when telling someone they’re fired) is something he teaches us to avoid.

I found the explanations fairly clear and moderately surprising. Learning from them depends very heavily on repeated practice at rearranging words within sentences and evaluating how the changes affect readers’ reactions.

That practice feels like it requires lots of willpower. With decisions in some other contexts (e.g. what to eat or where to hike) I can comfortably hold several options in my short-term memory. But when I translate vague thoughts into words, I feel strongly anchored to whatever version I come up with first. And I often find it hard to decide what parts of a sentence I want to emphasize. But I’ve grown sufficiently dissatisfied with my writing style that I plan to pay enough attention while writing that I’ll learn to improve on my initial version.

Please give me feedback in a few months about whether my writing has become easier to read.