Science and Technology

I’d like to see more discussion of uploaded ape risks.

There is substantial disagreement over how fast an uploaded mind (em) would improve its abilities or the abilities of its progeny. I’d like to start by analyzing a scenario where it takes between one and ten years for an uploaded bonobo to achieve human-level cognitive abilities. This scenario seems plausible, although I’ve selected it more to illustrate a risk that can be mitigated than because of arguments about how likely it is.

I claim we should anticipate at least a 20% chance a human-level bonobo-derived em would improve at least as quickly as a human that uploaded later.

Considerations that weigh in favor of this are: that bonobo minds seem to be about as general-purpose as human minds, including near-human language ability; and that the likely ease of interfacing ems with other software will enable them to learn new skills faster than biological minds can.

The most concrete evidence that weighs against this is the modest correlation between IQ and brain size. It’s somewhat plausible that it’s hard to usefully add many neurons to an existing mind, and that bonobo brain size represents an important cognitive constraint.

I’m not happy about analyzing what happens when another species develops more powerful cognitive abilities than humans, so I’d prefer to have some humans upload before the bonobos become superhuman.

A few people worry that uploading a mouse brain will generate enough understanding of intelligence to quickly produce human-level AGI. I doubt that biological intelligence is simple / intelligible enough for that to work. So I focus more on small tweaks: the kind of social pressures which caused the Flynn Effect in humans, selective breeding (in the sense of making many copies of the smartest ems, with small changes to some copies), and faster software/hardware.

The risks seem dependent on the environment in which the ems live and on the incentives that might drive their owners to improve em abilities. The most obvious motives for uploading bonobos (research into problems affecting humans, and into human uploading) create only weak incentives to improve the ems. But there are many other possibilities: military use, interesting NPCs, or financial companies looking for interesting patterns in large databases. No single one of those looks especially likely, but with many ways for things to go wrong, the risks add up.
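The point that many individually unlikely paths can add up to a substantial overall risk can be made concrete with a quick sketch. The per-scenario probabilities below are invented purely for illustration (they are not estimates from the text), and the independence assumption is itself questionable:

```python
# Illustrative only: hypothetical probabilities (made up for this
# sketch) that each motive gives some owner a strong incentive to
# improve bonobo ems.
scenario_risks = {
    "military use": 0.05,
    "game NPCs": 0.05,
    "financial pattern-mining": 0.05,
    "other motives": 0.10,
}

# If the scenarios are roughly independent, the chance that at least
# one materializes is 1 minus the product of each one not happening.
p_none = 1.0
for p in scenario_risks.values():
    p_none *= 1.0 - p

p_at_least_one = 1.0 - p_none
print(f"combined risk: {p_at_least_one:.2f}")  # prints "combined risk: 0.23"
```

Even with each path at 10% or below, the combined chance that at least one materializes comes out around 23% under these made-up numbers, which is the sense in which the risks add up.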

What could cause a long window between bonobo uploading and human uploading? Ethical and legal barriers to human uploading, motivated by risks to the humans being uploaded and by concerns about human ems driving human wages down.

What could we do about this risk?

Political activism may mitigate the risks of hostility to human uploading, but if done carelessly it could create a backlash which worsens the problem.

Conceivably safety regulations could restrict em ownership/use to people with little incentive to improve the ems, but rules that looked promising would still leave me worried about risks such as irresponsible people hacking into computers that run ems and stealing copies.

A more sophisticated approach is to improve the incentives to upload humans. I expect the timing of the first human uploads to be fairly sensitive to whether we have legal rules which enable us to predict who will own em labor. But just writing clear rules isn’t enough – how can we ensure political support for them at a time when we should expect disputes over whether they’re people?

We could also find ways to delay ape uploading. But most ways of doing that would also delay human uploading, which creates tradeoffs that I’m not too happy with (partly due to my desire to upload before aging damages me too much).

If a delay between bonobo and human uploading is dangerous, then we should also ask about dangers from other uploaded species. My intuition says the risks are much lower, since it seems like there are few technical obstacles to uploading a bonobo brain shortly after uploading mice or other small vertebrates.

But I get the impression that many people associated with MIRI worry about risks of uploaded mice, and I don’t have strong evidence that I’m wiser than they are. I encourage people to develop better analyses of this issue.

Book review: The Myth of Mirror Neurons: The Real Neuroscience of Communication and Cognition, by Gregory Hickok.

This book criticizes hype from scientists and the media about embodied cognition, mirror neurons, and the differences between the left and right brain hemispheres. Popular accounts of these ideas contain a little bit of truth, but most versions either explain very little or provide misleading explanations.

A good deal of our cognition is embodied in the sense that it’s heavily dependent on sensory and motor activity. But we have many high-level thoughts that don’t fit this model well, such as those we generate when we don’t have sensory or motor interactions that are worth our attention (often misleadingly called a “resting state”).

Humans probably have mirror neurons. They have some value in helping us imitate others. But that doesn’t mean they have much effect on our ability to understand what we’re imitating. Our ability to understand a dog wagging its tail isn’t impaired by our inability to wag our tails. Parrots’ ability to imitate our speech isn’t very effective at helping them understand it.

Mirror neurons have also been used to promote the “broken mirror theory” of autism (with the suggestion that a malfunction related to mirror neurons impairs empathy). Hickok shows that the intense world hypothesis (which I’ve blogged about before) is more consistent with the available evidence.

The book clarified my understanding of the brain a bit. But most of it seems unimportant. I had sort of accepted mild versions of the mirror neuron and left-brain/right-brain hype, but doing so didn’t have any obvious effects on my other beliefs or my actions. It was only at the book’s end (discussing autism) that I could see how the hype might matter.

Most of the ideas that he criticizes don’t do much harm, because they wouldn’t pay much rent if true. Identifying which neurons do what has negligible effect on how I model a person’s mind unless I’m doing something unusual like brain surgery.

One small part of the recent (June 2015) CFAR workshop caused a significant improvement in how I interact with people: I’ve become more spontaneous.

For several years I’ve suspected that I ought to learn how to do improv-style exercises, but standard improv classes felt ineffective. I’ve since figured out that their implied obligation for me to come up with something to say caused some sort of negative association with attempts at spontaneity when I failed to think of anything to say. That negative reaction was a large obstacle to learning new habits.

Deeply ingrained habits seem to cause some part of my subconscious mind that searches for ideas or generates words to decide that it can’t come up with anything worthy of conscious attention. That leaves me in a state that I roughly describe as a blank mind (i.e. either no verbal content at the conscious level, or I generate not-very-useful meta-thoughts reacting to the lack of appropriate words).

Since I much more frequently regret failing to say something than I regret mistakenly saying something hastily that I should have known not to say, it seems like I’ve got one or more subconscious filters that have consistently erred on the side of being too cautious about generating speech. I tried introspecting for ways to simply tell that filter to be less cautious, but I accomplished nothing that way.

I also tried paying attention to signs that I’d filtered something out (pauses in my flow of words seem to be reliable indicators) in hopes that I could sometimes identify the discarded thoughts. I hoped to reward myself for noticing the ideas as the filter started to discard them, and train the filter to learn that I value conscious access to those ideas. Yet I never seem to detect those ideas, so that strategy failed.

What finally worked was that I practiced informal versions of improv exercises in which I rewarded myself [*] for saying silly things (alone or in a practice session with Robert) without putting myself in a situation where I felt an immediate obligation to say anything unusual.

In a few weeks I could tell that I was more confident in social contexts and more able to come up with things to say.

I feel less introverted, in the sense that a given amount of conversation tires me less than it used to. Blogging also seems to require a bit less energy.

I feel somewhat less anxiety (and relatedly, less distraction from background noise), maybe due to my increased social confidence.

I may have become slightly more creative in a variety of contexts.

I hypothesize that the filtering module was rather attached to a feeling of identity along the lines of “Peter is a person who is cautious about what he says” long after the consciously accessible parts of my mind decided I should weaken that identity. Actually trying out a different identity was more important to altering some beliefs that were deeply buried in my subconscious than was conscious choice about what to believe.

I wonder what other subconscious attachments to an identity are constraining me?

Something still seems missing from my social interactions: I still tend to feel passive and become just a spectator. That seems like a promising candidate for an area where I ought to alter some subconscious beliefs. But I find it harder to focus on a comfortable vision for an alternative identity: aiming to be a leader in a group conversation feels uncomfortable in a way that aiming to be spontaneous/creative never felt.

Thanks to John Salvatier and Anna Salamon for the advice that helped me accomplish this.

[*] – I only know how to do very weak self-rewards (telling myself to be happy), but that was all I needed.

Book review: Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy.

This strange book has some entertainment value, and might even enlighten you a bit about the risks of AI. It presents many ideas, with occasional attempts to distinguish the important ones from the jokes.

I had hoped for an analysis that reflected a strong understanding of which software approaches were most likely to work. Yampolskiy knows something about computer science, but doesn’t strike me as someone with experience at writing useful code. His claim that “to increase their speed [AIs] will attempt to minimize the size of their source code” sounds like a misconception that wouldn’t occur to an experienced programmer. And his chapter “How to Prove You Invented Superintelligence So No One Else Can Steal It” seems like a cute game that someone might play with if he cared more about passing a theoretical computer science class than about, say, making money on the stock market, or making sure the superintelligence didn’t destroy the world.

I’m still puzzling over some of his novel suggestions for reducing AI risks. How would “convincing robots to worship humans as gods” differ from the proposed Friendly AI? Would such robots notice (and resolve in possibly undesirable ways) contradictions in their models of human nature?

Other suggestions are easy to reject, such as hoping AIs will need us for our psychokinetic abilities (abilities that Yampolskiy says are shown by peer-reviewed experiments associated with the Global Consciousness Project).

The style is also weird. Some chapters were previously published as separate papers, and weren’t adapted to fit together. It was annoying to occasionally see sentences that seemed identical to ones in a prior chapter.

The author even has strange ideas about what needs footnoting. E.g. when discussing the physical limits to intelligence, he cites (Einstein 1905).

Only read this if you’ve read other authors on this subject first.

I use Beeminder occasionally. The site’s emails normally suffice to bug me into accomplishing whatever I’ve committed to doing. But I only use it for a few tasks for which my motivation is marginal. Most of the times that I consider using Beeminder, I either figure out how to motivate myself properly, or (more often) decide that my goal isn’t important.

The real value of Beeminder is that if I want to compel future-me to do something, I can’t give up by using the excuse that future-me is lazy or unreliable. Instead, I find myself wondering why I’m unwilling to risk $X to make myself likely to complete the task. That typically causes me to notice legitimate doubts about how highly I value the result.

Book review: The Charisma Myth: How Anyone Can Master the Art and Science of Personal Magnetism, by Olivia Fox Cabane.

This book provides clear and well-organized instructions on how to become more charismatic.

It does not make the process sound easy. My experience with some of her suggestions (gratitude journalling and meditation) seems typical of her ideas – they took a good deal of attention, and probably caused gradual improvements in my life, but the effects were subtle enough to leave lots of uncertainty about how effective they were.

Many parts of the book talk as if more charisma is clearly better, but occasionally she talks about downsides such as being convincing even when you’re wrong. The chapter that distinguishes four types of charisma (focus, kindness, visionary, and authority) helped me clarify what I want and don’t want from charisma. Yet I still feel a good deal of conflict about how much charisma I want, due to doubts about whether I can separate the good from the bad. I’ve had some bad experiences where feeling and sounding confident about investments in specific stocks caused me to lose money by holding those stocks too long. I don’t think I can increase my visionary or authority charisma without repeating that kind of mistake unless I can somehow avoid talking about investments when I turn on those types of charisma.

I’ve been trying the exercises that are designed to boost self-compassion, but my doubts about the effort required for good charisma and about the desirability of being charismatic have limited the energy I’m willing to put into them.

Book review: Value-Focused Thinking: A Path to Creative Decisionmaking, by Ralph L. Keeney.

This book argues for focusing on values (goals/objectives) when making decisions, as opposed to the more usual alternative-focused decisionmaking.

The basic idea seems good. Alternative-focused thinking draws our attention away from our values and discourages us from creatively generating new possibilities to choose from. It tends to have us frame decisions as responses to problems, which leads us to associate decisions with undesirable emotions, when we could view decisions as opportunities.

A good deal of the book describes examples of good decisionmaking, but those rarely provide insight into how to avoid common mistakes or to do unusually well.

Occasionally the book switches to some dull math, without clear explanations of what benefit the rigor provides.

The book also includes good descriptions of how to measure the things that matter, but How to Measure Anything by Douglas Hubbard does that much better.

I recently got Bose QuietComfort 15 Acoustic Noise Cancelling Headphones.

I had previously tried passive earplugs and headphones that claimed 30 dB noise reduction, and got little value out of them.

The noise cancelling headphones suppress a good deal more train (BART) noise, enough that I’m now able to read nonfiction while on the train.

They won’t help with the situations where noise bothers me most (multiple conversations nearby) because they mainly eliminate predictable noises. They make speech sound more distant without affecting the speech volume a lot. But reducing the cost of train and plane travel is valuable enough that I feel foolish about not having tried them earlier.

Book review: The Depths: The Evolutionary Origins of the Depression Epidemic, by Jonathan Rottenberg.

This book presents a clear explanation of why the basic outlines of depression look like an evolutionary adaptation to problems such as famine or humiliation. But he ignores many features that still puzzle me. Evolution seems unlikely to select for suicide. Why does loss of a child cause depression rather than some higher-energy negative emotion? What influences the breadth of learned helplessness?

He claims depression has been increasing over the last generation or so, but the evidence he presents can easily be explained by increased willingness to admit to and diagnose depression. He has at least one idea why it’s increasing (increased pressure to be happy), but I can come up with ideas that have the opposite effect (e.g. increased ease of finding a group where one can fit in).

Much of the book has little to do with the origins of depression, and is dominated by descriptions of and anecdotes about how depression works.

He spends a fair amount of time talking about the frequently overlooked late stages of depression recovery, where antidepressants aren’t much use and people can easily fall back into depression.

The book includes a bit of self-help advice to use positive psychology, and to not rely on drugs for much more than an initial nudge in the right direction.