Book review: The Age of Em: Work, Love and Life when Robots Rule the Earth, by Robin Hanson.

This book analyzes a possible future era when software emulations of humans (ems) dominate the world economy. It is too conservative to tackle longer-term prospects for eras when more unusual intelligent beings may dominate the world.

Hanson repeatedly tackles questions that scare away mainstream academics, and gives relatively ordinary answers (guided as much as possible by relatively standard, but often obscure, parts of the academic literature).

Assumptions

Hanson’s scenario relies on a few moderately controversial assumptions. The assumptions which I find most uncertain are related to human-level intelligence being hard to understand (because it requires complex systems), enough so that ems will experience many subjective centuries before artificial intelligence is built from scratch. For similar reasons, ems are opaque enough that it will be quite a while before they can be re-engineered to be dramatically different.

Hanson is willing to allow that ems can be tweaked somewhat quickly to produce moderate enhancements (at most doubling IQ) before reaching diminishing returns. He gives somewhat plausible reasons for believing this will only have small effects on his analysis. But few skeptics will be convinced.

Some will focus on potential trillions of dollars worth of benefits that higher IQs might produce, but that wealth would not much change Hanson’s analysis.

Others will prefer an inside view analysis which focuses on the chance that higher IQs will better enable us to handle risks of superintelligent software. Hanson’s analysis implies we should treat that as an unlikely scenario, but doesn’t say what we should do about modest probabilities of huge risks.

Another way that Hanson’s assumptions could be partly wrong is if tweaking the intelligence of emulated bonobos produces super-human entities. That seems to require only small changes to his assumptions about how tweakable human-like brains are. But such a scenario is likely harder to analyze than Hanson’s scenario, and it probably makes more sense to understand Hanson’s scenario first.

Wealth

Wages in this scenario are somewhat close to subsistence levels. Ems have some ability to restrain wage competition, but less than they want. Does that mean wages are 50% above subsistence levels, or 1%? Hanson hints at the former. The difference feels important to me. I’m concerned that sound-bite versions of the book will obscure the difference.

Hanson claims that “wealth per em will fall greatly”. It would be possible to construct a measure by which ems are less wealthy than humans are today. But I expect it will be at least as plausible to use a measure under which ems are rich compared to humans of today, but have high living expenses. I don’t believe there’s any objective unit of value that will falsify one of those perspectives [1].

Style / Organization

The style is more like a reference book than a story or an attempt to persuade us of one big conclusion. Most chapters (except for a few at the start and end) can be read in any order. If the section on physics causes you to doubt whether the book matters, skip to chapter 12 (labor), and return to the physics section later.

The style is very concise. Hanson rarely repeats a point, so understanding him requires more careful attention than with most authors.

It’s odd that the future of democracy gets less than twice as much space as the future of swearing. I’d have preferred that Hanson cut out a few of his less important predictions, to make room for occasional restatements of important ideas.

Many little-known results that are mentioned in the book are relevant to the present, such as: how the pitch of our voice affects how people perceive us, how vacations affect productivity, and how bacteria can affect fluid viscosity.

I was often tempted to say that Hanson sounds overconfident, but he is clearly better than most authors at admitting appropriate degrees of uncertainty. If he devoted much more space to caveats, I’d probably get annoyed at the repetition. So it’s hard to say whether he could have done any better.

Conclusion

Even if we should expect a much less than 50% chance of Hanson’s scenario becoming real, it seems quite valuable to think about how comfortable we should be with it and how we could improve on it.

Footnote

[1] – The difference matters only in one paragraph, where Hanson discusses whether ems deserve charity more than do humans living today. Hanson sounds like he’s claiming ems deserve our charity because they’re poor. Most ems in this scenario are comfortable enough for this to seem wrong.

Hanson might also be hinting that our charity would be effective at increasing the number of happy ems, and that basic utilitarianism says that’s preferable to what we can do by donating to today’s poor. That argument deserves more respect and more detailed analysis.

Book review: Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World, by Leslie Valiant.

This book provides some nonstandard perspectives on machine learning and evolution, but doesn’t convince me there’s much advantage to using those perspectives. I’m unsure how much of that is due to his mediocre writing style. He often seems close to saying something important, but never gets there.

He provides a rigorous meaning for the concept of learnability. I suppose that’s important for something, but I can’t recall what.

He does an ok job of explaining how evolution is a form of learning, but Eric Baum’s book What is Thought? explains that idea much better.

The last few chapters, where he drifts farther from his areas of expertise, are worse. Much of what he says there only seems half-right at best.

One example is his suggestion that AI researchers ought to put a lot of thought into how teaching materials are presented (similar to how schools are careful to order a curriculum, from simple to complex concepts). I doubt that that reflects a reasonable model of human learning: children develop an important fraction of their intelligence before school age, with little guidance for the order in which they should learn concepts (cf. Piaget’s theory of cognitive development); and unschooled children seem to choose their own curriculum.

My impression of recent AI progress suggests that a better organized “curriculum” is even farther from being cost-effective there – progress seems to be coming more from better ways of incorporating unsupervised learning.

I’m left wondering why anyone thinks the book is worth reading.

Book review: The Midas Paradox: Financial Markets, Government Policy Shocks, and the Great Depression, by Scott B Sumner.

This is mostly a history of the two depressions that hit the U.S. in the 1930s: one international depression lasting from late 1929 to early 1933, due almost entirely to problems with an unstable gold exchange standard; quickly followed by a more U.S.-centered depression that was mainly caused by bad labor market policies.

It also contains some valuable history of macroeconomic thought, doing a fairly good job of explaining the popularity of theories that are designed for special cases (such as monetarism and Keynes’ “general” theory).

I was surprised at how much Sumner makes the other books on this subject that I’ve read seem inadequate.

Book review: The Human Advantage: A New Understanding of How Our Brain Became Remarkable, by Suzana Herculano-Houzel.

I used to be uneasy about claims that the human brain was special because it is large for our body size: relative size just didn’t seem like it could be the best measure of whatever enabled intelligence.

At last, Herculano-Houzel has invented a replacement for that measure. Her impressive technique for measuring the number of neurons in a brain has revolutionized this area of science.

We can now see an important connection between the number of cortical neurons and cognitive ability. I’m glad that the book reports on research that compares the cognitive abilities of enough species to enable moderately objective tests of the relevant hypotheses (although the research still has much room for improvement).

We can also see that the primate brain is special, in a way that enables large primates to be smarter than similarly sized nonprimates. And that humans are not very special for a primate of our size, although energy constraints make it tricky for primates to reach our size.

I was able to read the book quite quickly. Much of it is arranged in an occasionally suspenseful story about how the research was done. It doesn’t have lots of information, but the information it does have seems very new (except for the last two chapters, where Herculano-Houzel gets farther from her area of expertise).

Book review: The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, by Joseph Henrich.

This book provides a clear explanation of how an ability to learn cultural knowledge made humans evolve into something unique over the past few million years. It’s by far the best book I’ve read on human evolution.

Before reading this book, I thought human uniqueness depended on something somewhat arbitrary and mysterious which made sexual selection important for human evolution, and wondered whether human language abilities depended on some lucky mutation. Now I believe that the causes of human uniqueness were firmly in place 2-3 million years ago, and the remaining arbitrary events seem much farther back on the causal pathway (e.g. what was unique about apes? why did our ancestors descend from trees 4.4 million years ago? why did the climate become less stable 3 million years ago?)

Human language now seems like a natural byproduct of previous changes, and probably started sooner (and developed more gradually) than many researchers think.

I used to doubt that anyone could find good evidence of cultures that existed millions of years ago. But Henrich provides clear explanations of how features such as right-handedness and endurance running demonstrate important milestones in human abilities to generate culture.

Henrich’s most surprising claim is that there’s an important sense in which individual humans are no smarter than other apes. Our intellectual advantage over apes is mostly due to a somewhat special-purpose ability to combine our individual brains into a collective intelligence. His evidence on this point is weak, but it’s plausible enough to be interesting.

Henrich occasionally exaggerates a bit. The only place where that bothered me was where he claimed that heart attack patients who carefully adhered to taking placebos were half as likely to die as patients who failed to reliably take placebos. The author wants to believe that demonstrates the power of placebos. I say the patients’ failure to take placebos was just a symptom of an underlying health problem (dementia?).

I’m a bit surprised at how little Robin Hanson says about Henrich’s main points. Henrich suggests that there’s cultural pressure to respect high-status people, for reasons that are somewhat at odds with Robin’s ally/coalition based reasons. Henrich argues that knowledge coming from high-status people, at least in hunter-gatherer societies, tended to be safer than knowledge from more directly measurable evidence. The cultural knowledge that accumulates over many generations aggregates information that could not be empirically acquired in a short time.

So Henrich implies it’s reasonable for people to be confused about whether evidence based medicine embodies more wisdom than eminence based medicine. Traditional culture has become less valuable recently due to the rapid changes in our environment (particularly the technology component of our environment), but cultures that abandoned traditions too readily were often hurt by consequences which take decades to observe.

I got more out of this book than a short review can describe (such as “How Altruism is like a Chili Pepper”). Here’s a good closing quote:

we are smart, but not because we stand on the shoulders of giants or are giants ourselves. We stand on the shoulders of a very large pyramid of hobbits.

[See my previous post for context.]

I started out to research and write a post on why I disagreed with Scott Sumner about NGDP targeting, and discovered an important point of agreement: targeting nominal wages forecasts would probably be better than targeting either NGDP or CPI forecasts.

One drawback to targeting something other than CPI forecasts is that we’ve got good market forecasts of the CPI. It’s certainly possible to create markets to forecast other quantities that the Fed might target, but we don’t have a good way of predicting how much time and money those will require.

Problems with NGDP targets

The main long-term drawback to targeting NGDP (or other measures that incorporate the quantity of economic activity) rather than an inflation-like measure is that it’s quite plausible to have large changes in the trend of increasing economic activity.

We could have a large increase in our growth rate due to a technology change such as uploaded minds (ems). NGDP targeting would create unpleasant deflation in that scenario until the Fed figured out how to adjust to new NGDP targets.

I can also imagine a technology-induced slowdown in economic growth, for example: a switch to open-source hardware for things like food and clothing (3-d printers using open-source designs) could replace lots of transactions with free equivalents. That would mean a decline in NGDP without a decline in living standards. NGDP targeting would respond by creating high inflation. (This scenario seems less likely and less dangerous than the prior scenario).
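The mechanism in both scenarios follows from the accounting identity that NGDP growth is roughly real growth plus inflation, so a fixed NGDP path forces inflation to absorb any shock to real growth. A minimal sketch (all numbers hypothetical):

```python
def implied_inflation(ngdp_growth_target, real_growth):
    """Under strict NGDP targeting, inflation is the residual:
    NGDP growth ~= real growth + inflation."""
    return ngdp_growth_target - real_growth

# Hypothetical em-driven boom: a 5% NGDP target with 30% real growth
# forces severe deflation.
boom = implied_inflation(0.05, 0.30)

# Hypothetical free-goods slowdown: a 5% target with -2% measured
# real growth forces high inflation.
slowdown = implied_inflation(0.05, -0.02)
```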

Basil Halperin has some historical examples where NGDP targeting would have produced similar problems.

Problems with inflation forecasts?

Critics of inflation targeting point to problems associated with oil shocks or with strange ways of calculating housing costs. Those cause many inflation measures to temporarily diverge from what I want the Fed to focus on, which is the problem of sticky wages interacting with weak nominal spending to create unnecessary unemployment.

Those problems with measuring inflation are serious if the Fed uses inflation that has already happened or uses forecasts of inflation that extend only a few months into the future.

Instead, I recommend using multi-year CPI forecasts based on several different time periods (e.g. in the 2 to 10 year range), and possibly forecasts for time periods that start a year or so in the future (this series shows how to infer such forecasts from existing markets). In the rare case where forecasts for different time periods say conflicting things about whether the Fed is too tight or loose, I’d encourage the Fed to use its judgment about which to follow.
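As a rough sketch of how forward-starting forecasts can be inferred, the average inflation rate implied between two horizons follows from compounding the two spot breakeven rates. The function and the input numbers here are illustrative, not drawn from any particular data source:

```python
def forward_inflation(b_short, b_long, t_short, t_long):
    """Average annual inflation implied between year t_short and year t_long,
    given annualized breakeven rates for the two spot horizons."""
    growth = (1 + b_long) ** t_long / (1 + b_short) ** t_short
    return growth ** (1.0 / (t_long - t_short)) - 1

# E.g. a 1.2% 2-year breakeven and a 1.6% 5-year breakeven imply the
# average inflation rate the market expects for years 2 through 5:
rate = forward_inflation(0.012, 0.016, 2, 5)
```

If the forward rate for years 2 through 5 stays near target while the 2-year spot rate gyrates with oil prices, that is the pattern described above: the short-horizon noise washes out of the multi-year forecasts.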

The multi-year forecasts have historically shown only small reactions to phenomena such as the large spike in oil prices in mid 2008. I expect that pattern to continue: commodity price spikes happen when markets get evidence of their causes/symptoms (due to market efficiency), not at predictable future times. The multi-year forecasts typically tell us mainly whether the Fed will persistently miss its target.

Won’t using those long-term forecasts enable the Fed to make mistakes that it corrects (or over-corrects) for shorter time periods? Technically yes, but that doesn’t mean the Fed has a practical way to do that. It’s much easier for the Fed to hit its target if demand for money is predictable. Demand for money is more predictable if the value of money is more predictable. That’s one reason why long-term stability of inflation (or of wages or NGDP) implies short-term stability.

It would be a bit safer to target nominal wage rate forecasts rather than CPI forecasts if we had equally good markets forecasting both. But I expect it to be easier to convince the public to trust markets that are heavily traded for other reasons, than it is to get them to trust a brand new market of uncertain liquidity.

NGDP targeting has been gaining popularity recently. But targeting market-based inflation forecasts will be about as good under most conditions [1], and we have good markets that forecast the U.S. inflation rate [2].

Those forecasts have a track record that starts in 2003. The track record seems quite consistent with my impressions about when the Fed should have adopted a more inflationary policy (to promote growth and to get inflation expectations up to 2% [3]) and when it should have adopted a less inflationary policy (to avoid fueling the housing bubble). It’s probably a bit controversial to say that the Fed should have had a less inflationary policy from February through July or August of 2008. But my impression (from reading the stock market) is that NGDP futures would have said roughly the same thing. The inflation forecasts sent a clear signal starting in very early September 2008 that Fed policy was too tight, and that’s about when other forms of hindsight switch from muddled to saying clearly that Fed policy was dangerously tight.

Why do I mention this now? The inflation forecast dropped below 1 percent two weeks ago for the first time since May 2008. So the Fed’s stated policies conflict with what a more reputable source of information says the Fed will accomplish. This looks like what we’d see if the Fed was in the process of causing a mild recession to prevent an imaginary increase in inflation.

What does the Fed think it’s doing?

  • It might be relying on interest rates to estimate what its policies will produce. Interest rates this low after 6.5 years of economic expansion resemble historical examples of loose monetary policy more than they resemble the stereotype of tight monetary policy [4].
  • The Fed could be following a version of the Taylor Rule. Given standard guesses about the output gap and equilibrium real interest rate [5], the Taylor Rule says interest rates ought to be rising now. The Taylor Rule has usually been at least as good as actual Fed policy at targeting inflation indirectly through targeting interest rates. But that doesn’t explain why the Fed targets interest rates when that conflicts with targeting market forecasts of inflation.
  • The Fed could be influenced by status quo bias: interest rates and unemployment are familiar types of evidence to use, whereas unbiased inflation forecasts are slightly novel.
  • Could the Fed be reacting to money supply growth? Not in any obvious way: the monetary base stopped growing about two years ago, M1 and MZM growth are slowing slightly, and M2 accelerated recently (but only after much of the Fed’s tightening).
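For concreteness, the version of the Taylor Rule usually meant here is Taylor’s 1993 formulation; the equally weighted 0.5 coefficients and 2% values below are the conventional assumptions, which (per note [5]) are error-prone guesses:

```python
def taylor_rule_rate(inflation, output_gap, r_star=0.02, pi_target=0.02):
    """Taylor (1993): nominal policy rate implied by current inflation
    and the output gap. r_star is the assumed equilibrium real rate."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap
```

With inflation at target and a closed output gap this yields a 4% nominal rate, so standard guesses about the gap and r_star imply rates ought to be rising; the rule’s advice is only as good as those guesses.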

Scott Sumner’s rants against reasoning from interest rates explain why the Fed ought to be embarrassed to use interest rates to figure out whether Fed policy is loose or tight.

Yet some institutional incentives encourage the Fed to target interest rates rather than predicted inflation. It feels like an appropriate use of high-status labor to set interest rates once every few weeks based on new discussion of expert wisdom. Switching to more or less mechanical responses to routine bond price changes would undercut much of the reason for believing that the Fed’s leaders are doing high-status work.

The news media storytellers would have trouble finding entertaining ways of reporting adjustments that consisted of small hourly responses to bond market changes. Whereas decisions made a few times per year are uncommon enough to be genuinely newsworthy. And meetings where hawks struggle against doves fit our instinctive stereotype for important news better than following a rule does. So I see little hope that storytellers will want to abandon their focus on interest rates. Do the Fed governors follow the storytellers closely enough that the storytellers’ attention strongly affects the Fed’s attention? Would we be better off if we could ban the Fed from seeing any source of daily stories?

Do any other interest groups prefer stable interest rates over stable inflation rates? I expect a wide range of preferences among Wall Street firms, but I’m unaware which preferences are dominant there.

Consumers presumably prefer that their banks, credit cards, etc have predictable interest rates. But I’m skeptical that the Fed feels much pressure to satisfy those preferences.

We need to fight those pressures by laughing at people who claim that the Fed is easing when markets predict below-target inflation (as in the fall of 2008) or that the Fed is tightening when markets predict above-target inflation (e.g. much of 2004).

P.S. – The risk-reward ratio for the stock market today is much worse than normal. I’m not as bearish as I was in October 2008, but I’ve positioned myself much more cautiously than normal.

Notes:

[1] – They appear to produce nearly identical advice under most conditions that the U.S. has experienced recently.

I expect inflation targeting to be modestly safer than NGDP targeting. I may get around to explaining my reasons for that in a separate post.

[2] – The link above gives daily forecasts of the 5 year CPI inflation rate. See here for some longer time periods.

The markets used to calculate these forecasts have enough liquidity that it would be hard for critics to claim that they could be manipulated by entities less powerful than the Fed. I expect some critics to claim that anyway.

[3] – I’m accepting the standard assumption that 2% inflation is desirable, in order to keep this post simple. Figuring out the optimal inflation rate is too hard for me to tackle any time soon. A predictable inflation rate is clearly desirable, which creates some benefits to following a standard that many experts agree on.

[4] – provided that you don’t pay much attention to Japan since 1990.

[5] – guesses which are error-prone and, if a more direct way of targeting inflation is feasible, unnecessary. The conflict between the markets’ inflation forecast and the Taylor Rule’s implication that near-zero interest rates would cause inflation to rise suggests that we should doubt those guesses. I’m pretty sure that equilibrium interest rates are lower than the standard assumptions. I don’t know what to believe about the output gap.

Book review: The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century, by Steven Pinker.

Pinker provides great examples of readable writing, and insights about what styles are easy to read.

But the book is more forgettable than Sense of Structure (which covers similar subjects). Sense of Structure is more valuable because it’s more oriented toward training its readers.

Sense of Structure focuses on how to improve mediocre sentences that I might have been tempted to write. Pinker devotes a bit too much attention to making fun of bad sentences that don’t hold my attention because they don’t look similar enough to mediocre sentences which I might write.

The difference in style between the two books is modest, but modest differences matter for tasks such as this which take a good deal of willpower to master.

My first year of eating no factory farmed vertebrates went fairly well.

When eating at home, it took no extra cost or effort to stick to the diet.

I’ve become less comfortable eating at restaurants, because I find few acceptable choices at most restaurants, and because poor labeling has caused me to mistakenly get food that wasn’t on my diet.

The constraints were strict enough that I lost about 4 pounds during 8 days away from home over the holidays. That may have been healthier than the weight gain I succumbed to during similar travels in prior years, but that weight loss is close to the limit of what I find comfortable.

In theory, I should have gotten enough flexibility from my rule to allow 120 calories per month of unethical animal products for me to be mostly comfortable with restaurant food. In practice, I found it psychologically easier to adopt an identity of someone who doesn’t eat any factory farmed vertebrates than it would have been to feel comfortable using up the 120 calorie quota. That made me reluctant to use any flexibility.

The quota may have been valuable for avoiding a feeling of failure when I made mistakes.

Berkeley is a relatively easy place to adopt this diet, thanks to Marin Sun Farms and Mission Heirloom. Pasture-raised eggs are fairly easy to find in the bay area (Berkeley Bowl, Whole Foods, etc).

I still have some unresolved doubts about how much to trust labels. Pasture-raised eggs are available in Colorado in winter, but chicken meat is reportedly unavailable due to weather-related limits on keeping chickens outdoors. Why doesn’t that reasoning also apply to eggs?

I’m still looking for a good substitute for Questbars. These come closest:

For most people, it would be hard enough to follow my diet strictly that I recommend starting with an easier version. One option would be to avoid factory farmed chicken/eggs (i.e. focus on avoiding the cruelest choices). And please discriminate against restaurants that don’t label their food informatively.

I plan to continue my diet essentially unchanged, with maybe slightly less worry about what I eat when traveling or at parties.

Connectomes are not sufficient by themselves to model brain behavior. Brain modeling has been limited more by the need for good information about the dynamic behavior of individual neurons.

The paper Whole-brain calcium imaging with cellular resolution in freely behaving Caenorhabditis elegans looks like an important step toward overcoming this limitation. The authors observed the behavior of many individual neurons in a moving nematode.

They still can’t reliably map the neurons they observed to standard C. elegans neuron names:

The neural position validation experiments presented here, however, have led us to conclude that worm-to-worm variability in neuronal position in the head is large enough to pose a formidable challenge for neuron identification.

But there are enough hints about which neurons do what that I’m confident this problem can be solved if enough effort is devoted to it.

My biggest uncertainty concerns applying this approach to mammalian brains. Mammalian brains aren’t transparent enough to be imaged this way. Are C. elegans neurons similar enough that we can just apply the same models to both? I suspect not.