
Book review: Political Order and Political Decay, by Francis Fukuyama.

This book describes the rise of modern nation-states, from the French Revolution to the present.

Fukuyama focuses on three features that influence national success: the state (an effective bureaucracy), the rule of law, and democratic accountability.

Much of the book argues against libertarian ideas from a fairly centrist perspective, although he mostly avoids directly discussing libertarian beliefs. Instead, he implies that we should de-emphasize debates over big government versus small government, and look more at effectiveness versus corruption (i.e. we should pull sideways).

Many of these ideas build on what Fukuyama wrote in Trust – I suggest reading that book first.

1.

War! What Is It Good For? Fukuyama believes, as Ian Morris argued in the book by that title, that war sometimes causes states to make their bureaucracies more efficient. Fukuyama is more credible here than Morris because he is more cautious about the effects he claims to see.

The book suggests that young nations have some key stage where threat of conquest can create the right incentives for developing an efficient bureaucracy (i.e. without efficient support for the military, including effective taxation, they get absorbed into a state that does better at those tasks). Without such a threat, states can get stuck in an equilibrium where the bureaucracy simply serves a small number of powerful people. But with such a threat, politicians need to delegate enough authority that the bureaucracy develops some independence, which enables it to care about broader notions of national welfare. (Fukuyama talks as if the bureaucracies are somewhat altruistic. I think of it more as the bureaucracies caring about their long-term revenue source, when individual politicians don’t hold power long enough to care about the long term).

It seems plausible that China would have helped to lead the industrial revolution if it had faced a serious risk of being conquered in the 17th and 18th centuries. China’s relative safety back then seems to have left it complacent and stagnant.

2.

Fukuyama hints that the three pillars of modern nation-states (state, law, accountability) have roughly equal importance.

Yet I don’t buy that. I expect that whatever virtues are responsible for the rule of law are a good deal more important than effective bureaucracies or democratic accountability.

Fukuyama doesn’t make a strong case for the value of democracy for national success, presumably in part because he expects most readers to already agree with him about that. I’ll conjecture that democracy is mostly a byproduct of success at the other features that Fukuyama considers important.

It’s likely that democracy is somewhat valuable for generating fairness, but that has limited relevance to what Fukuyama tries to explain (i.e. mainly power and wealth).

3.

Full-fledged rule of law might be needed to get all the benefits of the best modern societies. But the differences between good and bad nations seem to have originated well before those nations had more than a rudimentary version of the rule of law.

That suggests some underlying factor that matters – maybe just the basic notion of law as something separate from individual leaders or ethnic groups (Fukuyama’s previous book says Christianity played an important role here); or maybe the kind of cultural advance suggested by Greg Clark.

Fukuyama argues that it’s risky to adopt democracy before creating effective states and the rule of law. He’s probably right to expect that such democracies will be dominated by people who fight to get the spoils of politics for their family / clan / ethnic group, with little thought to national wellbeing.

4.

National identity is important for producing the kind of government that Fukuyama likes. It’s hard for government employees to focus on the welfare of the nation if they identify mainly as members of a non-majority ethnic group.

He mentions that the printing press helped create national identities out of more fragmented cultures. This seems important enough to Europe’s success that it deserves more emphasis than the two paragraphs he devotes to it.

He describes several countries that started out as a patchwork of ethnic groups, and had differing degrees of success at developing a unified national identity: Tanzania, Kenya, Nigeria, and Indonesia. I was a bit disappointed that the differences there seemed to be mostly accidents of the personalities of leading politicians.

He talks as if the only two options for such regions were to develop a clear national identity or be crippled by ethnic conflict. Why not also consider the option of splitting into smaller political units that can aim to become city-states such as Singapore and Dubai?

5.

He makes many minor claims that sound suspicious enough for me to have moderate doubts about trusting his scholarship.

For example, he tries to refute claims that “industrial policy never works”, mainly by using the example of the government developing the internet. (His use of the word “never” suggests that he’s not exactly attacking the most sophisticated version of the belief in question). How familiar is he with the history of the internet? The entities in charge of the internet tried to restrict commercial use until 1995. Actual commercial use of the internet started before the government made a clear decision to tolerate such use, much less endorse it. So Fukuyama either has a faulty understanding of internet history, or is using the phrase “industrial policy” in a way that puzzles me.

Then there’s the claim that the Spanish conquered important parts of the New World before the native nations had declined due to European diseases. Fukuyama seems unfamiliar with the contrary evidence reported by Charles C. Mann in 1491 and 1493. Mann may not be an ideal source, but he appears at least as reliable as the sources that Fukuyama cites.

6.

That leads into more general doubts about history books, especially ambitiously broad books aimed at popular audiences.

Tetlock’s research into the accuracy of political pundits has led me to assume that a broad range of “expert” commentary is roughly equivalent to random guessing. Much of what historians do [1] seems quite similar to the kind of expert opinion that Tetlock studied. Neither historians nor political pundits get adequate feedback about mistaken beliefs, or significant rewards for insights that are later confirmed by new evidence. That leads me to worry that the study of history is little better than voodoo.

7.

In sum, I can’t quite decide whether to recommend that you read this book.

[1] – I.e. drawing inferences from aggregations of data. That’s not to say that historians don’t devote lots of time to reporting observed facts. But most of those facts don’t have value to me unless I can generalize from them in ways that help me understand the future. Historians’ choices of which facts to emphasize will unavoidably influence any generalizations I draw.

Book review: Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, by Peter Godfrey-Smith.

This book describes some interesting mysteries, but provides little help at solving them.

It provides some pieces of a long-term perspective on the evolution of intelligence.

Cephalopods’ most recent common ancestor with vertebrates lived way back before the Cambrian explosion. Nervous systems back then were primitive enough that minds didn’t need to react to other minds, and predation was a rare accident, not something animals prepared carefully to cause and avoid.

So cephalopod intelligence evolved rather independently from most of the minds we observe. We could learn something about alien minds by understanding them.

Intelligence may even have evolved more than once in cephalopods – nobody seems to know whether octopuses evolved intelligence separately from squids/cuttlefish.

An octopus has a much less centralized mind than vertebrates do. Does an octopus have a concept of self? The book presents evidence that octopuses sometimes seem to think of their arms as parts of their self, yet hints that their concept of self is a good deal weaker than in humans, and maybe the octopus treats its arms as semi-autonomous entities.

2.

Does an octopus have color vision? Not via its photoreceptors the way many vertebrates do. Simple tests of octopuses’ ability to discriminate color also say no.

Yet octopuses clearly change color to camouflage themselves. They also change color in ways that suggest they’re communicating via a visual language. But to whom?

One speculative guess is that the color-producing parts act as color filters, with monochrome photoreceptors in the skin evaluating the color of the incoming light by how much the light is attenuated by the filters. So they “see” color with their skin, but not their eyes.
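To make the geometry of that guess concrete, here’s a minimal sketch (my own illustration; every number is made up): if each chromatophore state transmits a known fraction of light in each wavelength band, then one monochrome reading per filter state gives one linear equation, and a few filter states suffice to solve for the incoming spectrum.

```python
import numpy as np

# Hypothetical transmission of three chromatophore "filter" states
# across three wavelength bands (rows: filter states, columns: bands).
filters = np.array([
    [0.9, 0.3, 0.1],  # reddish filter: passes mostly long wavelengths
    [0.2, 0.8, 0.3],  # greenish filter
    [0.1, 0.2, 0.9],  # bluish filter
])

true_light = np.array([0.5, 1.0, 0.2])  # the unknown incoming spectrum

# A monochrome photoreceptor only reports total attenuated intensity.
readings = filters @ true_light

# But three readings under three known filters pin down the spectrum.
inferred = np.linalg.solve(filters, readings)
print(inferred)  # recovers [0.5, 1.0, 0.2] -- color "seen" without color vision
```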

That would still leave plenty of mystery about what they’re communicating.

3.

The author’s understanding of aging implies that few organisms die of aging in the wild. He sees evidence in octopuses that conflicts with this prediction, yet that doesn’t alert him to the growing evidence of problems with the standard theories of aging.

He says octopuses are subject to much predation. Why doesn’t this cause them to be scared of humans? He has surprising anecdotes of octopuses treating humans as friends, e.g. grabbing one and leading him on a ten-minute “tour”.

He mentions possible REM sleep in cuttlefish. That would almost certainly have evolved independently from vertebrate REM sleep, which must indicate something important.

I found the book moderately entertaining, but I was underwhelmed by the author’s expertise. The subtitle’s reference to “the Deep Origins of Consciousness” led me to expect more than I got.

I’ve recently noticed some possibly important confusion about machine learning (ML) and deep learning. I’m quite uncertain how much harm the confusion will cause.

On MIRI’s Intelligent Agent Foundations Forum:

If you don’t do cognitive reductions, you will put your confusion in boxes and hide the actual problem. … E.g. if neural networks are used to predict math, then the confusion about how to do logical uncertainty is placed in the black box of “what this neural net learns to do”

On SlateStarCodex:

Imagine a future inmate asking why he was denied parole, and the answer being “nobody knows and it’s impossible to find out even in principle” … (DeepMind employs a Go master to help explain AlphaGo’s decisions back to its own programmers, which is probably a metaphor for something)

A possibly related confusion, from a conversation that I observed recently: philosophers have tried to understand how concepts work for centuries, but have made little progress; therefore deep learning isn’t very close to human-level AGI.

I’m unsure whether any of the claims I’m criticizing reflect actually mistaken beliefs, or whether they’re just communicated carelessly. I’m confident that at least some people at MIRI are wise enough to avoid this confusion [1]. I’ve omitted some ensuing clarifications from my description of the deep learning conversation – maybe if I remembered those sufficiently well, I’d see that I was reacting to a straw man of that discussion. But it seems likely that some people were misled by at least the SlateStarCodex comment.

There’s an important truth that people refer to when they say that neural nets (and machine learning techniques in general) are opaque. But that truth gets seriously obscured when rephrased as “black box” or “impossible to find out even in principle”.

Book review: Aging is a Group-Selected Adaptation: Theory, Evidence, and Medical Implications, by Joshua Mitteldorf.

This provocative book argues that our genes program us to age because aging provided important benefits.

I’ll refer here to antagonistic pleiotropy (AP) and programmed aging (PA) as the two serious contending hypotheses of aging. (Mutation accumulation used to be a leading hypothesis, but it seems discredited now, due to the number of age-related deaths seen in a typical species, and due to evidence that aging is promoted by some ancient genes).

Here’s a dumbed down version of the debate:
<theorist>: Hamilton proved that all conceivable organisms age due to AP and/or mutation accumulation.
<critic>: But the PA theories better predict how many die from aging, the effects of telomeres, calorie restriction, etc. Also, here are some organisms with zero or negative aging …
<theorist>: A few anomalies aren’t enough to overturn a well-established theory. The well-known PA theories are obviously wrong because selfish genes would outbreed the PA genes.
<critic>: Here are some new versions which might explain how aging could enhance a species’ fitness …
<theorist>: I’ve read enough bad group-selection theories that I’m not going to waste my time with more of them.

That kind of reaction from theorists might make sense if AP were well established. But AP seems to have been well established only in the Darwinian sense of being firmly entrenched in scientists’ minds. It got entrenched mainly by being the least wrong of a flawed set of theories, combined with some poor communication between theorists and naturalists. Wikipedia has a surprisingly good[1] page on the evolution of aging that says:

Antagonistic pleiotropy is a prevailing theory today, but this is largely by default, and not because the theory has been well verified.


Book review: Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock and Dan Gardner.

This book reports on the Good Judgment Project (GJP).

Much of the book recycles old ideas: 40% of the book is a rerun of Thinking, Fast and Slow, 15% repeats The Wisdom of Crowds, and 15% rehashes How to Measure Anything. Those three books were good enough that it’s very hard to improve on them. Superforecasting nearly matches their quality, but most people ought to read those three books instead. (Anyone who still wants more after reading them will get decent value out of reading the last 4 or 5 chapters of Superforecasting).

The book is very readable, written in an almost Gladwell-like style (a large contrast to Tetlock’s previous, more scholarly book), at a moderate cost in substance. It contains memorable phrases, such as “a fox with the bulging eyes of a dragonfly” (to describe looking at the world through many perspectives).


Book review: The Rationality Quotient: Toward a Test of Rational Thinking, by Keith E. Stanovich, Richard F. West and Maggie E. Toplak.

This book describes an important approach to measuring individual rationality: an RQ test that loosely resembles an IQ test. But it pays inadequate attention to the most important problems with tests of rationality.

Coachability

My biggest concern about rationality testing is what happens when people anticipate the test and are motivated to maximize their scores (as is the case with IQ tests). Do they:

  • learn to score high by “cheating” (i.e. learn what answers the test wants, without learning to apply that knowledge outside of the test)?
  • learn to score high by becoming more rational?
  • not change their score much, because they’re already motivated to do as well as their aptitudes allow (as is mostly the case with IQ tests)?

Alas, the book treats these issues as an afterthought. Their test knowingly uses questions for which cheating would be straightforward, such as asking whether the test subject believes in science, and whether they prefer to get $85 now rather than $100 in three months. (If they could use real money, that would drastically reduce my concerns about cheating. I’m almost tempted to advocate doing that, but doing so would hinder widespread adoption of the test, even if using real money added enough value to pay for itself.)
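For a sense of scale on that last question (my arithmetic, not the book’s): preferring $85 now to $100 in three months implies an annualized discount rate above 90%, which is why the “rational” answer is treated as obvious.

```python
# Implied annualized discount rate for preferring $85 now over $100
# in three months (treating three months as a quarter of a year).
quarterly_growth = 100 / 85            # ~1.176x per quarter
annual_rate = quarterly_growth ** 4 - 1
print(f"implied annual discount rate: {annual_rate:.0%}")  # ~92%
```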


Two and a half years ago, Eliezer was (somewhat plausibly) complaining that virtually nobody outside of MIRI was working on AI-related existential risks.

This year (at EAGlobal) one of MIRI’s talks was a bit hard to distinguish from an AI safety talk given by someone with pretty mainstream AI affiliations.

What happened in that time to cause that shift?

A large change was catalyzed by the publication of Superintelligence. I’ve been mildly disappointed about how little it affected discussions among people who were already interested in the topic. But Superintelligence caused a large change in how many people are willing to express concern over AI risks. That’s presumably because Superintelligence looks sufficiently academic and neutral to make many people comfortable about citing it, whereas similar arguments by Eliezer/MIRI didn’t look sufficiently prestigious within academia.

A smaller part of the change was MIRI shifting its focus somewhat to be more in line with how mainstream machine learning (ML) researchers expect AI to reach human levels.

Also, OpenAI has been quietly shifting in a more MIRI-like direction (I’m very unclear on how big a change this is). (Paul Christiano seems to deserve some credit for both the MIRI and OpenAI shifts in strategies.)

Given those changes, it seems like MIRI ought to be able to attract more donations than before. Especially since it has demonstrated evidence of increasing competence, and also because HPMoR seemed to draw significantly more people into the community of people who are interested in MIRI.

MIRI has gotten one big grant from the Open Philanthropy Project that it probably couldn’t have gotten when mainstream AI researchers were treating MIRI’s concerns as too far-fetched to be worth commenting on. But donations from MIRI’s usual sources have stagnated.

That pattern suggests that MIRI was previously benefiting from a polarization effect, where the perception of two distinct “tribes” (those who care about AI risks versus those who promote AI) energized people to care about “their tribe”.

Whereas now there’s no clear dividing line between MIRI and mainstream researchers. Also, there’s lots of money going into other organizations that plan to do something about AI safety. (Most of those haven’t yet articulated enough of a strategy to make me optimistic that that money is well spent. I still endorse the ideas I mentioned last year in How much Diversity of AGI-Risk Organizations is Optimal?. I’m unclear on how much diversity of approaches we’re getting from the recent proliferation of AI safety organizations.)

That pattern of donations creates perverse incentives for charities to at least market themselves as fighting a powerful group of people, rather than (as the ideal charity should) addressing a neglected problem. Even if that marketing doesn’t distort a charity’s operations, the charity will be tempted to use counterproductive alarmism. AI risk organizations have resisted those temptations (at least recently), but it seems risky to tempt them.

That’s part of why I recently made a modest donation to MIRI, in spite of the uncertainty over the value of their efforts (I had last donated to them in 2009).

[Caveat: this post involves abstract theorizing whose relevance to practical advice is unclear.]

What we call willpower mostly derives from conflicts between parts of our minds, often over what discount rate to use.

An additional source of willpower-like conflicts comes from social desirability biases.

I model the mind as having many mental sub-agents, each focused on a fairly narrow goal. Different goals produce different preferences for caring about the distant future versus caring only about the near future.

The sub-agents typically are as smart and sophisticated as a three-year-old (probably with lots of variation). E.g. my hunger-minimizing sub-agent is willing to accept calorie restriction days with few complaints now that I have a reliable pattern of respecting it the next day, but complained impatiently when calorie restriction days seemed abnormal.

We have beliefs about how safe we are from near-term dangers, often reflected in changes to the autonomic nervous system (causing relaxation or the fight-or-flight reflex). Those changes cause quick, crude shifts in something resembling a global discount rate. In addition, each sub-agent has some ability to demand that its goals be treated fairly.

We neglect sub-agents whose goals are most long-term when many sub-agents say their goals have been neglected, and/or when the autonomic nervous system says immediate problems deserve attention.

Our willpower is high when we feel safe and are satisfied with our progress at short-term goals.
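Here’s a toy numerical sketch of that model (my own construction; every number is made up): each sub-agent discounts an action’s payoff by its own patience, anxiety shortens everyone’s effective horizon, and neglected agents get a louder vote.

```python
from dataclasses import dataclass

@dataclass
class SubAgent:
    name: str
    discount: float  # per-day discount factor; near 1.0 = patient
    neglect: float   # how long this agent's goal has been ignored

def score(payoffs, delay_days, agents, safety):
    """Sum the agents' votes. Low safety (anxiety) stretches perceived
    delays, and neglected agents demand extra weight (fairness)."""
    total = 0.0
    for a in agents:
        discounted = a.discount ** (delay_days / safety)
        fairness = 1 + 0.1 * a.neglect
        total += payoffs.get(, 0.0) * discounted * fairness
    return total

agents = [SubAgent("hunger", 0.5, 0.0), SubAgent("career", 0.99, 0.0)]

snack = {"hunger": 1.0}   # pays off immediately
study = {"career": 2.0}   # pays off in 30 days

for safety in (1.0, 0.3):  # relaxed vs. anxious
    print(f"safety={safety}:",
          f"snack={score(snack, 0, agents, safety):.2f}",
          f"study={score(study, 30, agents, safety):.2f}")
# When relaxed, study scores higher (willpower is available);
# when anxious, the short-term snack wins.
```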

Social status

The time-discounting effects are sometimes obscured by social signaling.

Writing a will hints at health problems, whereas doing something about global warming can signal wealth. We have sub-agents that steer us to signal health and wealth, but without doing so in a deliberate enough way that people see that we are signaling. That leads us to exaggerate how much of our failure to write a will is due to the time-discounting type of low willpower.

Video games convince parts of our minds that we’re gaining status (in a virtual society) and/or training to win status-related games in real life. That satisfies some sub-agents who care about status. (Video games deceive us about status effects, but that has limited relevance to this post.) Yet as with most play, we suppress awareness of the zero-sum competitions we’re aiming to win. So we get confused about whether we’re being short-sighted here, because we’re pursuing somewhat long-term benefits, probably deceiving ourselves somewhat about them, and pretending not to care about them.

Time asymmetry?

Why do we feel an asymmetry in effects of neglecting distant goals versus neglecting immediate goals?

The fairness to sub-agents metaphor suggests that neglecting the distant future ought to produce emotional reactions comparable to what happens when we neglect the near future.

Neglecting the distant future does produce some discomfort that somewhat resembles willpower problems. If I spend lots of time watching TV, I end up feeling declining life-satisfaction, which tends to eventually cause me to pay more attention to long-term goals.

But the relevant emotions still don’t seem symmetrical.

One reason for asymmetry is that different goals imply different things for what constitutes neglecting a goal: neglecting sleep or food for a day implies something more unfair to the relevant sub-agents than does neglecting one’s career skills.

Another reason is that for both time-preference and social desirability conflicts, we have instincts that aren’t optimized for our current environment.

Our hunter-gatherer ancestors needed to devote most of their time to tasks that paid off within days, and didn’t know how to devote more than a few percent of their time to usefully preparing for events that were several years in the future. Our farmer ancestors needed to devote more time to 3-12 month planning horizons, but not much more than hunter-gatherers did. Today many of us can productively spend large fractions of our time on tasks (such as getting a college degree) that take more than 5 years to pay off. Social desirability biases show (less clear) versions of that same pattern.

That means we need to override our system 1 heuristics with system 2 analysis, which requires overriding the instinctive beliefs of some sub-agents about how much attention their goals deserve. By contrast, the long-term goals we override to deal with hunger have less firmly established “rights” to fairness.

Also, there may be some fairness rules about how often system 2 can override system 1 agents – doing that too often may cause coalitions within system 1 to treat system 2 as a politician who has grabbed too much power. [Does this explain decision fatigue? I’m unsure.]

Other Models of Willpower

The depletion model

Willpower depletion captures a nontrivial effect of key sub-agents rebelling when their goals have been overlooked for too long.

But I’m confused – the depletion model doesn’t seem like it’s trying to be a complete model of willpower. In particular, it either isn’t trying to explain the evolutionary sources of willpower problems, or is trying to explain them via the clearly inadequate claim that willpower is a simple function of current blood glucose levels.

It would be fine if the depletion model were just a heuristic that helped us develop more willpower. But if anything it seems more likely to reduce willpower.

Kurzban’s opportunity costs model

Kurzban et al. have a model involving the opportunity costs of using cognitive resources for a given task.

It seems more realistic than most models I’ve seen. It describes some important mental phenomena more clearly than I can, but doesn’t quite seem to be about willpower. In particular, it seems uninformative about differing time horizons. Also, it focuses on cognitive resource constraints, whereas I’d expect some non-cognitive resource constraints to be equally important.

Ainslie’s Breakdown of Will

George Ainslie wrote a lot about willpower, describing it as intertemporal bargaining, with hyperbolic discounting. I read that book 6 years ago, but don’t remember it very clearly, and I don’t recall how much it influenced my current beliefs. I think my model looks a good deal like what I’d get if I had set out to combine the best parts of Ainslie’s ideas and Kurzban’s ideas, but I wrote 90% of this post before remembering that Ainslie’s book was relevant.

Ainslie apparently wrote his book before it became popular to generate simple models of willpower, so he didn’t put much thought into comparing his views to others.

Hyperbolic discounting seems to be a real phenomenon that would be sufficient to cause willpower-like conflicts. But I’m unclear on why it should be a prominent part of a willpower model.
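To make the connection concrete, here’s a standard textbook-style illustration (my example, not Ainslie’s): a hyperbolic discounter values a reward A at delay D as A / (1 + kD), and unlike an exponential discounter, reverses its preferences as rewards draw near – exactly the structure of a willpower conflict.

```python
def hyperbolic_value(amount, delay_days, k=1.0):
    """Hyperbolic discounting: value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

# A small-soon reward vs. a large-later reward, viewed from two distances.
for days_until_small in (10, 0):
    small = hyperbolic_value(50, days_until_small)
    large = hyperbolic_value(100, days_until_small + 3)
    choice = "wait for large" if large > small else "grab small"
    print(f"{days_until_small} days out: small={small:.1f}, "
          f"large={large:.1f} -> {choice}")
# From 10 days out we prefer to wait; at the moment of choice we
# grab the smaller reward -- the preference reversal hyperbolas produce.
```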

Distractible

This “model” isn’t designed to say much beyond pointing out that willpower doesn’t reliably get depleted.

Hot/cool

A Hot/cool-system model sounds like an attempt to generalize the effects of the autonomic nervous system to explain all of willpower. I haven’t found it to be very informative.

Muscle

Some say that willpower works like a muscle, in that using it strengthens it.

My model implies that we should expect this result when preparing for the longer-term future causes our future self to be safer and/or to more easily satisfy near-term goals.

I expect this effect to be somewhat observable with using willpower to save money, because having more money makes us feel safer and better able to satisfy our goals.

I expect this effect to be mostly absent after using willpower to lose weight or to write a will, since those produce benefits which are less intuitive and less observable.

Why do drugs affect willpower?

Scott at SlateStarCodex asks why drugs have important effects on willpower.

Many drugs affect the autonomic nervous system, thereby influencing our time preferences. I’d certainly expect that drugs which reduce anxiety will enable us to give higher priority to far future goals.

I expect stimulants make us feel less concern about depleting our available calories, and less concern about our need for sleep, thereby satisfying a few short-term sub-agents. I expect this to cause small increases in willpower.

But this is probably incomplete. I suspect the effect of SSRIs on willpower varies quite widely between people. I suspect that’s due to an anti-anxiety effect which increases willpower, plus an anti-obsession effect which reduces willpower in a way that my model doesn’t explain.

And Scott implies that some drugs have larger effects on willpower than I can explain.

My model implies that placebos can be mildly effective at increasing willpower, by convincing some short-sighted sub-agents that resources are being applied toward their goals. A quick search suggests this prediction has been poorly studied so far, with one low-quality study confirming this.

Conclusion

I’m more puzzled than usual about whether these ideas are valuable. Is this model profound, or too obvious to matter?

I presume part of the answer is that people who care about improving willpower care less about theory, and focus on creating heuristics that are easy to apply.

CFAR does a decent job of helping people develop more willpower, not by explaining a clear theory of what willpower is, but by focusing more on how to resolve conflicts between sub-agents.

And I recommend that most people start with practical advice, such as the advice in The Willpower Instinct, and worry about theory later.

Book review: Doing Good Better, by William MacAskill.

This book is a simple introduction to the Effective Altruism movement.

It documents big differences between superficially plausible charities, and points out how this implies big benefits to the recipients of charity from donors paying more attention to the results that a charity produces.

How effective is the book?

Is it persuasive?

Probably yes, for a small but somewhat important fraction of the population who seriously intend to help distant strangers, but have procrastinated about informing themselves about how to do so.

Does it focus on a neglected task?

Not very neglected. It’s mildly different from similar efforts such as GiveWell’s website and Reinventing Philanthropy, in ways that will slightly reduce the effort needed to understand the basics of Effective Altruism.

Will it make people more altruistic?

Not very much. It mostly seems to assume that people have some fixed level of altruism, and focuses on improving the benefits that result from that altruism. Maybe it will modestly redirect peer pressure toward making people more altruistic.

Will it make readers more effective?

Probably. For people who haven’t given much thought to these topics, the book’s advice is a clear improvement over standard habits. It will be modestly effective at promoting a culture where charitable donations that save lives are valued more highly than donations which accomplish less.

But I see some risk that it will make people overconfident about the benefits of the book’s specific strategies. An ideal version of the book would instead inspire people to improve on the book’s analysis.

The book provides evidence that donors rarely pay attention to how much good a charity does. Yet it avoids asking why. If you pay attention, you’ll see hints that donors are motivated mainly by the desire to signal something virtuous about themselves (for example, see the book’s section on moral licensing). In spite of that, the book consistently talks as if donors have good intentions, and only need more knowledge to be better altruists.

The book is less rigorous than I had hoped. I’m unsure how much of that is due to reasonable attempts to simplify the message so that more people can understand it with minimal effort.

In a section on robustness of evidence, the book describes this “sanity check”:

“if it cost ten dollars to save a life, then we’d have to suppose that they or their family members couldn’t save up for a few weeks, or take out a loan, in order to pay for the lifesaving product.”

I find it confusing to use this as a sanity check, because it’s all too easy to imagine that many people are in desperate enough conditions that they’re spending their last dollar to avoid starvation.

The book alternates between advocating doing more good (satisficing), and advocating the most possible good (optimizing). In practice, it mostly focuses on safe ways to produce fairly good results.

The book barely mentions existential risks. If it were literally trying to advocate doing the most good possible, it would devote a lot more attention to affecting the distant future. But that’s much harder to do well than what the book does focus on (saving a few more lives in Africa over the next few years), and would involve acts of charity that have small probabilities of really large effects on people who are not yet born.

If you’re willing to spend 50-100 hours (but not more) learning how to be more effective with your altruism, then reading this book is a good start.

But people who are more ambitious ought to be able to make a bigger difference to the world. I encourage those people to skip this book, and focus more on analyzing existential risks.

The stock market reaction to the election was quite strange.

From the first debate through Tuesday, S&P 500 futures showed modest signs of believing that Trump was worse for the market than Clinton. This Wolfers and Zitzewitz study shows some of the relevant evidence.

On Tuesday evening, I followed the futures market and the prediction markets moderately closely, and it looked like there was a very clear correlation between those two markets, strongly suggesting the S&P 500 would be 6 to 8 percent lower under Trump than under Clinton. This correlation did not surprise me.
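The arithmetic behind that inference (my reconstruction of the Wolfers–Zitzewitz approach, with illustrative numbers rather than actual election-night quotes): treat the futures price as a probability-weighted average of a Trump-world price and a Clinton-world price; two simultaneous snapshots of the futures and the prediction market then pin down both state prices.

```python
import numpy as np

# Two hypothetical election-night snapshots:
# (prediction-market P(Trump wins), S&P 500 futures level)
snapshots = [(0.20, 2140.0), (0.80, 2050.0)]

# futures ~= p * S_trump + (1 - p) * S_clinton, so each snapshot
# is one linear equation in the two unknown state prices.
A = np.array([[p, 1 - p] for p, _ in snapshots])
b = np.array([f for _, f in snapshots])
s_trump, s_clinton = np.linalg.solve(A, b)

print(f"implied Trump effect: {s_trump / s_clinton - 1:+.1%}")  # about -7%
```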

This morning, the S&P 500 prices said the market had been just kidding last night, and that Trump is neutral or slightly good for the market.

Part of this discrepancy is presumably due to the difference between regular trading hours and after hours trading. The clearest evidence for market dislike of Trump came from after hours trading, when the most sophisticated traders are off-duty. I’ve been vaguely aware that after hours markets are less efficiently priced. But this appears to involve at least a few hundred million dollars of potential profit, which somewhat stretches the limit of how inefficient the markets could plausibly be.

I see one report of Carl Icahn claiming

I thought it was absurd that the market, the S&P was down 100 points on Trump getting elected … but I couldn’t put more than about a billion dollars to work

I’m unclear what constrained him, but it sure looked like the market could have absorbed plenty more buying while I was watching (up to 10pm PST), so I’ll guess he was more constrained by something related to him being at a party.

But even if the best U.S. traders were too distracted to make the markets efficient, that leaves me puzzled about Asian markets, which were down almost as much as the U.S. market during the middle of the Asian day.

So it’s hard to avoid the conclusion that the market either made a big irrational move, or was reacting to news whose importance I can’t recognize.

I don’t have a strong opinion on which of the market reactions was correct. My intuition says that a market decline of anywhere from 1% to 5% would have been sensible, and I’ve made a few trades reflecting that opinion. I expect that market reactions to news tend to get more rational over time, so I’m now giving a fair amount of weight to the possibility that Trump won’t affect stocks much.