
Book review: Inadequate Equilibria, by Eliezer Yudkowsky.

This book (actually halfway between a book and a series of blog posts) attacks the goal of epistemic modesty, which I’ll loosely summarize as reluctance to believe that one knows better than the average person.

1.

The book starts by focusing on the base rate for high-status institutions having harmful incentive structures, charting a middle ground between the excessive respect for those institutions that we see in mainstream sources, and the cynicism of most outsiders.

There’s a weak sense in which this is arrogant, namely that if it were obvious to the average voter how to improve on these problems, then I’d expect the problems to be fixed. So people who claim to detect such problems ought to have decent evidence that they’re above average in the relevant skills. There are plenty of people who can rationally decide that this applies to them. (Eliezer doubts that advising the rest to be modest will help; I suspect there are useful approaches to instilling modesty in people who should be more modest, but it’s not easy). Also, below-average people rarely seem to be attracted to Eliezer’s writings.

Later parts of the book focus on more personal choices, such as choosing a career.

Some parts of the book seem designed to show off Eliezer’s lack of need for modesty – sometimes successfully, sometimes leaving me suspecting he should be more modest (usually in ways that are somewhat orthogonal to his main points; e.g. his complaints about “reference class tennis” suggest overconfidence in his understanding of his debate opponents).

2.

Eliezer goes a bit overboard in attacking the outside view. He starts with legitimate complaints about people misusing it to justify rejecting theory and adopting “blind empiricism” (a mistake that I’ve occasionally made). But he partly rejects the advice that Tetlock gives in Superforecasting. I’m pretty sure Tetlock knows more about this domain than Eliezer does.

E.g. Eliezer says “But in novel situations where causal mechanisms differ, the outside view fails—there may not be relevantly similar cases, or it may be ambiguous which similar-looking cases are the right ones to look at”, but Tetlock says ‘Nothing is 100% “unique” … So superforecasters conduct creative searches for comparison classes even for seemingly unique events’.

Compare Eliezer’s “But in many contexts, the outside view simply can’t compete with a good theory” with Tetlock’s commandment number 3 (“Strike the right balance between inside and outside views”). Eliezer seems to treat the approaches as antagonistic, whereas Tetlock advises us to find a synthesis in which the approaches cooperate.

3.

Eliezer provides a decent outline of what causes excess modesty. He classifies the two main failure modes as anxious underconfidence, and status regulation. Anxious underconfidence definitely sounds like something I’ve felt somewhat often, and status regulation seems pretty plausible, but harder for me to detect.

Eliezer presents a clear model of why status regulation exists, but his explanation for anxious underconfidence doesn’t seem complete. Here are some of my ideas about possible causes of anxious underconfidence:

  • People evaluate mistaken career choices and social rejection as if they meant death (which was roughly true until quite recently), so extreme risk aversion made sense;
  • Inaction (or choosing the default action) minimizes blame. If I carefully consider an option, my choice says more about my future actions than if I neglect to think about the option;
  • People often evaluate their success at life by counting the number of correct and incorrect decisions, rather than adding up the value produced;
  • People who don’t grok the Bayesian meaning of the word “evidence” are likely to privilege the scientific and legal meanings of evidence. So beliefs based on more subjective evidence get treated as second-class citizens.

I suspect that most harm from excess modesty (and also arrogance) happens in evolutionarily novel contexts. Decisions such as creating a business plan for a startup, or writing a novel that sells a million copies, are sufficiently different from what we evolved to do that we should expect over/underconfidence to cause more harm.

4.

Another way to summarize the book would be: don’t aim to overcompensate for overconfidence; instead, aim to eliminate the causes of overconfidence.

This book will be moderately popular among Eliezer’s fans, but it seems unlikely to greatly expand his influence.

It didn’t convince me that epistemic modesty is generally harmful, but it does provide clues to identifying significant domains in which epistemic modesty causes important harm.

The paper When Will AI Exceed Human Performance? Evidence from AI Experts reports that ML researchers assign a 5% chance to “Extremely bad (e.g. human extinction)” consequences from AI, yet they’re quite divided over whether that implies it’s an important problem to work on.

Slate Star Codex expresses confusion about and/or disapproval of (a slightly different manifestation of) this apparent paradox. It’s a pretty clear sign that something is suboptimal.

Here are some conjectures (not designed to be at all mutually exclusive).

A new paper titled When Will AI Exceed Human Performance? Evidence from AI Experts reports some bizarre results. From the abstract:

Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans.

So we should expect a 75-year period in which machines can perform all tasks better and more cheaply than humans, but can’t automate all occupations. Huh?

I suppose there are occupations that consist mostly of having status rather than doing tasks (queen of England, or waiter at a classy restaurant that won’t automate service due to the high status of serving food the expensive way). Or occupations protected by law, such as gas station attendants who pump gas in New Jersey, decades after most drivers switched to pumping for themselves.

But I’d be rather surprised if machine learning researchers would think of those points when answering a survey in connection with a machine learning conference.

Maybe the actual wording of the survey questions caused a difference that got lost in the abstract? Hmmm …

“High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers

versus

when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers.

I tried to convince myself that the second version got interpreted as referring to actually replacing humans, while the first version referred to merely being qualified to replace humans. But the more I compared the two, the more that felt like wishful thinking. If anything, the “unaided” in the first version should make that version look farther in the future.

Can I find any other discrepancies between the abstract and the details? The 120 years in the abstract turns into 122 years in the body of the paper. So the authors seem to be downplaying the weirdness of the results.

There’s even a prediction of a 50% chance that the occupation “AI researcher” will be automated in about 88 years (I’m reading that from figure 2; I don’t see an explicit number for it). I suspect some respondents said this would take longer than for machines to “accomplish every task better and more cheaply”, but I don’t see data in the paper to confirm that [1].

A more likely hypothesis is that researchers alter their answers based on what they think people want to hear. Researchers might want to convince their funders that AI deals with problems that can be solved within the career of the researcher [2], while also wanting to reassure voters that AI won’t create massive unemployment until the current generation of workers has retired.

That would explain the general pattern of results, although the magnitude of the effect still seems strange. And it would imply that most machine learning researchers are liars, or have so little understanding of when HLMI will arrive that they don’t notice a 50% shift in their time estimates.

The ambiguity in terms such as “tasks” and “better” could conceivably explain confusion over the meaning of HLMI. I keep intending to write a blog post that would clarify concepts such as human-level AI and superintelligence, but then procrastinating because my thoughts on those topics are unclear.

It’s hard to avoid the conclusion that I should reduce my confidence in any prediction of when AI will reach human-level competence. My prior 90% confidence interval was something like 10 to 300 years. I guess I’ll broaden it to maybe 8 to 400 years [3].

P.S. – See also Katja’s comments on prior surveys.

[1] – the paper says most participants were asked the question that produced the estimate of 45 years to HLMI, the rest got the question that produced the 122 year estimate. So the median for all participants ought to be less than about 84 years, unless there are some unusual quirks in the data.
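A back-of-the-envelope check of that bound (my own illustration; the 45- and 122-year medians come from the paper, but the split fractions are placeholders):

```python
# Footnote [1]'s arithmetic: if a fraction of respondents answered the
# HLMI framing (median ~45 years) and the rest the occupations framing
# (median ~122 years), an even split would put a simple pooled summary
# at the midpoint. With most respondents in the 45-year group, the
# pooled figure should fall below that midpoint.

def pooled_estimate(median_low, median_high, frac_low):
    """Weighted average of the two group medians. This is only a crude
    proxy for the true pooled median, which depends on the full
    distributions, but it captures the direction of the bound."""
    return frac_low * median_low + (1 - frac_low) * median_high

print(pooled_estimate(45, 122, 0.5))  # 83.5, the ~84 years in the text
print(pooled_estimate(45, 122, 0.7))  # most got the 45-year question
```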

[2] – but then why do experienced researchers say human-level AI is farther in the future than new researchers do, given that the new researchers will presumably be around longer? Maybe the new researchers are chasing fads or get-rich-quick schemes, and will mostly quit before becoming senior researchers?

[3] – years of subjective time as experienced by the fastest ems. So probably nowhere near 400 calendar years.

I’ve recently noticed some possibly important confusion about machine learning (ML)/deep learning. I’m quite uncertain how much harm the confusion will cause.

On MIRI’s Intelligent Agent Foundations Forum:

If you don’t do cognitive reductions, you will put your confusion in boxes and hide the actual problem. … E.g. if neural networks are used to predict math, then the confusion about how to do logical uncertainty is placed in the black box of “what this neural net learns to do”

On SlateStarCodex:

Imagine a future inmate asking why he was denied parole, and the answer being “nobody knows and it’s impossible to find out even in principle” … (DeepMind employs a Go master to help explain AlphaGo’s decisions back to its own programmers, which is probably a metaphor for something)

A possibly related confusion, from a conversation that I observed recently: philosophers have tried to understand how concepts work for centuries, but have made little progress; therefore deep learning isn’t very close to human-level AGI.

I’m unsure whether any of the claims I’m criticizing reflect actually mistaken beliefs, or whether they’re just communicated carelessly. I’m confident that at least some people at MIRI are wise enough to avoid this confusion [1]. I’ve omitted some ensuing clarifications from my description of the deep learning conversation – maybe if I remembered those sufficiently well, I’d see that I was reacting to a straw man of that discussion. But it seems likely that some people were misled by at least the SlateStarCodex comment.

There’s an important truth that people refer to when they say that neural nets (and machine learning techniques in general) are opaque. But that truth gets seriously obscured when rephrased as “black box” or “impossible to find out even in principle”.

Book review: The Rationality Quotient: Toward a Test of Rational Thinking, by Keith E. Stanovich, Richard F. West and Maggie E. Toplak.

This book describes an important approach to measuring individual rationality: an RQ test that loosely resembles an IQ test. But it pays inadequate attention to the most important problems with tests of rationality.

Coachability

My biggest concern about rationality testing is what happens when people anticipate the test and are motivated to maximize their scores (as is the case with IQ tests). Do they:

  • learn to score high by “cheating” (i.e. learn what answers the test wants, without learning to apply that knowledge outside of the test)?
  • learn to score high by becoming more rational?
  • not change their score much, because they’re already motivated to do as well as their aptitudes allow (as is mostly the case with IQ tests)?

Alas, the book treats these issues as an afterthought. Their test knowingly uses questions for which cheating would be straightforward, such as asking whether the test subject believes in science, and whether they prefer to get $85 now rather than $100 in three months. (If they could use real money, that would drastically reduce my concerns about cheating. I’m almost tempted to advocate doing that, but doing so would hinder widespread adoption of the test, even if using real money added enough value to pay for itself.)
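The $85-now question bakes in an extreme discount rate, which is part of what makes the “rational” answer easy to look up. A quick sketch of the implied annualized rate (my own arithmetic, not from the book):

```python
# Implied annualized discount rate for someone who prefers $85 now
# over $100 in three months: they are acting as if money loses value
# at roughly 92% per year, far above any realistic interest rate.

def implied_annual_rate(amount_now, amount_later, months):
    periods_per_year = 12 / months
    return (amount_later / amount_now) ** periods_per_year - 1

print(f"{implied_annual_rate(85, 100, 3):.0%}")
```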


I’ve substantially reduced my anxiety over the past 5-10 years.

Many of the important steps along that path look easy in hindsight, yet the overall goal looked sufficiently hard prospectively that I usually assumed it wasn’t possible. I only ended up making progress by focusing on related goals.

In this post, I’ll mainly focus on problems related to general social anxiety among introverted nerds. It will probably be much less useful to others.

In particular, I expect it doesn’t apply very well to ADHD-related problems, and I have little idea how well it applies to the results of specific PTSD-type trauma.

It should be slightly useful for anxiety over politicians who are making America grate again. But you’re probably fooling yourself if you blame many of your problems on distant strangers.



Why do people knowingly follow bad investment strategies?

I won’t ask (in this post) about why people hold foolish beliefs about investment strategies. I’ll focus on people who intend to follow a decent strategy, and fail. I’ll illustrate this with a stereotype from a behavioral economist (Procrastination in Preparing for Retirement):[1]

For instance, one of the authors has kept an average of over $20,000 in his checking account over the last 10 years, despite earning an average of less than 1% interest on this account and having easy access to very liquid alternative investments earning much more.

A more mundane example is a person who holds most of their wealth in stock of a single company, for reasons of historical accident (they acquired it via employee stock options or inheritance), but admits to preferring a more diversified portfolio.

An example from my life is that, until this year, I often borrowed money from Schwab to buy stock, when I could have borrowed at lower rates in my Interactive Brokers account to do the same thing. (Partly due to habits that I developed while carelessly unaware of the difference in rates; partly due to a number of trivial inconveniences).

Behavioral economists are somewhat correct to attribute such mistakes to questionable time discounting. But I see more patterns than such a model can explain – e.g. people procrastinate more over some decisions (whether to make a “boring” trade) than others (whether to read news about investments).[2]

Instead, I use CFAR-style models that focus on conflicting motives of different agents within our minds.


One of most important assumptions in The Age of Ems is that non-em AGI will take a long time to develop.

1.

Scott Alexander at SlateStarCodex complains that Robin rejects survey data that uses validated techniques, and instead uses informal surveys whose results better fit Robin’s biases [1]. Robin clearly explains one reason why he does that: to get the outside view of experts.

Whose approach to avoiding bias is better?

  • Minimizing sampling error and carefully documenting one’s sampling technique are two of the most widely used criteria to distinguish science from wishful thinking.
  • Errors due to ignoring the outside view have been documented to be large, yet forecasters are reluctant to use the outside view.

So I rechecked advice from forecasting experts such as Philip Tetlock and Nate Silver, and the clear answer I got was … that was the wrong question.

Tetlock and Silver mostly focus on attitudes that are better captured by the advice to be a fox, not a hedgehog.

The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement.

Tetlock’s commandment number 3 says “Strike the right balance between inside and outside views”. Neither Tetlock nor Silver offers hope that either more rigorous sampling of experts or dogmatically choosing the outside view over the inside view will help us win a forecasting contest.

So instead of asking who is right, we should be glad to have two approaches to ponder, and should want more. (Robin only uses one approach for quantifying the time to non-em AGI, but is more fox-like when giving qualitative arguments against fast AGI progress).

2.

What Robin downplays is that there’s no consensus of the experts on whom he relies, not even about whether progress is steady, accelerating, or decelerating.

Robin uses the median expert estimate of progress in various AI subfields. This makes sense if AI progress depends on success in many subfields. It makes less sense if success in one subfield can make the other subfields obsolete. If “subfield” means a guess about what strategy best leads to intelligence, then I expect the median subfield to be rendered obsolete by a small number of good subfields [2]. If “subfield” refers to a subset of tasks that AI needs to solve (e.g. vision, or natural language processing), then it seems reasonable to look at the median (and I can imagine that slower subfields matter more). Robin appears to use both meanings of “subfield”, with fairly similar results for each, so it’s somewhat plausible that the median is informative.
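To make the aggregation issue concrete, here’s a toy comparison (the subfield timelines are entirely made up):

```python
# If success in one subfield can render the others obsolete, the
# effective timeline is set by the fastest subfield (the minimum).
# If AI needs progress in every subfield, the slower ones dominate,
# and the median (or even the maximum) is the relevant summary.

import statistics

subfield_years = [15, 40, 60, 90, 200]  # hypothetical estimates

print("one subfield suffices:", min(subfield_years))
print("median across subfields:", statistics.median(subfield_years))
print("every subfield required:", max(subfield_years))
```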

3.

Scott also complains that Robin downplays the importance of research spending while citing only a paper dealing with government funding of agricultural research. But Robin also cites another paper (Ulku 2004), which covers total R&D expenditures in 30 countries (versus 16 countries in the paper that Scott cites) [3].

4.

Robin claims that AI progress will slow (relative to economic growth) due to slowing hardware progress and reduced dependence on innovation. Even if I accept Robin’s claims about these factors, I have trouble believing that AI progress will slow.

I expect higher em IQ will be one factor that speeds up AI progress. Garrett Jones suggests that a 40 IQ point increase in intelligence causes a 50% increase in a country’s productivity. I presume that AI researcher productivity is more sensitive to IQ than is, say, truck driver productivity. So it seems fairly plausible to imagine that increased em IQ will cause more than a factor of two increase in the rate of AI progress. (Robin downplays the effects of IQ in contexts where a factor of two wouldn’t much affect his analysis; he appears to ignore them in this context).
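A minimal sketch of that compounding arithmetic, assuming the Garrett Jones figure (1.5× productivity per 40 IQ points) extrapolates geometrically; the 80-point gain is my own illustrative choice, not a claim from the book:

```python
# If each 40-point IQ gain multiplies productivity by 1.5, gains
# compound: an 80-point gain gives 1.5**2 = 2.25, i.e. more than
# the factor of two mentioned in the text.

def productivity_multiplier(iq_gain, per_40_points=1.5):
    return per_40_points ** (iq_gain / 40)

print(productivity_multiplier(40))  # 1.5
print(productivity_multiplier(80))  # 2.25
```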

I expect that other advantages of ems will contribute additional speedups – maybe ems who work on AI will run relatively fast, maybe good training/testing data will be relatively cheap to create, or maybe knowledge from experimenting on ems will better guide AI research.

5.

Robin’s arguments against an intelligence explosion are weaker than they appear. I mostly agree with those arguments, but I want to discourage people from having strong confidence in them.

The most suspicious of those arguments is that gains in software algorithmic efficiency “remain surprisingly close to the rate at which hardware costs have fallen. This suggests that algorithmic gains have been enabled by hardware gains”. He cites only (Grace 2013) in support of this. That paper doesn’t comment on whether hardware changes enable software changes. The evidence seems equally consistent with that or with the hypothesis that both are independently caused by some underlying factor. I’d say there’s less than a 50% chance that Robin is correct about this claim.

Robin lists 14 other reasons for doubting there will be an intelligence explosion: two claims about AI history (no citations), eight claims about human intelligence (one citation), and four about what causes progress in research (with the two citations mentioned earlier). Most of those 14 claims are probably true, but it’s tricky to evaluate their relevance.

Conclusion

I’d say there’s maybe a 15% chance that Robin is basically right about the timing of non-em AI given his assumptions about ems. His book is still pretty valuable if an em-dominated world lasts for even one subjective decade before something stranger happens. And “something stranger happens” doesn’t necessarily mean his analysis becomes obsolete.

Footnotes

[1] – I can’t find any SlateStarCodex complaint about Bostrom doing something similar in Superintelligence: Bostrom’s survey of experts shows an expected time of decades for human-level AI to become superintelligent, yet Bostrom wants to focus on a much faster takeoff scenario, and disagrees with the experts, without identifying reasons for thinking his approach reduces biases.

[2] – One example is that genetic algorithms are looking fairly obsolete compared to neural nets, now that they’re being compared on bigger problems than when genetic algorithms were trendy.

Robin wants to avoid biases from recent AI fads by looking at subfields as they were defined 20 years ago. Some recent changes in AI are fads, but some are increased wisdom. I expect many subfields to be dead ends, given how immature AI was 20 years ago (and may still be today).

[3] – Scott quotes from one of three places that Robin mentions this subject (an example of redundancy that is quite rare in the book), and that’s the one place out of three where Robin neglects to cite (Ulku 2004). Age of Em is the kind of book where it’s easy to overlook something important like that if you don’t read it more carefully than you’d read a normal book.

I tried comparing (Ulku 2004) to the OECD paper that Scott cites, and failed to figure out whether they disagree. The OECD paper is probably consistent with Robin’s “less than proportionate increases” claim that Scott quotes. But Scott’s doubts are partly about Robin’s bolder prediction that AI progress will slow down, and academic papers don’t help much in evaluating that prediction.

If you’re tempted to evaluate how well the Ulku paper supports Robin’s views, beware that this quote is one of its easier to understand parts:

In addition, while our analysis lends support for endogenous growth theories in that it confirms a significant relationship between R&D stock and innovation, and between innovation and per capita GDP, it lacks the evidence for constant returns to innovation in terms of R&D stock. This implies that R&D models are not able to explain sustainable economic growth, i.e. they are not fully endogenous.

Book review: The Charisma Myth: How Anyone Can Master the Art and Science of Personal Magnetism, by Olivia Fox Cabane.

This book provides clear and well-organized instructions on how to become more charismatic.

It does not make the process sound easy. My experience with some of her suggestions (gratitude journalling and meditation) seems typical of her ideas – they took a good deal of attention, and probably caused gradual improvements in my life, but the effects were subtle enough to leave lots of uncertainty about how effective they were.

Many parts of the book talk as if more charisma is clearly better, but occasionally she talks about downsides such as being convincing even when you’re wrong. The chapter that distinguishes four types of charisma (focus, kindness, visionary, and authority) helped me clarify what I want and don’t want from charisma. Yet I still feel a good deal of conflict about how much charisma I want, due to doubts about whether I can separate the good from the bad. I’ve had some bad experiences where feeling and sounding confident about specific stocks caused me to lose money by holding those stocks too long. I don’t think I can increase my visionary or authority charisma without repeating that kind of mistake, unless I can somehow avoid talking about investments when I turn on those types of charisma.

I’ve been trying the exercises that are designed to boost self-compassion, but my doubts about the effort required for good charisma, and about the desirability of being charismatic, have limited the energy I’m willing to put into them.

Book review: Bonds That Make Us Free: Healing Our Relationships, Coming to Ourselves, by C. Terry Warner.

This book consists mostly of well-written anecdotes demonstrating how to recognize common kinds of self-deception and motivated cognition that cause friction in interpersonal interactions. He focuses on ordinary motives that lead to blaming others for disputes in order to avoid blaming ourselves.

He shows that a willingness to accept responsibility for negative feelings about personal relationships usually makes everyone happier, by switching from zero-sum or negative-sum competitions to cooperative relationships.

He describes many examples where my gut reaction is that person B has done something that justifies person A’s decision to get upset, and then explains why person A should act nicer. He does this without the “don’t be judgmental” attitude that often accompanies advice to be more understanding.

Most of the book focuses on the desire to blame others when something goes wrong, but he also notes that blaming nature (or oneself) can produce similar problems and have similar solutions. That insight describes me better than the typical anecdotes do, and has been a bit of help at enabling me to stop wasting effort fighting reality.

I expect that there are a moderate number of abusive relationships where the book’s advice would be counterproductive, but that most people (even many who have apparently abusive spouses or bosses) will be better off following the book’s advice.