Artificial Intelligence

This post is mostly a response to the Foresight Institute's book Gaming the Future, which is very optimistic about AIs being cooperative. Its authors expect that creating a variety of different AIs will enable us to replicate the checks and balances that the US Constitution created.

I'm also responding in part to points 34 and 35 of Eliezer's AGI lethalities list, which say that we can't survive the creation of powerful AGIs simply by ensuring the existence of many co-equal AGIs with different goals. One of his concerns is that those AGIs will cooperate with each other enough to function as a unitary AGI. Interactions between AGIs might fit the ideal of voluntary cooperation with checks and balances, yet when interacting with humans those AGIs might function as an unchecked government that has little need for humans.

I expect reality to be somewhere in between those two extremes. I can’t tell which of those views is closer to reality. This is a fairly scary uncertainty.

Continue Reading

[Epistemic status: mostly writing to clarify my intuitions, with just a few weak attempts to convince others. It’s no substitute for reading Drexler’s writings.]

I've been struggling to write more posts relating to Drexler's vision for AI (hopefully to be published soon), and in the process got increasingly bothered by the issue of whether AI researchers will see incentives to give AIs broad goals that turn them into agents.

Drexler's CAIS paper convinced me that our current trajectory is somewhat close to a scenario where human-level AIs that are tool-like services are available well before AGIs with broader goals.

Yet when I read LessWrong, I sympathize with beliefs that developers will want quite agenty AGIs around the same time that CAIS-like services reach human levels.

I’m fed up with this epistemic learned helplessness, and this post is my attempt to reconcile those competing intuitions.

Continue Reading

I've been pondering whether we'll get any further warnings about when AI(s) will exceed human levels at general-purpose tasks, and whether crossing that threshold would entail enough risk that AI researchers ought to take some precautions. I feel pretty uncertain about this.

I haven’t even been able to make useful progress at clarifying what I mean by that threshold of general intelligence.

As a weak substitute, I’ve brainstormed a bunch of scenarios describing not-obviously-wrong ways in which people might notice, or fail to notice, that AI is transforming the world.

I’ve given probabilities for each scenario, which I’ve pulled out of my ass and don’t plan to defend.

Continue Reading

Book review: The Alignment Problem: Machine Learning and Human Values, by Brian Christian.

I was initially skeptical of Christian’s focus on problems with AI as it exists today. Most writers with this focus miss the scale of catastrophe that could result from AIs that are smart enough to subjugate us.

Christian mostly writes about problems that are visible in existing AIs. Yet he organizes his discussion of near-term risks in ways that don’t pander to near-sighted concerns, and which nudge readers in the direction of wondering whether today’s mistakes represent the tip of an iceberg.

Most of the book carefully avoids alarmist or emotional tones. It’s hard to tell whether he has an opinion on how serious a threat unaligned AI will be – presumably it’s serious enough to write a book about?

Could the threat be more serious than that implies? Christian notes, without indicating his own opinion, that some people think so:

A growing chorus within the AI community … believes, if we are not sufficiently careful, that this is literally how the world will end. And – for today at least – the humans have lost the game.

Continue Reading

Book review: The Precipice, by Toby Ord.

No, this isn’t about elections. This is about risks of much bigger disasters. It includes the risks of pandemics, but not the kind that are as survivable as COVID-19.

The ideas in this book have mostly been covered before, e.g. in Global Catastrophic Risks (Bostrom and Cirkovic, editors). Ord packages the ideas in a more organized and readable form than prior discussions.

See the Slate Star Codex review of The Precipice for an eloquent summary of the book’s main ideas.

Continue Reading

Book review: Human Compatible, by Stuart Russell.

Human Compatible provides an analysis of the long-term risks from artificial intelligence, by someone with a good deal more of the relevant prestige than any prior author on this subject.

What should I make of Russell? I skimmed his best-known book, Artificial Intelligence: A Modern Approach, and got the impression that it taught a bunch of ideas that were popular among academics, but which weren’t the focus of the people who were getting interesting AI results. So I guessed that people would be better off reading Deep Learning by Goodfellow, Bengio, and Courville instead. Human Compatible neither confirms nor dispels the impression that Russell is a bit too academic.

However, I now see that he was one of the pioneers of inverse reinforcement learning, which looks like a fairly significant advance that will likely become important someday (if it hasn’t already). So I’m inclined to treat him as a moderately good authority on AI.

The first half of the book is a somewhat historical view of AI, intended for readers who don’t know much about AI. It’s ok.

Continue Reading

Robin Hanson has been suggesting recently that we’ve been experiencing an AI boom that’s not too different from prior booms.

At the recent Foresight Vision Weekend, he predicted [not exactly – see the comments] a 20% decline in the number of DeepMind employees over the next year (Foresight asked all speakers to make a 1-year prediction).

I want to partly agree and partly disagree.

Continue Reading

Book review: The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World, by Tom Chivers.

This book is a sympathetic portrayal of the rationalist movement by a quasi-outsider. It includes a well-organized explanation of why some people expect that AI will create large risks sometime this century, written in simple language that is suitable for a broad audience.

Caveat: I know many of the people who are described in the book. I’ve had some sort of connection with the rationalist movement since before it became distinct from transhumanism, and I’ve been mostly an insider since 2012. I read this book mainly because I was interested in how the rationalist movement looks to outsiders.

Chivers is a science writer. I normally avoid books by science writers, due to an impression that they mostly focus on telling interesting stories, without developing a deep understanding of the topics they write about.

Chivers’ understanding of the rationalist movement doesn’t quite qualify as deep, but he was surprisingly careful to read a lot about the subject, and to write only things he did understand.

Many times I reacted to something he wrote with "that's close, but not quite right". Usually when I reacted that way, Chivers did a good job of describing the rationalist message in question, and the main problem was either that rationalists haven't figured out how to explain their ideas in a way that a broad audience can understand, or that rationalists are confused. So the complaints I make in the rest of this review are at most weakly directed in Chivers' direction.

I saw two areas where Chivers overlooked something important.

Rationality

One involves CFAR.

Chivers wrote seven chapters on biases, and how rationalists view them, ending with “the most important bias”: knowing about biases can make you more biased. (italics his).

I get the impression that Chivers is sweeping this problem under the rug (Do we fight that bias by being aware of it? Didn’t we just read that that doesn’t work?). That is roughly what happened with many people who learned rationalism solely via written descriptions.

Then much later, when describing how he handled his conflicting attitudes toward the risks from AI, he gives a really great description of maybe 3% of what CFAR teaches (internal double crux), much like a blind man giving a really clear description of the upper half of an elephant’s trunk. He prefaces this narrative with the apt warning: “I am aware that this all sounds a bit mystical and self-helpy. It’s not.”

Chivers doesn’t seem to connect this exercise with the goal of overcoming biases. Maybe he was too busy applying the technique on an important problem to notice the connection with his prior discussions of Bayes, biases, and sanity. It would be reasonable for him to argue that CFAR’s ideas have diverged enough to belong in a separate category, but he seems to put them in a different category by accident, without realizing that many of us consider CFAR to be an important continuation of rationalists’ interest in biases.

World conquest

Chivers comes very close to covering all of the layman-accessible claims that Yudkowsky and Bostrom make. My one complaint here is that he only gives vague hints about why one bad AI can't be stopped by other AIs.

A key claim of many leading rationalists is that AI will have some winner-take-all dynamics that will lead to one AI having a decisive strategic advantage after it crosses some key threshold, such as human-level intelligence.

This is a controversial position that is somewhat connected to foom (fast takeoff), but which might be correct even without foom.

Utility functions

“If I stop caring about chess, that won’t help me win any chess games, now will it?” – That chapter title provides a good explanation of why a simple AI would continue caring about its most fundamental goals.

Is that also true of an AI with more complex, human-like goals? Chivers is partly successful at explaining how to apply the concept of a utility function to a human-like intelligence. Rationalists (or at least those who actively research AI safety) have a clear meaning here, at least as applied to agents that can be modeled mathematically. But when laymen try to apply that to humans, confusion abounds, due to the ease of conflating subgoals with ultimate goals.
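To make that distinction concrete, here is a toy sketch of my own (not something from the book or from any rationalist source): a chess agent whose utility function rewards only winning. "Gain material" emerges as an instrumental subgoal, pursued solely because it raises the expected utility of winning; the win_probability model and the numbers are made up for illustration.

```python
# Toy illustration (my own, not from the book): utility function vs. subgoal.
# The agent's terminal goal is winning at chess; "gain material" is only an
# instrumental subgoal, valued because it raises the probability of winning.

def utility(outcome: str) -> float:
    """Terminal goal: the agent cares only about winning."""
    return 1.0 if outcome == "win" else 0.0

def win_probability(material_advantage: int) -> float:
    """Made-up model: more material makes winning more likely."""
    return min(1.0, 0.5 + 0.1 * material_advantage)

def expected_utility(material_advantage: int) -> float:
    p = win_probability(material_advantage)
    return p * utility("win") + (1 - p) * utility("loss")

# The agent pursues material only because doing so raises expected utility.
# If capturing a piece lowered the chance of winning, the subgoal would be
# abandoned, while the utility function itself would stay unchanged.
print(expected_utility(0))  # 0.5
print(expected_utility(3))  # 0.8
```

The layman's confusion described above amounts to treating "gain material" as part of the utility function, rather than as something derived from it.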

Chivers tries to clarify, using the story of Odysseus and the Sirens, and claims that the Sirens would rewrite Odysseus’ utility function. I’m not sure how we can verify that the Sirens work that way, or whether they would merely persuade Odysseus to make false predictions about his expected utility. Chivers at least states clearly that the Sirens try to prevent Odysseus (by making him run aground) from doing what his pre-Siren utility function advises. Chivers’ point could be a bit clearer if he specified that in his (nonstandard?) version of the story, the Sirens make Odysseus want to run aground.

Philosophy

“Essentially, he [Yudkowsky] (and the Rationalists) are thoroughgoing utilitarians.” – That’s a bit misleading. Leading rationalists are predominantly consequentialists, but mostly avoid committing to a moral system as specific as utilitarianism. Leading rationalists also mostly endorse moral uncertainty. Rationalists mostly endorse utilitarian-style calculation (which entails some of the controversial features of utilitarianism), but are careful to combine that with worry about whether we’re optimizing the quantity that we want to optimize.

I also recommend Utilitarianism and its discontents as an example of one rationalist’s nuanced partial endorsement of utilitarianism.

Political solutions to AI risk?

Chivers describes Holden Karnofsky as wanting “to get governments and tech companies to sign treaties saying they’ll submit any AGI designs to outside scrutiny before switching them on. It wouldn’t be iron-clad, because firms might simply lie”.

Most rationalists seem pessimistic about treaties such as this.

Lying is hardly the only problem. This idea assumes that there will be a tiny number of attempts, each with a very small number of launches that look like the real thing, as happened with the first moon landing and the first atomic bomb. Yet the history of software development suggests it will be something more like hundreds of attempts that look like they might succeed. I wouldn’t be surprised if there are millions of times when an AI is turned on, and the developer has some hope that this time it will grow into a human-level AGI. There’s no way that a large number of designs will get sufficient outside scrutiny to be of much use.

And if a developer is trying new versions of their system once a day (e.g. making small changes to a number that controls, say, openness to new experience), any requirement to submit all new versions for outside scrutiny would cause large delays, creating large incentives to subvert the requirement.

So any realistic treaty would need provisions that identify a relatively small set of design choices that need to be scrutinized.

I see few signs that any experts are close to developing a consensus about what criteria would be appropriate here, and I expect that doing so would require a significant fraction of the total wisdom needed for AI safety. I discussed my hope for one such criterion in my review of Drexler’s Reframing Superintelligence paper.

Rationalist personalities

Chivers mentions several plausible explanations for what he labels the “semi-death of LessWrong”, the most obvious being that Eliezer Yudkowsky finished most of the blogging that he had wanted to do there. But I’m puzzled by one explanation that Chivers reports: “the attitude … of thinking they can rebuild everything”. Quoting Robin Hanson:

At Xanadu they had to do everything different: they had to organize their meetings differently and orient their screens differently and hire a different kind of manager, everything had to be different because they were creative types and full of themselves. And that’s the kind of people who started the Rationalists.

That seems like a partly apt explanation for the demise of the rationalist startups MetaMed and Arbital. But LessWrong mostly copied existing sites, such as Reddit, and was only ambitious in the sense that Eliezer was ambitious about what ideas to communicate.

Culture

I guess a book about rationalists can't resist mentioning polyamory. "For instance, for a lot of people it would be difficult not to be jealous." Yes, when I lived in a mostly monogamous culture, jealousy seemed pretty standard. That attitude melted away when the Bay Area cultures that I associated with started adopting polyamory or something similar (shortly before the rationalists became a culture). Jealousy has much more purpose if my partner is flirting with monogamous people than if he's flirting with polyamorists.

Less dramatically, "We all know people who are afraid of visiting their city centres because of terrorist attacks, but don't think twice about driving to work."

This suggests some weird filter bubbles somewhere. I thought that fear of cities got forgotten within a month or so after 9/11. Is this a difference between London and the US? Am I out of touch with popular concerns? Does Chivers associate more with paranoid people than I do? I don’t see any obvious answer.

Conclusion

It would be really nice if Chivers and Yudkowsky could team up to write a book, but this book is a close substitute for such a collaboration.

See also Scott Aaronson’s review.

Book review: Prediction Machines: The Simple Economics of Artificial Intelligence, by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.

Three economists decided to write about AI. They got excited about AI, and that distracted them enough that they only said a modest amount about the standard economics principles that laymen need to better understand. As a result, the book ended up mostly being simple descriptions of topics on which the authors had limited expertise. I noticed fewer amateurish mistakes than I expected from this strategy, and they mostly ended up doing a good job of describing AI in ways that are mildly helpful to laymen who only want a very high-level view.

The book’s main goal is to advise business on how to adopt current types of AI (“reading this book is almost surely an excellent predictor of being a manager who will use prediction machines”), with a secondary focus on how jobs will be affected by AI.

The authors correctly conclude that a modest extrapolation of current trends implies at most some short-term increases in unemployment.

Continue Reading

Eric Drexler has published a book-length paper on AI risk, describing an approach that he calls Comprehensive AI Services (CAIS).

His primary goal seems to be reframing AI risk discussions to use a rather different paradigm than the one that Nick Bostrom and Eliezer Yudkowsky have been promoting. (There isn’t yet any paradigm that’s widely accepted, so this isn’t a Kuhnian paradigm shift; it’s better characterized as an amorphous field that is struggling to establish its first paradigm). Dueling paradigms seems to be the best that the AI safety field can manage to achieve for now.

I’ll start by mentioning some important claims that Drexler doesn’t dispute:

  • an intelligence explosion might happen somewhat suddenly, in the fairly near future;
  • it’s hard to reliably align an AI’s values with human values;
  • recursive self-improvement, as imagined by Bostrom / Yudkowsky, would pose significant dangers.

Drexler likely disagrees about some of the claims made by Bostrom / Yudkowsky on those points, but he shares enough of their concerns about them that those disagreements don’t explain why Drexler approaches AI safety differently. (Drexler is more cautious than most writers about making any predictions concerning these three claims).

CAIS isn't a full solution to AI risks. Instead, it's better thought of as an attempt to reduce the risk of world conquest by the first AGI that reaches some threshold, preserve existing corrigibility somewhat past human-level AI, and postpone the need for a permanent solution until we have more intelligence.

Continue Reading