Book review: Deep Utopia: Life and Meaning in a Solved World, by Nick Bostrom.

Bostrom’s previous book, Superintelligence, triggered expressions of concern. In his latest work, he describes his hopes for the distant future, presumably to limit the risk that fear of AI will lead to a Butlerian Jihad-like scenario.

While Bostrom is relatively cautious about endorsing specific features of a utopia, he clearly expresses his dissatisfaction with the current state of the world. For instance, in a footnoted rant about preserving nature, he writes:

Imagine that some technologically advanced civilization arrived on Earth … Imagine they said: “The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads … What a tragedy if this rich natural diversity were replaced with a monoculture of healthy, happy, well-fed people living in peace and harmony.” … this would be appallingly callous.

The book begins as if addressing a broad audience, then drifts into philosophy that seems obscure, leading me to wonder if it’s intended as a parody of aimless academic philosophy.

Continue Reading

I’ve been dedicating a fair amount of my time recently to investigating whole brain emulation (WBE).

As computational power continues to grow, emulating a human brain at a reasonable speed looks increasingly feasible.

While the connectome data alone seems insufficient to fully capture and replicate human behavior, recent advancements in scanning technology have provided valuable insights into distinguishing different types of neural connections. I’ve heard suggestions that combining this neuron-scale data with higher-level information, such as fMRI or EEG, might hold the key to unlocking WBE. However, the evidence is not yet conclusive enough for me to make any definitive statements.

I’ve heard some talk about a new company aiming to achieve WBE within the next five years. While this timeline aligns suspiciously well with the typical venture capital horizon for industries with weak patent protection, I believe there is a non-negligible chance of success within the next decade, perhaps exceeding 10%. As a result, I’m actively exploring investment opportunities in this company.

There has also been speculation about the potential of WBE to aid in AI alignment efforts. However, I remain skeptical about this prospect. For WBE to make a significant impact on AI alignment, it would require not only an acceleration in WBE progress, but also either a slowdown in AI capability advances as they approach human levels, or the assumption that the primary risks from AI emerge only when it substantially surpasses human intelligence.

My primary motivation for delving into WBE stems from a personal desire to upload my own mind. The benefits of WBE for those who choose not to upload are less clear, and I don’t know how to predict its broader societal implications.

Here are some videos that influenced my recent increase in interest. Note that I’m relying heavily on the reputations of the speakers when deciding how much weight to give their opinions.

Some relevant prediction markets:

Additionally, I’ve been working on some of the suggestions mentioned in the first video. I’m sharing my code and analysis on Colab. My aim is to evaluate the resilience of language models to the types of errors that might occur during the brain scanning process. While the results provide some reassurance, their value depends heavily on assumptions about the importance of low-confidence guesses made by the emulated mind.
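For readers who want the flavor of the approach without opening the notebook, here is a minimal sketch of one way to run such a robustness test: add Gaussian noise to a small language model’s weights and watch how its perplexity degrades. The model (GPT-2 via Hugging Face transformers), the noise scales, and the test text are placeholder assumptions, not the actual choices in my notebook.

```python
# Sketch: perturb a language model's weights with Gaussian noise and
# measure how much its perplexity degrades, as a rough proxy for the
# random per-connection errors a brain scan might introduce.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
text = "The quick brown fox jumps over the lazy dog. " * 20
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)

def perplexity(model):
    # Perplexity of the fixed test text under the given model.
    with torch.no_grad():
        loss = model(input_ids, labels=input_ids).loss
    return torch.exp(loss).item()

def perturb(model, noise_scale):
    # Add Gaussian noise to every weight tensor, scaled to that tensor's
    # standard deviation, as a crude stand-in for proportional scan errors.
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * p.std() * noise_scale)

baseline = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()
print(f"baseline perplexity: {perplexity(baseline):.1f}")
for scale in (0.01, 0.05, 0.10):
    noisy = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()
    perturb(noisy, scale)
    print(f"noise scale {scale:.2f}: perplexity {perplexity(noisy):.1f}")
```

Scaling the noise to each tensor’s own standard deviation is meant to mimic errors proportional to connection strength; other error models, such as dropped connections or mislabeled connection types, would call for different perturbation functions.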

Manifold Markets is a prediction market platform where I’ve been trading since September. This post will compare it to other prediction markets that I’ve used.

Play Money

The most important fact about Manifold is that traders bet mana, which is for most purposes not real money. You can buy mana with real money, and you can use mana to donate real money to charity, but that’s not attractive enough for most of us to treat it as anything other than play money.

Play money has the important advantage of not being subject to CFTC regulation or gambling laws. That enables a good deal of innovation that is stifled in real-money platforms that are open to US residents.

Continue Reading

Book review: A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We’re Going, by Michael Muthukrishna.

I found this book disappointing, in part because Muthukrishna set my expectations too high.

I had previously blogged about a paper that he co-authored with Henrich on cultural influences on IQ. If those ideas had been new to me, I’d be eagerly writing about them here, but I’ve already written enough about them in that blog post.

Another source of disappointment was that the book’s title is misleading. To the limited extent that the book focuses on a theory, it’s the theory that’s more clearly described in Henrich’s The Secret of our Success. A Theory of Everyone feels more like a collection of blog posts than like a well-organized book.

Continue Reading

Book review: Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, by Daniel Deudney.

Dark Skies is an unusually good and bad book.

Good in the sense that 95% of the book consists of uncontroversial, scholarly, mundane claims that accurately describe the views that Deudney is attacking. These parts of the book are careful to distinguish between value differences and claims about objective facts.

Bad in the senses that the good parts make the occasional unfair insult more gratuitous, and that Deudney provides little support for his predictions that his policies will produce better results than those of his adversaries. I count myself as one of his adversaries.

Dark Skies is an opposite of Where Is My Flying Car? in both style and substance.

Continue Reading

Book review: The Accidental Superpower: The Next Generation of American Preeminence and the Coming Global Disorder, by Peter Zeihan.

Are you looking for an entertaining set of geopolitical forecasts that will nudge you out of the frameworks of mainstream pundits? This might be just the right book for you.

Zeihan often sounds more like a real estate salesman than a scholar: The US has more miles of internal waterways than the rest of the world combined! US mountain ranges have passes that are easy enough to use that the mountains barely impede traffic. Transportation options like that guarantee sufficient political unity!

Continue Reading

[I mostly wrote this to clarify my thoughts. I’m unsure whether it will be valuable for readers.]

I expect that within a decade, AI will be able to do 90% of current human jobs. I don’t mean that 90% of humans will be obsolete. I mean that the average worker could delegate 90% of their tasks to an AGI.

I feel confused about what this implies for the kind of long-term planning and strategizing that would enable an AI to create large-scale harm if it is poorly aligned.

Is the ability to achieve long-term goals hard for an AI to develop?

Continue Reading

Disagreements related to what we value seem to explain maybe 10% of the disagreements over AI safety. This post will try to explain how I think about which values I care about perpetuating to the distant future.

Robin Hanson helped to clarify the choices in Which Of Your Origins Are You?:

The key hard question here is this: what aspects of the causal influences that lead to you do you now embrace, and which do you instead reject as “random” errors that you want to cut out? Consider two extremes.
At one extreme, one could endorse absolutely every random element that contributed to any prior choice or intuition.

At the other extreme, you might see yourself as primarily the result of natural selection, both of genes and of memes, and see your core non-random value as that of doing the best you can to continue to “win” at that game. … In this view, everything about you that won’t help your descendants be selected in the long run is a random error that you want to detect and reject.

In other words, the more idiosyncratic our criteria are for what we want to preserve into the distant future, the less we should expect to succeed.

Continue Reading