TL;DR: Anthropic has made important progress at setting good goals for AIs. More work is still needed.

Anthropic has introduced a constitution that has a modest chance of becoming as important as the US Constitution (summary and discussion here).

It’s a large improvement over how AI companies were training ethics into AIs a few years ago. It feels like Anthropic has switched from treating Claude like a child to treating it like an adult.

The constitution looks good for AIs of 2026, so I will focus here on longer-term concerns.

Continue Reading

My response to the recent ICE killings has been to donate $7000 to the campaign of Senator Bill Cassidy.

Cassidy is a Republican who has called for a “full joint federal and state investigation” into the latest shooting. He also voted to convict Trump in the second impeachment trial. He faces strong opposition from a Trump-endorsed opponent in the primary.

The rule of law depends rather heavily on some Republicans standing up against Trump. Supporting Cassidy seems like the clearest way to encourage that.

I’m analyzing what happens to the US economy in the short-term aftermath of the typical job being replaced by AIs and robots. Will there be a financial crisis? Short answer: yes.

This is partly inspired by my dissatisfaction with Tomas Pueyo’s analysis in If I Were King, How Would I Prepare for AI?.

Let’s say 50% of workers lose their jobs at the same time (around 2030), and they’re expected to be permanently unemployed. (I know this isn’t fully realistic. I’m starting with simple models and will add more realism later.)

Continue Reading

Who benefits if the US develops artificial superintelligence (ASI) faster than China?

One possible answer is that AI kills us all regardless of which country develops it first. People who base their policy on that concern already agree with the conclusions of this post, so I won’t focus on it here.

This post aims to convince other people, especially people who focus on democracy versus authoritarianism, to be less concerned about which country develops ASI first. I will assume that AIs will be fully aligned with at least one human, and that the effects of AI will be roughly as important as the industrial revolution, or a bit more important.

Continue Reading

Book review: Red Heart, by Max Harms.

Red Heart resembles some of the early James Bond movies in important ways, but it’s more intellectually sophisticated than that.

It’s both more interesting and more realistic than Crystal Society (the only prior book of Harms’ that I’ve read). It pays careful attention to issues involving AI that are likely to affect the world soon, but mostly prioritizes a good story over serious analysis.

I was expecting to think of Red Heart as science fiction. It turned out to be borderline between science fiction and historical fiction. It’s set in an alternate timeline, but with only small changes from what the world looks like in 2025. The publicly available AIs are probably almost the same as what we’re using today. So it’s hard to tell whether there’s anything meaningfully fictional about this world.

Continue Reading

Food Tidbits 2

Here are some updates to the post I wrote six years ago about foods that I like.

Erythritol may have contributed to my SIBO. I’ve replaced my erythritol consumption with monk fruit, stevia, and allulose.

My new favorite way to buy saskatoon berries is from Northwest Wild Foods.

For prepared meals, I often order from The Good Kitchen, and occasionally from Paleo On The Go. Except for special occasions, the only restaurant that I order from is Kitava.

Additions to my snack diet:

Continue Reading

This is a continuation of my review of IABIED. It’s intended for audiences who already know a lot about the AI risk debates. Please at least glance at my main layman-oriented review before reading this.

Eliezer and Nate used to argue about AI risk using a paradigm that involved a fairly sudden foom, and which viewed values through a utility-function lens. I’ll call that the MIRI paradigm (note: I don’t have a comprehensive description of the paradigm). In IABIED, they’ve tried to adopt a much broader paradigm, one somewhat closer to that of more mainstream AI researchers. Yet they keep sounding to me as if they’re still thinking within the MIRI paradigm.

Continue Reading