Book review: Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy.

This strange book has some entertainment value, and might even enlighten you a bit about the risks of AI. It presents many ideas, with occasional attempts to distinguish the important ones from the jokes.

I had hoped for an analysis that reflected a strong understanding of which software approaches were most likely to work. Yampolskiy knows something about computer science, but doesn’t strike me as someone with experience at writing useful code. His claim that “to increase their speed [AIs] will attempt to minimize the size of their source code” sounds like a misconception that wouldn’t occur to an experienced programmer. And his chapter “How to Prove You Invented Superintelligence So No One Else Can Steal It” seems like a cute game that someone might play with if he cared more about passing a theoretical computer science class than about, say, making money on the stock market, or making sure the superintelligence didn’t destroy the world.

I’m still puzzling over some of his novel suggestions for reducing AI risks. How would “convincing robots to worship humans as gods” differ from the proposed Friendly AI? Would such robots notice (and resolve in possibly undesirable ways) contradictions in their models of human nature?

Other suggestions are easy to reject, such as hoping AIs will need us for our psychokinetic abilities (abilities that Yampolskiy says are shown by peer-reviewed experiments associated with the Global Consciousness Project).

The style is also weird. Some chapters were previously published as separate papers, and weren’t adapted to fit together. It was annoying to occasionally see sentences that seemed identical to ones in a prior chapter.

The author even has strange ideas about what needs footnoting. E.g. when discussing the physical limits to intelligence, he cites (Einstein 1905).

Only read this if you’ve read other authors on this subject first.

Book review: Foragers, Farmers, and Fossil Fuels: How Human Values Evolve, by Ian Morris.

This book gives the impression that Morris had a halfway decent book in mind, but forgot to write down important parts of it.

He devotes large (possibly excessive) parts of the book to describing worldwide changes in what people value that correlate with the shifts to farming and then industry.

He convinces me that there’s some sort of connection between those values and how much energy per capita each society is able to use. He probably has a clue or two what that connection is, but the book failed to enlighten me about the connection.

He repeatedly claims that each age gets the thought that it needs. I find that about as reasonable as claiming that the widespread malnutrition associated with farming was what farming cultures needed. Indeed, his description of how farming caused gender inequality focuses on increased ability of men to inflict pain on women, and on increased incentives to do so. That sounds like a society made worse off, not getting what it needs.

He mentions (almost as an afterthought) some moderately interesting models of what caused specific changes in values as a result of the agricultural revolution.

He does an ok job of explaining the increased support for hierarchy in farming societies as an effect of the community size increasing past the Dunbar Number.

He attributes the reduced support for hierarchy in the industrial world to a need for interchangeable citizens. But he doesn’t document that increased need for interchangeability, and I’m skeptical that any such effect was strong. See The Institutional Revolution for a well thought out alternative model.

I had hoped to find some ideas about how to predict value changes that will result from the next big revolution. But I can’t figure out how to usefully apply his ideas to novel situations.

See also Robin Hanson’s review.

I use Beeminder occasionally. The site’s emails normally suffice to bug me into accomplishing whatever I’ve committed to doing. But I only use it for a few tasks for which my motivation is marginal. Most of the times that I consider using Beeminder, I either figure out how to motivate myself properly, or (more often) decide that my goal isn’t important.

The real value of Beeminder is that if I want to compel future-me to do something, I can’t give up by using the excuse that future-me is lazy or unreliable. Instead, I find myself wondering why I’m unwilling to risk $X to make myself likely to complete the task. That typically causes me to notice legitimate doubts about how highly I value the result.

Book review: The Sense of Structure: Writing from the Reader’s Perspective, by George D. Gopen.

The most important goal of this book is to teach writers how to analyze and influence which words in a sentence (or which sentences in a paragraph) readers will treat as most important.

Most of the advice is specific to writing. The kind of confusion the book helps prevent matters much less with spoken words, where tone (to show emphasis) and pauses carry that information.

A secondary goal of the book is to explain how to organize sentences to minimize the reader’s need to hold information in working memory. For example, putting lots of words before the main subject and verb as this sentence does (unless you really want to slow the reader down, such as when telling someone they’re fired) is something he teaches us to avoid.

I found the explanations fairly clear and moderately surprising. Learning from them depends very heavily on repeated practice at rearranging words within sentences and evaluating how the changes affect readers’ reactions.

That practice feels like it requires lots of willpower. With decisions in some other contexts (e.g. what to eat or where to hike) I can comfortably hold several options in my short-term memory. But when I translate vague thoughts into words, I feel strongly anchored to whatever version I come up with first. And I often find it hard to decide what parts of a sentence I want to emphasize. But I’ve grown sufficiently dissatisfied with my writing style that I plan to pay enough attention while writing that I’ll learn to improve on my initial version.

Please give me feedback in a few months about whether my writing has become easier to read.

My alternate day calorie restriction diet is going well. My body and/or habits are adapting. But the visible benefits are still small.

  • I normally do three restricted days per week (very rarely only two). I eat 800-1000 calories on those days (or 1200-1400 when I burn more than 1000 calories by hiking). On unrestricted days, I try to eat a little more than feels natural.
  • I have an improved ability to bring my weight to a particular target, but the range of weights that feel good is much narrower than I expected. My weight has stabilized to a range of 142-145 pounds, compared to 145-148 last year and an erratic 138-148 in the first few weeks of my new diet. If I reduce my weight below 142, I feel irritable in the afternoon or evening of a restricted day. At 145, I’m on the verge of that too-full feeling that was common in prior years.
  • My resting heart rate has declined from about 70 to about 65.
  • For many years I’ve been waking in the middle of the night feeling too warm, with little apparent pattern. A byproduct of my new diet is that I’ve noticed it’s connected to having eaten protein.
  • I’m using less willpower now than in prior years to eat the right amount. My understanding of the willpower effect is influenced by CFAR’s attitude, which is that occasionally using willpower to fight the goals of one of my mind’s sub-agents is reasonable, but the longer I continue it, the more power and thought that sub-agent will devote to accomplishing its goals. My sub-agent in charge of getting me to eat lots to prepare for a famine can now rely on me, if I’m resisting it today, to encourage it tomorrow; whereas in prior years I was continually pressuring it to do less than it wanted. That makes it more cooperative.

The only drawbacks are the increased attention I need to pay to what I eat on restricted days, and the difficulties of eating out on restricted days (due to my need to control portion sizes and to time my main meals near the middle of the day). I find it fairly easy to schedule my restricted days so that I’m almost always eating at home, but I expect many people to find that hard.

I recently went to Aletheia, a workshop that helps people experience the creation of good interpersonal connections.

An important technique is to get people to focus on what is going on in their minds (especially emotions), and devote less attention to external objects/events. Beyond that they provided little explanation of how it works. But I see enough similarities to the advice on Charismatips.com that at an intellectual level the ideas behind it don’t seem very new.

My initial reaction was that the workshop had few ideas that seemed new to me, and wasn’t likely to influence me much. But by the middle of the workshop I felt myself being somewhat drawn toward the others in the group. I got the impression that many participants experienced more change than I did. I suspect the leaders were exercising more skill than I was able to observe directly.

I think I’ve noticed some subtle changes in how I interact with people that might be due to Aletheia, but whatever benefits I got are hard to evaluate.

Mission: Heirloom is a cafe in Berkeley which is serious about paleo food. Their meat and eggs meet my ethical standards (no factory farmed animal products).

Their burgers are unusually tasty, and have a better texture than any other grass-fed beef I’ve tried.

They have a nice outdoor area in back – I recommend going there when it is warm outside.

The main drawback is limited variety in their entrees.

I wish there were more cafes like this.

Book review: The Measure of Civilization: How Social Development Decides the Fate of Nations, by Ian Morris.

This ambitious attempt to quantify the sophistication of societies is a partial success.

His goal is to compare the development of the two leading centers of human progress over the past 16000 years (western Eurasia and eastern Asia).

I read this book before looking at summaries of his previous book. The Measure of Civilization was designed to provide support for the claims in the prior book, but was objective enough that I didn’t infer from it what the main message of the prior book was.

When I focus on the numbers in this book and ignore other ideas I’ve read, the most plausible hypothesis I see is that the east followed a more risk-averse strategy than the west. The west suffered at least one crash (200-700 CE) that was a good deal worse than anything the east is known to have experienced.

He tries to measure four different quantities and aggregate them into an index. But the simplest way to scale them leaves two (information use and military power) insignificant until about 1900, then rising at a rate which seems likely to make them the only factors that matter to the index fairly soon. He briefly looks at some better ways to aggregate them, but they still seem inadequate.

In sum, the basic idea behind measuring those four quantities seems sound. If he wasn’t any more arbitrary about it than I suspect, then the book has been somewhat helpful at clarifying the trends over time of the leading human cultures, and maybe added a tiny bit of insight into the differences between east and west.

Book review: Masters of the Word: How Media Shaped History, by William J. Bernstein.

This is a history of the world which sometimes focuses on how technology changed communication, and how those changes affected society.

Instead of carefully documenting a few good ideas, he wanders over a wide variety of topics (including too many descriptions of battles and of individual people).

His claims seem mostly correct, but he often failed to convince me that he had good reasons for believing them. E.g. when trying to explain why the Soviet economy was inefficient (haven’t enough books explained that already?), he says the “absence of a meaningful price signal proved especially damaging in the labor market”, but supports that by mentioning peculiarities which aren’t clear signs of damage, then by describing some blatant waste that wasn’t clearly connected to labor market problems (and, without numbers, doesn’t tell us the magnitude of the problems).

I would have preferred that he devote more effort to evaluating the importance of changes in communication to the downfall of the Soviet Union. He documents increased ability of Soviet citizens to get news from sources that their government didn’t control at roughly the time Soviet power weakened. But it’s not obvious how that drove political change. It seems to me that there was an important decrease in the ruthlessness of Soviet rulers that isn’t well explained by communication changes.

I liked his description of how affordable printing presses depended on a number of technological advances, suggesting that printing could not easily have arisen at other times or places.

The claim I found most interesting was that the switch from reading aloud to reading silently, and the related ability to write alone (as opposed to needing a speaker and a scribe), made it easier to spread seditious and sexual writings by increasing privacy.

Bernstein is optimistic that improved communication technology will have good political effects in the future. I partly agree, but I see more risks than he does (e.g. his fondness for the democratic features of the Arab Spring isn’t balanced by much concern over the risks of revolutionary violence).