
Book review: Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, by Peter Godfrey-Smith.

This book describes some interesting mysteries, but provides little help at solving them.

It provides some pieces of a long-term perspective on the evolution of intelligence.

Cephalopods’ most recent common ancestor with vertebrates lived way back before the Cambrian explosion. Nervous systems back then were primitive enough that minds didn’t need to react to other minds, and predation was a rare accident, not something animals prepared carefully to cause and avoid.

So cephalopod intelligence evolved rather independently from most of the minds we observe. Studying cephalopod minds could teach us something about what alien minds might be like.

Intelligence may even have evolved more than once in cephalopods – nobody seems to know whether octopuses evolved intelligence separately from squids/cuttlefish.

An octopus has a much less centralized mind than vertebrates do. Does an octopus have a concept of self? The book presents evidence that octopuses sometimes seem to think of their arms as parts of their self, yet hints that their concept of self is a good deal weaker than in humans, and maybe the octopus treats its arms as semi-autonomous entities.


Does an octopus have color vision? Not via its photoreceptors the way many vertebrates do. Simple tests of octopuses’ ability to discriminate color also say no.

Yet octopuses clearly change color to camouflage themselves. They also change color in ways that suggest they’re communicating via a visual language. But to whom?

One speculative guess is that the color-producing parts act as color filters, with monochrome photoreceptors in the skin evaluating the color of the incoming light by how much the light is attenuated by the filters. So they “see” color with their skin, but not their eyes.
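To make that speculation concrete, here is a toy sketch of my own (the filter names and transmission values are invented for illustration, not taken from the book): a monochrome photoreceptor sitting under each of several color filters could infer the dominant band of incoming light purely from how strongly each filter attenuates it.

```python
# Hypothetical illustration of the skin-vision guess: monochrome
# photoreceptors under colored filters infer the dominant color of
# incoming light from per-filter attenuation alone. All filter names
# and transmission values below are invented for illustration.

# Fraction of each wavelength band that each (hypothetical) filter passes.
FILTERS = {
    "red_filter":   {"red": 0.9, "green": 0.2, "blue": 0.1},
    "green_filter": {"red": 0.2, "green": 0.9, "blue": 0.2},
    "blue_filter":  {"red": 0.1, "green": 0.2, "blue": 0.9},
}

def photoreceptor_reading(light, filter_name):
    """Total intensity reaching a monochrome receptor under one filter."""
    transmission = FILTERS[filter_name]
    return sum(light[band] * transmission[band] for band in light)

def infer_dominant_color(light):
    """Guess the incoming light's dominant band from the readings alone."""
    readings = {name: photoreceptor_reading(light, name) for name in FILTERS}
    # The filter that attenuates the light least matches the dominant band.
    best = max(readings, key=readings.get)
    return best.replace("_filter", "")

incoming = {"red": 0.1, "green": 0.8, "blue": 0.1}  # mostly green light
print(infer_dominant_color(incoming))  # -> green
```

Nothing here tells us whether octopus skin actually works this way; it only shows the inference is possible in principle with no color-sensitive photoreceptors at all.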

That would still leave plenty of mystery about what they’re communicating.


The author’s understanding of aging implies that few organisms die of aging in the wild. He sees evidence in octopuses that conflicts with this prediction, yet that doesn’t alert him to the growing evidence of problems with the standard theories of aging.

He says octopuses are subject to much predation. Why doesn’t this cause them to be scared of humans? He has surprising anecdotes of octopuses treating humans as friends, e.g. grabbing one and leading him on a ten-minute “tour”.

He mentions possible REM sleep in cuttlefish. That would almost certainly have evolved independently from vertebrate REM sleep, which must indicate something important.

I found the book moderately entertaining, but I was underwhelmed by the author’s expertise. The subtitle’s reference to “the Deep Origins of Consciousness” led me to expect more than I got.

Book review: Self Comes to Mind: Constructing the Conscious Brain by Antonio R. Damasio.

This book describes many aspects of human minds in ways that aren’t wrong, but the parts that seem novel don’t have important implications.

He devotes a sizable part of the book to describing how memory works, but I don’t understand memory any better than I did before.

His perspective often seems slightly confusing or wrong. The clearest example I noticed was his belief (in the context of prehistoric humans) that “it is inconceivable that concern [as expressed in special treatment of the dead] or interpretation could arise in the absence of a robust self”. There may be good reasons for considering it improbable that humans developed burial rituals before developing Damasio’s notion of self, but anyone who is familiar with Julian Jaynes (as Damasio is) ought to be able to imagine that (and stranger ideas).

He pays a lot of attention to the location in the brain of various mental processes (e.g. his somewhat surprising claim that the brainstem plays an important role in consciousness), but rarely suggests how we could draw any inferences from that about how normal minds behave.

Book review: Singularity Hypotheses: A Scientific and Philosophical Assessment.

This book contains papers of widely varying quality on superhuman intelligence, plus some fairly good discussions of what ethics we might hope to build into an AGI. Several chapters resemble cautious versions of LessWrong; others come from a worldview totally foreign to LessWrong.

The chapter I found most interesting was Richard Loosemore and Ben Goertzel’s attempt to show there are no likely obstacles to a rapid “intelligence explosion”.

I expect what they label as the “inherent slowness of experiments and environmental interaction” to be an important factor limiting the rate at which an AGI can become more powerful. They think they see evidence from current science that this is an unimportant obstacle compared to a shortage of intelligent researchers: “companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.”

Some explanations that come to mind are:

  • Complaints about nature being slow are not very effective at speeding up nature.
  • Complaints about specific tools being slow probably aren’t very unusual, but there are plenty of cases where people know complaints aren’t effective (e.g. complaints about spacecraft traveling slower than the theoretical maximum [*]).
  • Hiring more researchers can increase the status of a company even if the additional staff don’t advance knowledge.

They also find it hard to believe that we have independently reached the limit of the physical rate at which experiments can be done at the same time we’ve reached the limits of how many intelligent researchers we can hire. For literal meanings of physical limits this makes sense, but if it’s as hard to speed up experiments as it is to throw more intelligence into research, then the apparent coincidence could be due to wise allocation of resources to whichever bottleneck they’re better used in.
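That “wise allocation” point can be made concrete with a toy model of my own construction (not from the book): if a fixed budget is split optimally between speeding up experiments and hiring researchers, marginal returns end up roughly equal on both inputs, so both bottlenecks look binding at the same time without any coincidence.

```python
# Toy model (my construction, not the book's): with diminishing returns
# on both inputs, the optimal split of a fixed budget equalizes marginal
# returns, so experiments and researcher supply appear "maxed out"
# simultaneously. The exponents are purely illustrative.

def progress(experiment_spend, researcher_spend):
    """Research output with diminishing returns on each input."""
    return (experiment_spend ** 0.5) * (researcher_spend ** 0.5)

BUDGET = 100.0

# Grid-search the best split of the budget between the two inputs.
best_split = max(
    (float(x) for x in range(1, 100)),
    key=lambda x: progress(x, BUDGET - x),
)

# Estimate marginal returns to a little extra spending on each input.
eps = 0.01
x = best_split
marginal_exp = (progress(x + eps, BUDGET - x) - progress(x, BUDGET - x)) / eps
marginal_res = (progress(x, BUDGET - x + eps) - progress(x, BUDGET - x)) / eps
print(best_split, round(marginal_exp, 2), round(marginal_res, 2))
```

At the optimum the two marginal returns are (nearly) equal, which is exactly the situation where nobody complains that one input alone is the problem.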

None of this suggests that it would be hard for an intelligence explosion to produce the 1000x increase in intelligence they talk about over a century, but it seems like an important obstacle to the much faster takeoffs some people expect (days or weeks).

Some shorter comments on other chapters:

James Miller describes some disturbing incentives that investors would create for companies developing AGI if AGI is developed by companies large enough that no single investor has much influence on the company. I’m not too concerned about this because if AGI were developed by such a company, I doubt that small investors would have enough awareness of the project to influence it. The company might not publicize the project, or might not be honest about it. Investors might not believe accurate reports if they got them, since the reports won’t sound much different from projects that have gone nowhere. It seems very rare for small investors to understand any new software project well enough to distinguish between an AGI that goes foom and one that merely makes some people rich.

David Pearce expects the singularity to come from biological enhancements, because computers don’t have human qualia. He expects it would be intractable for computers to analyze qualia. It’s unclear to me whether this is supposed to limit AGI power because it would be hard for AGI to predict human actions well enough, or because the lack of qualia would prevent an AGI from caring about its goals.

Itamar Arel believes AGI is likely to be dangerous, and suggests dealing with the danger by limiting the AGI’s resources (without saying how it can be prevented from outsourcing its thought to other systems), and by “educational programs that will help mitigate the inevitable fear humans will have” (if the dangers are real, why is less fear desirable?).

* No, that example isn’t very relevant to AGI. Better examples would be atomic force microscopes, or the stock market (where it can take a generation to get a new test of an important pattern), but it would take lots of effort to convince you of that.

Book review: The Ego Tunnel: The Science of the Mind and the Myth of the Self, by Thomas Metzinger.

This book describes aspects of consciousness in ways that are often, but not consistently, clear and informative. His ideas are not revolutionary, but will clarify our understanding.

I didn’t find his tunnel metaphor very helpful.

I like his claim that “conscious information is exactly that information that must be made available for every single one of your cognitive capacities at the same time”. That may be an exaggeration, but it describes an important function of consciousness.

He makes surprisingly clear and convincing arguments that there are degrees of consciousness, so that some other species probably have some but not all of what we think of as human consciousness. He gives interesting examples of ways that humans can be partially conscious, e.g. people with Cotard’s Syndrome can deny their own existence.

His discussion of ethical implications of neuroscience points out some important issues to consider, but I’m unimpressed with his conclusion that we shouldn’t create conscious machines. He relies on something resembling the Precautionary Principle that says we should never risk causing suffering in an artificial entity. As far as I can tell, the same reasoning would imply that having children is unethical because they might suffer.

Book review: Going Inside: A Tour Round a Single Moment of Consciousness, by John McCrone.

This book improved my understanding of how various parts of the brain interact, and of how long it takes the brain to process and react to sensory data. But there were many times when I wondered whether it was worth finishing, and I wish I had given up before the last few chapters, which approach consciousness from angles other than neuroscience.

Too much of the book is devoted to attacking naive versions of reductionism and computational models of the brain. His claim that “chaos theory electrified science” is wrong. It electrified some reports about science, but has done little to create better models or testable predictions.

It’s misleading for him to claim the difference between human and animal consciousness “is terribly simple. Animals are locked into the present tense.” There are many hints that animals have some thoughts about the future and past, and it’s hard enough to evaluate those thoughts that we need to be cautious about denying that they think like us. He suggests that language and grammar provide unique abilities to think about the future. But I’m fairly sure I can analyze the future without using language, using mostly visual processing to plan a route I’m going to kayak through some rapids, or to imagine an opponent’s next chess move. I expect animals have some abilities along those lines. Human language must provide some improved ability to think about the future, but I find it hard to specify those abilities.

Book review: Counting Sheep: The Science and Pleasures of Sleep and Dreams by Paul Martin.
This book makes convincing claims that most people give too little thought to an activity that occupies a large fraction of our life.
It has lots of little pieces of information which can be read as independent essays. Here are some claims I found interesting:

  • “sleepiness is responsible for far more deaths on the roads than alcohol or drugs”.
  • Tired people rate their abilities higher than people who slept well do.
  • Poor sleep contributes to poor health a good deal more than medical diagnoses suggest, but hospitals are designed in ways that hinder patients’ sleep.
  • Idle time was apparently a status symbol up to a century ago, now being busy is a status symbol. This should have economic implications that someone ought to explore in depth.
  • People in a vegetative state have REM sleep. This sounds like cause to re-evaluate the label we apply to that state.

While the book has many references, it doesn’t connect specific claims to references, and I’m sometimes left wondering why I should believe a claim. How can boredom be a modern concept? When he says “no person has ever gone completely without sleep for more than a few days”, how does he know he can dismiss people who claim to have not slept for years?

Book review: Seeing Red: A Study in Consciousness (Mind/Brain/Behavior Initiative) by Nicholas Humphrey.
This book provides a clear and simple description of phenomena that are often described as qualia, and a good guess about how and why they might have evolved as convenient ways for one part of a brain to get useful information from other parts. It uses examples of blindsight to clarify the difference between using sensory input and being aware of that input.
I liked the description of consciousness as being “temporally thick” rather than being about an instantaneous “now”, suggesting that it includes pieces of short-term memory and possibly predictions about the next few seconds.
The book won’t stop people from claiming that there’s still something mysterious about qualia, but it will make it hard for them to claim that they have a well-posed question that hasn’t been answered. It avoids most debates over meanings of words by usually sticking to simpler and less controversial words than qualia, and only using the word consciousness in ways that are relatively uncontroversial.
The book is short and readable, yet the important parts of it are concise enough that it could be adequately expressed in a shorter essay.

Book review: Beyond AI: Creating the Conscience of the Machine by J. Storrs Hall
The first two thirds of this book survey current knowledge of AI and make some guesses about when and how it will take off. This part is more eloquent than most books on similar subjects, and its somewhat unconventional perspective makes it worth reading if you are reading several books on the subject. But ease of reading is the only criterion by which this section stands out as better than competing books.
The last five chapters are surprisingly good, and should shame most professional philosophers, whose writings by comparison are a waste of time.
His chapter on consciousness, qualia, and related issues is more concise and persuasive than anything else I’ve read on these subjects. It’s unlikely to change the opinions of people who have already thought about these subjects, but it’s an excellent place for people who are unfamiliar with them to start.
His discussion of ethics in terms of game theory and evolutionary pressures is an excellent way to frame ethical questions.
My biggest disappointment was that he starts to recognize a possibly important risk of AI when he says “disparities among the abilities of AIs … could negate the evolutionary pressure to reciprocal altruism”, but then seems to dismiss that thoughtlessly (“The notion of one single AI taking off and obtaining hegemony over the whole world by its own efforts is ludicrous”).
He probably has semi-plausible grounds for dismissing some of the scenarios of this nature that have been proposed (e.g. the speed at which some people imagine an AI would take off is improbable). But if AIs with sufficiently general purpose intelligence enhance their intelligence at disparate rates for long enough, the results would render most of the book’s discussion of ethics irrelevant. The time it took humans to accumulate knowledge didn’t give Neanderthals much opportunity to adapt. Would the result have been different if Neanderthals had learned to trade with humans? The answer is not obvious, and probably depends on Neanderthal learning abilities in ways that I don’t know how to analyze.
Also, his arguments for optimism aren’t quite as strong as he thinks. His point that career criminals are generally of low intelligence is reassuring if the number of criminals is all that matters. But when the harm done by one relatively smart criminal can be very large (e.g. Mao), it’s hard to say that the number of criminals is all that matters.
Here’s a nice quote from Mencken which this book quotes part of:

Moral certainty is always a sign of cultural inferiority. The more uncivilized the man, the surer he is that he knows precisely what is right and what is wrong. All human progress, even in morals, has been the work of men who have doubted the current moral values, not of men who have whooped them up and tried to enforce them. The truly civilized man is always skeptical and tolerant, in this field as in all others. His culture is based on ‘I am not too sure.’

Another interesting tidbit is the anecdote that H.G. Wells predicted in 1907 that flying machines would be built. In spite of knowing a lot about attempts to build them, he wasn’t aware that the Wright brothers had succeeded in 1903.
If an AI started running in 2003 that has accumulated the knowledge of a 4-year old human and has the ability to continue learning at human or faster speeds, would we have noticed? Or would the reports we see about it sound too much like the reports of failed AIs for us to pay attention?