All posts tagged consciousness

Book review: Surfing Uncertainty: Prediction, Action, and the Embodied Mind, by Andy Clark.

Surfing Uncertainty describes minds as hierarchies of prediction engines. Most behavior involves interactions between two streams: a bottom-up stream in which low-level sensory data adjusts higher-level predictive models of the world, and a top-down stream in which high-level models guide low-level sensory processes toward the most likely interpretations of ambiguous sensory evidence.

Clark calls this a predictive processing (PP) model; others refer to it as predictive coding.
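The two streams can be illustrated with a toy example (my own sketch, not anything from the book): a single high-level belief predicts the incoming sensory signal, the prediction error flows upward to adjust the belief, and the updated belief flows back down as the next prediction.

```python
# Minimal one-level predictive processing loop (illustrative toy, not Clark's model).
# The "belief" is a scalar model of the world; the "sensory" value is the input.

def predictive_step(belief, sensory, lr=0.1):
    prediction = belief             # top-down: the model predicts the input
    error = sensory - prediction    # bottom-up: the residual the model failed to predict
    belief = belief + lr * error    # adjust the model to reduce future error
    return belief, error

belief = 0.0
for sensory in [1.0] * 50:          # a constant, unambiguous stimulus
    belief, error = predictive_step(belief, sensory)

# After repeated exposure, the belief converges toward the stimulus
# and the prediction error shrinks toward zero.
```

In a full PP hierarchy, each level plays "sensory input" for the level above it, so only the errors, not the raw data, propagate upward.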

The book is full of good ideas, presented in a style that sapped my curiosity.

Jeff Hawkins has a more eloquent book about PP (On Intelligence), which focuses on how PP might be used to create artificial intelligence. The underwhelming progress of the company Hawkins started to capitalize on these ideas suggests it wasn’t the breakthrough that AI researchers were groping for. In contrast, Clark focuses on how PP helps us understand existing minds.

The PP model clearly has some value. The book was a bit more thorough than I wanted at demonstrating that. Since I didn’t find that particularly new or surprising, I’ll focus most of this review on a few loose threads that the book left dangling. So don’t treat this as a summary of the book (see Slate Star Codex if you want that, or if my review is too cryptic to understand), but rather as an exploration of the questions that the book provoked me to think about.


Book review: Into the Gray Zone: A Neuroscientist Explores the Border Between Life and Death, by Adrian Owen.

Too many books and talks have gratuitous displays of fMRIs and neuroscience. At last, here’s a book where fMRIs are used with fairly good reason, and neuroscience is explained only when that’s appropriate.

Owen provides evidence of near-normal brain activity in a modest fraction of people who had been classified as being in a persistent vegetative state. They are capable of answering yes or no to most questions, and show signs of understanding the plots of movies.

Owen believes this evidence is enough to say they’re conscious. I suspect he’s mostly right about that, and that they do experience much of the brain function that is typically associated with consciousness. Owen doesn’t have any special insights into what we mean by the word consciousness. He mostly just investigates how to distinguish between near-normal mental activity and seriously impaired mental activity.

So what were neurologists previously using to classify people as vegetative? As far as I can tell, they were diagnosing based on a lack of motor responses, even though they were aware of an alternate diagnosis, total locked-in syndrome, with identical symptoms. The terms locked-in syndrome and persistent vegetative state were both coined (in part) by the same person (though I’m unclear who coined the term total locked-in syndrome).

My guess is that the diagnoses have been influenced by a need for certainty (whose need? Family members’? Doctors’? It’s not obvious).

The book has a bunch of mostly unremarkable comments about ethics. But I was impressed by Owen’s observation that people misjudge whether they’d want to die if they end up in a locked-in state. So how likely is it they’ll mispredict what they’d want in other similar conditions? I should have deduced this from the book Stumbling on Happiness, but I failed to think about it.

I’m a bit disturbed by Owen’s claim that late-stage Alzheimer’s patients have no sense of self. He doesn’t cite evidence for this conclusion, and his research should hint to him that it would be quite hard to get good evidence on this subject.

Most books written by scientists who made interesting discoveries attribute the author’s success to their competence. This book provides clear evidence for the accidental nature of at least some science. Owen could easily have gotten no signs of consciousness from the first few patients he scanned. Given the effort needed for the scans, I can imagine that this would have resulted in a mistaken expert consensus that vegetative states were being diagnosed correctly.

Or, why I don’t fear the p-zombie apocalypse.

This post analyzes concerns about how evolution, in the absence of a powerful singleton, might, in the distant future, produce what Nick Bostrom calls a “Disneyland without children”. I.e. a future with many agents, whose existence we don’t value because they are missing some important human-like quality.

The most serious description of this concern is in Bostrom’s The Future of Human Evolution. Bostrom is cautious enough that it’s hard to disagree with anything he says.

Age of Em has prompted a batch of similar concerns. Scott Alexander at SlateStarCodex has one of the better discussions (see section IV of his review of Age of Em).

People sometimes sound like they want to use this worry as an excuse to oppose the age of em scenario, but it applies to just about any scenario with human-in-a-broad-sense actors. If uploading never happens, biological evolution could produce slower paths to the same problem(s) [1]. Even in the case of a singleton AI, the singleton will need to solve the tension between evolution and our desire to preserve our values, although in that scenario it’s more important to focus on how the singleton is designed.

These concerns often assume something like the age of em lasts forever. The scenario which Age of Em analyzes seems unstable, in that it’s likely to be altered by stranger-than-human intelligence. But concerns about evolution only depend on control being sufficiently decentralized that there’s doubt about whether a central government can strongly enforce rules. That situation seems sufficiently stable to be worth analyzing.

I’ll refer to this thing we care about as X (qualia? consciousness? fun?), but I expect people will disagree on what matters for quite some time. Some people will worry that X is lost in uploading, others will worry that some later optimization process will remove X from some future generation of ems.

I’ll first analyze scenarios in which X is a single feature (in the sense that it would be lost in a single step). Later, I’ll try to analyze the other extreme, where X is something that could be lost in millions of tiny steps. Neither extreme seems likely, but I expect that analyzing the extremes will illustrate the important principles.


Book review: Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, by Peter Godfrey-Smith.

This book describes some interesting mysteries, but provides little help at solving them.

It provides some pieces of a long-term perspective on the evolution of intelligence.

Cephalopods’ most recent common ancestor with vertebrates lived way back before the Cambrian explosion. Nervous systems back then were primitive enough that minds didn’t need to react to other minds, and predation was a rare accident, not something animals prepared carefully to cause and avoid.

So cephalopod intelligence evolved rather independently from most of the minds we observe. We could learn something about what alien minds might be like by understanding cephalopod minds.

Intelligence may even have evolved more than once in cephalopods – nobody seems to know whether octopuses evolved intelligence separately from squids/cuttlefish.

An octopus has a much less centralized mind than vertebrates do. Does an octopus have a concept of self? The book presents evidence that octopuses sometimes seem to think of their arms as parts of their self, yet hints that their concept of self is a good deal weaker than in humans, and maybe the octopus treats its arms as semi-autonomous entities.

2.

Does an octopus have color vision? Not via its photoreceptors the way many vertebrates do. Simple tests of octopuses’ ability to discriminate color also say no.

Yet octopuses clearly change color to camouflage themselves. They also change color in ways that suggest they’re communicating via a visual language. But to whom?

One speculative guess is that the color-producing parts act as color filters, with monochrome photoreceptors in the skin evaluating the color of the incoming light by how much the light is attenuated by the filters. So they “see” color with their skin, but not their eyes.
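That filter idea can be made concrete with a toy computation (my own hypothetical illustration, not from the book): a monochrome photoreceptor under a colored patch of skin sees incoming light attenuated by how much the light’s spectrum overlaps the patch’s passband, so comparing readings under differently colored patches recovers coarse color information without color-sensitive receptors.

```python
# Hypothetical skin-as-color-sensor sketch. The transmission values below are
# made up for illustration; each filter mostly passes its own wavelength band.

FILTERS = {
    "red_filter":   {"red": 0.9, "green": 0.2, "blue": 0.1},
    "green_filter": {"red": 0.2, "green": 0.9, "blue": 0.2},
    "blue_filter":  {"red": 0.1, "green": 0.2, "blue": 0.9},
}

def readings(light):
    """Monochrome intensity seen under each filter, for a light spectrum
    given as per-band intensities."""
    return {name: sum(light[band] * t for band, t in passband.items())
            for name, passband in FILTERS.items()}

def guess_color(light):
    """Infer the dominant band: the filter passing the most light wins."""
    r = readings(light)
    return max(r, key=r.get).removesuffix("_filter")

print(guess_color({"red": 1.0, "green": 0.1, "blue": 0.1}))  # red
```

The point is only that monochrome sensors plus known filters suffice in principle; whether octopus skin actually works this way remains, as the author says, speculative.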

That would still leave plenty of mystery about what they’re communicating.

3.

The author’s understanding of aging implies that few organisms die of aging in the wild. He sees evidence in octopuses that conflicts with this prediction, yet that doesn’t alert him to the growing evidence of problems with the standard theories of aging.

He says octopuses are subject to much predation. Why doesn’t this cause them to be scared of humans? He has surprising anecdotes of octopuses treating humans as friends, e.g. grabbing one and leading him on a ten-minute “tour”.

He mentions possible REM sleep in cuttlefish. That would almost certainly have evolved independently from vertebrate REM sleep, which must indicate something important.

I found the book moderately entertaining, but I was underwhelmed by the author’s expertise. The subtitle’s reference to “the Deep Origins of Consciousness” led me to expect more than I got.

Book review: Self Comes to Mind: Constructing the Conscious Brain by Antonio R. Damasio.

This book describes many aspects of human minds in ways that aren’t wrong, but the parts that seem novel don’t have important implications.

He devotes a sizable part of the book to describing how memory works, but I don’t understand memory any better than I did before.

His perspective often seems slightly confusing or wrong. The clearest example I noticed was his belief (in the context of pre-historic humans) that “it is inconceivable that concern [as expressed in special treatment of the dead] or interpretation could arise in the absence of a robust self”. There may be good reasons for considering it improbable that humans developed burial rituals before developing Damasio’s notion of self, but anyone who is familiar with Julian Jaynes (as Damasio is) ought to be able to imagine that (and stranger ideas).

He pays a lot of attention to the location in the brain of various mental processes (e.g. his somewhat surprising claim that the brainstem plays an important role in consciousness), but rarely suggests how we could draw any inferences from that about how normal minds behave.

Book review: Singularity Hypotheses: A Scientific and Philosophical Assessment.

This book contains papers of widely varying quality on superhuman intelligence, plus some fairly good discussions of what ethics we might hope to build into an AGI. Several chapters resemble cautious versions of LessWrong, others come from a worldview totally foreign to LessWrong.

The chapter I found most interesting was Richard Loosemore and Ben Goertzel’s attempt to show there are no likely obstacles to a rapid “intelligence explosion”.

I expect what they label as the “inherent slowness of experiments and environmental interaction” to be an important factor limiting the rate at which an AGI can become more powerful. They think they see evidence from current science that this is an unimportant obstacle compared to a shortage of intelligent researchers: “companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.”

Some explanations that come to mind are:

  • Complaints about nature being slow are not very effective at speeding up nature.
  • Complaints about specific tools being slow probably aren’t very unusual, but there are plenty of cases where people know complaints aren’t effective (e.g. complaints about spacecraft traveling slower than the theoretical maximum [*]).
  • Hiring more researchers can increase the status of a company even if the additional staff don’t advance knowledge.

They also find it hard to believe that we have independently reached the limit of the physical rate at which experiments can be done at the same time we’ve reached the limits of how many intelligent researchers we can hire. For literal meanings of physical limits this makes sense, but if it’s as hard to speed up experiments as it is to throw more intelligence into research, then the apparent coincidence could be due to wise allocation of resources to whichever bottleneck they’re better used in.

None of this suggests that it would be hard for an intelligence explosion to produce the 1000x increase in intelligence they talk about over a century, but it seems like an important obstacle to the faster takeoffs that some people consider plausible (days or weeks).

Some shorter comments on other chapters:

James Miller describes some disturbing incentives that investors would create for companies developing AGI if AGI is developed by companies large enough that no single investor has much influence on the company. I’m not too concerned about this because if AGI were developed by such a company, I doubt that small investors would have enough awareness of the project to influence it. The company might not publicize the project, or might not be honest about it. Investors might not believe accurate reports if they got them, since the reports won’t sound much different from projects that have gone nowhere. It seems very rare for small investors to understand any new software project well enough to distinguish between an AGI that goes foom and one that merely makes some people rich.

David Pearce expects the singularity to come from biological enhancements, because computers don’t have human qualia. He expects it would be intractable for computers to analyze qualia. It’s unclear to me whether this is supposed to limit AGI power because it would be hard for AGI to predict human actions well enough, or because the lack of qualia would prevent an AGI from caring about its goals.

Itamar Arel believes AGI is likely to be dangerous, and suggests dealing with the danger by limiting the AGI’s resources (without saying how it can be prevented from outsourcing its thought to other systems), and by “educational programs that will help mitigate the inevitable fear humans will have” (if the dangers are real, why is less fear desirable?).

* No, that example isn’t very relevant to AGI. Better examples would be atomic force microscopes, or the stock market (where it can take a generation to get a new test of an important pattern), but it would take lots of effort to convince you of that.

Book review: The Ego Tunnel: The Science of the Mind and the Myth of the Self, by Thomas Metzinger.

This book describes aspects of consciousness in ways that are often, but not consistently, clear and informative. His ideas are not revolutionary, but will clarify our understanding.

I didn’t find his tunnel metaphor very helpful.

I like his claim that “conscious information is exactly that information that must be made available for every single one of your cognitive capacities at the same time”. That may be an exaggeration, but it describes an important function of consciousness.

He makes surprisingly clear and convincing arguments that there are degrees of consciousness, so that some other species probably have some but not all of what we think of as human consciousness. He gives interesting examples of ways that humans can be partially conscious, e.g. people with Cotard’s Syndrome can deny their own existence.

His discussion of ethical implications of neuroscience points out some important issues to consider, but I’m unimpressed with his conclusion that we shouldn’t create conscious machines. He relies on something resembling the Precautionary Principle that says we should never risk causing suffering in an artificial entity. As far as I can tell, the same reasoning would imply that having children is unethical because they might suffer.

Book review: Going Inside: A Tour Round a Single Moment of Consciousness, by John McCrone.

This book improved my understanding of how various parts of the brain interact, and of how long it takes the brain to process and react to sensory data. But there were many times when I wondered whether it was worth finishing, and I wish I had given up before the last few chapters, which approach consciousness from angles other than neuroscience.

Too much of the book is devoted to attacking naive versions of reductionism and computational models of the brain. His claim that “chaos theory electrified science” is wrong. It electrified some reports about science, but has done little to create better models or testable predictions.

It’s misleading for him to claim the difference between human and animal consciousness “is terribly simple. Animals are locked into the present tense.” There are many hints that animals have some thoughts about the future and past, and it’s hard enough to evaluate those thoughts that we need to be cautious about denying that they think like us. He suggests that language and grammar provide unique abilities to think about the future. But I’m fairly sure I can analyze the future without using language, using mostly visual processing to plan a route I’m going to kayak through some rapids, or to imagine an opponent’s next chess move. I expect animals have some abilities along those lines. Human language must provide some improved ability to think about the future, but I find it hard to specify those abilities.

Book review: Counting Sheep: The Science and Pleasures of Sleep and Dreams by Paul Martin.

This book makes convincing claims that most people give too little thought to an activity that occupies a large fraction of our life.

It has lots of little pieces of information which can be read as independent essays. Here are some claims I found interesting:

  • “sleepiness is responsible for far more deaths on the roads than alcohol or drugs”.
  • Tired people rate their abilities higher than people who slept well do.
  • Poor sleep contributes to poor health a good deal more than medical diagnoses suggest, but hospitals are designed in ways that hinder patients’ sleep.
  • Idle time was apparently a status symbol up to a century ago, now being busy is a status symbol. This should have economic implications that someone ought to explore in depth.
  • People in a vegetative state have REM sleep. This sounds like cause to re-evaluate the label we apply to that state.

While the book has many references, it doesn’t connect specific claims to references, and I’m sometimes left wondering why I should believe a claim. How can boredom be a modern concept? When he says “no person has ever gone completely without sleep for more than a few days”, how does he know he can dismiss people who claim to have not slept for years?

Book review: Seeing Red: A Study in Consciousness (Mind/Brain/Behavior Initiative), by Nicholas Humphrey.

This book provides a clear and simple description of phenomena that are often described as qualia, and a good guess about how and why they might have evolved as convenient ways for one part of a brain to get useful information from other parts. It uses examples of blindsight to clarify the difference between using sensory input and being aware of that input.

I liked the description of consciousness as being “temporally thick” rather than being about an instantaneous “now”, suggesting that it includes pieces of short-term memory and possibly predictions about the next few seconds.

The book won’t stop people from claiming that there’s still something mysterious about qualia, but it will make it hard for them to claim that they have a well-posed question that hasn’t been answered. It avoids most debates over meanings of words by usually sticking to simpler and less controversial words than qualia, and only using the word consciousness in ways that are relatively uncontroversial.

The book is short and readable, yet the important parts of it are concise enough that it could be adequately expressed in a shorter essay.