All posts tagged neuroscience

Book review: The Human Advantage: A New Understanding of How Our Brain Became Remarkable, by Suzana Herculano-Houzel.

I used to be uneasy about claims that the human brain was special because it is large for our body size: relative size just didn’t seem like it could be the best measure of whatever enabled intelligence.

At last, Herculano-Houzel has invented a replacement for that measure. Her impressive technique for measuring the number of neurons in a brain has revolutionized this area of science.

We can now see an important connection between the number of cortical neurons and cognitive ability. I’m glad that the book reports on research that compares the cognitive abilities of enough species to enable moderately objective tests of the relevant hypotheses (although the research still has much room for improvement).

We can also see that the primate brain is special, in a way that enables large primates to be smarter than similarly sized nonprimates. And that humans are not very special for a primate of our size, although energy constraints make it tricky for primates to reach our size.

I was able to read the book quite quickly. Much of it is arranged in an occasionally suspenseful story about how the research was done. It doesn’t have lots of information, but the information it does have seems very new (except for the last two chapters, where Herculano-Houzel gets farther from her area of expertise).

Added 2016-08-25:
Wikipedia has a List of animals by number of neurons which lists the long-finned pilot whale as having 37.2 billion cortical neurons, versus 21 billion for humans.

The paper reporting that result disagrees somewhat with Herculano-Houzel:

Our results underscore that correlations between cognitive performance and absolute neocortical neuron numbers across animal orders or classes are of limited value, and attempts to quantify the mental capacity of a dolphin for cross-species comparisons are bound to be controversial.

But I don’t see much of an argument against the correlation between intelligence and cortical neuron numbers. The lack of good evidence about long-finned pilot whale intelligence mainly implies we ought to be uncertain.

Connectomes are not sufficient by themselves to model brain behavior. Brain modeling has been limited more by the need for good information about the dynamic behavior of individual neurons.
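To make concrete what "the dynamic behavior of individual neurons" means here — the part a connectome alone doesn't give you — here is a toy leaky integrate-and-fire simulation. This is the standard textbook model with illustrative parameters, not anything from the book or the papers discussed:

```python
# Toy leaky integrate-and-fire neuron: an example of the kind of
# single-neuron dynamics that a connectome (wiring diagram) alone
# doesn't capture. Parameters are illustrative, not fitted to data.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Euler-integrate dV/dt = (v_rest - V + R*I) / tau; spike and reset
    whenever V crosses threshold. Returns the voltage trace (volts) and
    the spike times (seconds)."""
    v = v_rest
    spikes = []
    trace = []
    for step, current in enumerate(input_current):
        dv = (v_rest - v + resistance * current) / tau
        v += dv * dt
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

# A constant 2 nA input for 100 ms drives the neuron past threshold
# repeatedly, producing a regular spike train.
trace, spikes = simulate_lif([2e-9] * 1000)
```

Even this crude model has free parameters (time constant, threshold, reset behavior) that the wiring diagram says nothing about, which is why whole-brain recordings like the ones below matter.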

The paper Whole-brain calcium imaging with cellular resolution in freely behaving Caenorhabditis elegans looks like an important step toward overcoming this limitation. The authors observed the behavior of many individual neurons in a moving nematode.

They still can’t reliably map the neurons they observed to standard C. elegans neuron names:

The neural position validation experiments presented here, however, have led us to conclude that worm-to-worm variability in neuronal position in the head is large enough to pose a formidable challenge for neuron identification.

But there are enough hints about which neurons do what that I’m confident this problem can be solved if enough effort is devoted to it.

My biggest uncertainty concerns applying this approach to mammalian brains. Mammalian brains aren’t transparent enough to be imaged this way. Are C. elegans neurons similar enough that we can just apply the same models to both? I suspect not.

Book review: The Eureka Factor: Aha Moments, Creative Insight, and the Brain, by John Kounios and Mark Beeman.

This book shows that insight and analysis are different modes of thought, and that small interventions can influence how insightful we are. It’s done in a clearly analytical (not insightful) style.

They devote a good deal of effort to demonstrating that the two modes of thought differ in more ways than simply how people report them. It’s unclear why that would surprise anyone now that behaviorism is unpopular. Nor is it clear what use we can make of evidence that different parts of the brain are involved in the two modes.

I’m mildly impressed that researchers are able to objectively measure insight at all. They mostly study word problems that can be solved in something like 30 seconds. They provide some hints that those experiments study the same patterns of thought that are used to solve big tasks that simmer in our subconscious for days. But there’s some risk that the research is overlooking something unique to those harder problems.

The “creativity crisis” could have been an important part of the book. But their brief explanation is to blame the obvious suspects: environments of constant stimulation due to social media, cellphones, games, etc.

One problem with that explanation is that the decline in creativity scores since 1990 is strongest in kindergartners through 3rd graders. I don’t find it very plausible that they’ve experienced a larger increase in those hyper-stimuli than older kids have.

It’s almost as if the authors got their understanding of the alleged crisis from a blog post rather than from the peer reviewed article that they cite.

The peer reviewed article suggests a better explanation: less time for free play.

Outdoor activity is valuable, according to the book, at least for short-term changes in how creative our mood is. The “crisis” could be due to less recess time at school and a decline in free-range parenting. Were the tests taken shortly after a recess up through 1990, and taken after hours of lectures more recently? If so, the decline in measured creativity would reflect mostly short-term mood changes, leaving me uncertain whether I should worry about longer lasting effects.

The book provides some advice for being more insightful. It has caused me to schedule tasks that might require creativity after moderate hikes, or earlier in the day than I previously did.

The book has made me more likely to try applying ideas from the CFAR Againstness class to inducing creative moods.

The book hints at lots of room for computer games to promote a more insightful mood than the typical game does (e.g. via requiring players to expand their attention to fill the screen). But the authors aren’t very helpful at suggesting ways to identify games that are more insight-compatible. The closest I’ve come to practical ideas about games is that I ought to replace them when possible with fiction that promotes far-mode thinking (i.e. fantasy and science fiction).

My intuition says that insight research is still in its infancy, and that we should hope for better books in this category before long.

Book review: The Myth of Mirror Neurons: The Real Neuroscience of Communication and Cognition, by Gregory Hickok.

This book criticizes hype from scientists and the media about embodied cognition, mirror neurons, and the differences between the left and right brain hemispheres. Popular accounts of these ideas contain a little bit of truth, but most versions either explain very little or provide misleading explanations.

A good deal of our cognition is embodied in the sense that it’s heavily dependent on sensory and motor activity. But we have many high-level thoughts that don’t fit this model well, such as those we generate when we don’t have sensory or motor interactions that are worth our attention (often misleadingly called a “resting state”).

Humans probably have mirror neurons. They have some value in helping us imitate others. But that doesn’t mean they have much effect on our ability to understand what we’re imitating. Our ability to understand a dog wagging its tail isn’t impaired by our inability to wag our tails. Parrots’ ability to imitate our speech isn’t very effective at helping them understand it.

Mirror neurons have also been used to promote the “broken mirror theory” of autism (with the suggestion that a malfunction related to mirror neurons impairs empathy). Hickok shows that the intense world hypothesis (which I’ve blogged about before) is more consistent with the available evidence.

The book clarified my understanding of the brain a bit. But most of it seems unimportant. I had sort of accepted mild versions of the mirror neuron and left-brain/right-brain hype, but doing so didn’t have any obvious effects on my other beliefs or my actions. It was only at the book’s end (discussing autism) that I could see how the hype might matter.

Most of the ideas that he criticizes don’t do much harm, because they wouldn’t pay much rent if true. Identifying which neurons do what has negligible effect on how I model a person’s mind unless I’m doing something unusual like brain surgery.

Book review: Self Comes to Mind: Constructing the Conscious Brain by Antonio R. Damasio.

This book describes many aspects of human minds in ways that aren’t wrong, but the parts that seem novel don’t have important implications.

He devotes a sizable part of the book to describing how memory works, but I don’t understand memory any better than I did before.

His perspective often seems slightly confusing or wrong. The clearest example I noticed was his belief (in the context of pre-historic humans) that “it is inconceivable that concern [as expressed in special treatment of the dead] or interpretation could arise in the absence of a robust self”. There may be good reasons for considering it improbable that humans developed burial rituals before developing Damasio’s notion of self, but anyone who is familiar with Julian Jaynes (as Damasio is) ought to be able to imagine that (and stranger ideas).

He pays a lot of attention to the location in the brain of various mental processes (e.g. his somewhat surprising claim that the brainstem plays an important role in consciousness), but rarely suggests how we could draw any inferences from that about how normal minds behave.

Book review: The Ego Tunnel: The Science of the Mind and the Myth of the Self, by Thomas Metzinger.

This book describes aspects of consciousness in ways that are often, but not consistently, clear and informative. His ideas are not revolutionary, but will clarify our understanding.

I didn’t find his tunnel metaphor very helpful.

I like his claim that “conscious information is exactly that information that must be made available for every single one of your cognitive capacities at the same time”. That may be an exaggeration, but it describes an important function of consciousness.

He makes surprisingly clear and convincing arguments that there are degrees of consciousness, so that some other species probably have some but not all of what we think of as human consciousness. He gives interesting examples of ways that humans can be partially conscious, e.g. people with Cotard’s Syndrome can deny their own existence.

His discussion of ethical implications of neuroscience points out some important issues to consider, but I’m unimpressed with his conclusion that we shouldn’t create conscious machines. He relies on something resembling the Precautionary Principle that says we should never risk causing suffering in an artificial entity. As far as I can tell, the same reasoning would imply that having children is unethical because they might suffer.

Book review: Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines and How It Will Change Our Lives, by Miguel Nicolelis.

This book presents some ambitious visions of how our lives will be changed by brain-machine and brain-to-brain (“mind meld”) interfaces, along with some good reasons to hope that we will adapt well to them and think of machines and other people as if they are parts of our body. Many people will have trouble accepting his broad notion of personal identity, but I doubt they will find good arguments against it.

But I wish I’d skipped most of the first half, which focuses on the history of neuroscience research, with too much attention to debates over the extent to which brain functions are decentralized.

He’s disappointingly vague about the obstacles that researchers face. He hints at problems with how safe and durable an interface can be, but doesn’t tell us how serious they are, whether progress is being made on them, etc. I also wanted more specific data about how much information could be communicated each way, how precisely robotic positioning can be controlled, and how much of a trend there is toward improving those.

Book review: Going Inside: A Tour Round a Single Moment of Consciousness, by John McCrone.

This book improved my understanding of how various parts of the brain interact, and of how long it takes the brain to process and react to sensory data. But there were many times when I wondered whether it was worth finishing, and I wish I had given up before the last few chapters, which focus on aspects of consciousness other than neuroscience.

Too much of the book is devoted to attacking naive versions of reductionism and computational models of the brain. His claim that “chaos theory electrified science” is wrong. It electrified some reports about science, but has done little to create better models or testable predictions.

It’s misleading for him to claim the difference between human and animal consciousness “is terribly simple. Animals are locked into the present tense.” There are many hints that animals have some thoughts about the future and past, and it’s hard enough to evaluate those thoughts that we need to be cautious about denying that they think like us. He suggests that language and grammar provide unique abilities to think about the future. But I’m fairly sure I can analyze the future without using language, using mostly visual processing to plan a route I’m going to kayak through some rapids, or to imagine an opponent’s next chess move. I expect animals have some abilities along those lines. Human language must provide some improved ability to think about the future, but I find it hard to specify those abilities.

Book review: On Intelligence, by Jeff Hawkins.

This book presents strong arguments that prediction is a more important part of intelligence than most experts realize. It outlines a fairly simple set of general purpose rules that may describe some important aspects of how small groups of neurons interact to produce intelligent behavior. It provides a better theory of the role of the hippocampus than I’ve seen before.

I wouldn’t call this book a major breakthrough, but I expect that it will produce some nontrivial advances in the understanding of the human brain.

The most disturbing part of this book is the section on the risks of AI. He claims that AIs will just be tools, but he shows no sign of having given thought to any of the issues involved beyond deciding that an AI is unlikely to have human motives. But that leaves a wide variety of other possible goal systems, many of which would be as dangerous. It’s possible that he sees easy ways to ensure that an AI is always obedient, but there are many approaches to AI for which I don’t think this is possible (for instance, evolutionary programming looks like it would select for something resembling a survival instinct), and this book doesn’t clarify what goals Hawkins’ approach is likely to build into his software. It is easy to imagine that he would need to build in goals other than obedience in order to get his system to do any learning. If this is any indication of the care he is taking to ensure that his “tools” are safe, I hope he fails to produce intelligent software.

For more discussion of AI risks, see sl4.org. In particular, I have a description there of how one might go about safely implementing an obedient AI. At the time I was thinking of Pei Wang’s NARS as the best approach to AI, and with that approach it seems natural for an AI to have no goals that are inconsistent with obedience. Hawkins’ approach seems approximately as powerful as NARS, but more likely to tempt designers into building in goals other than obedience.