
Book review: Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines and How It Will Change Our Lives, by Miguel Nicolelis.

This book presents some ambitious visions of how our lives will be changed by brain-machine and brain-to-brain (“mind meld”) interfaces, along with some good reasons to hope that we will adapt well to them and come to think of machines and other people as if they were parts of our bodies. Many people will have trouble accepting his broad notion of personal identity, but I doubt they will find good arguments against it.

But I wish I’d skipped most of the first half, which focuses on the history of neuroscience research, with too much attention to debates over the extent to which brain functions are decentralized.

He’s disappointingly vague about the obstacles that researchers face. He hints at problems with how safe and durable an interface can be, but doesn’t tell us how serious those problems are or whether progress is being made on them. I also wanted more specific data about how much information can be communicated in each direction, how precisely robotic positioning can be controlled, and how quickly those capabilities are improving.

Book review: Going Inside: A Tour Round a Single Moment of Consciousness, by John McCrone.

This book improved my understanding of how various parts of the brain interact, and of how long the brain takes to process and react to sensory data. But there were many times when I wondered whether it was worth finishing, and I wish I had given up before the last few chapters, which focus on consciousness rather than neuroscience.

Too much of the book is devoted to attacking naive versions of reductionism and computational models of the brain. His claim that “chaos theory electrified science” is wrong. It electrified some reports about science, but has done little to create better models or testable predictions.

It’s misleading for him to claim that the difference between human and animal consciousness “is terribly simple. Animals are locked into the present tense.” There are many hints that animals have some thoughts about the future and the past, and those thoughts are hard enough to evaluate that we should be cautious about denying that animals think like us. He suggests that language and grammar provide unique abilities to think about the future. But I’m fairly sure I can analyze the future without using language, relying mostly on visual processing to plan a route I’m going to kayak through some rapids, or to imagine an opponent’s next chess move. I expect animals have some abilities along those lines. Human language must provide some improved ability to think about the future, but I find it hard to specify what that improvement is.

Book review: On Intelligence, by Jeff Hawkins.

This book presents strong arguments that prediction is a more important part of intelligence than most experts realize. It outlines a fairly simple set of general-purpose rules that may describe some important aspects of how small groups of neurons interact to produce intelligent behavior. It provides a better theory of the role of the hippocampus than I’ve seen before.

I wouldn’t call this book a major breakthrough, but I expect it will produce some nontrivial advances in the understanding of the human brain.

The most disturbing part of this book is the section on the risks of AI. He claims that AIs will just be tools, but he shows no sign of having thought about the issues involved beyond deciding that an AI is unlikely to have human motives. That still leaves a wide variety of other possible goal systems, many of which would be just as dangerous. It’s possible that he sees easy ways to ensure that an AI is always obedient, but there are many approaches to AI for which I don’t think this is possible (for instance, evolutionary programming looks like it would select for something resembling a survival instinct; the toy sketch below illustrates this), and this book doesn’t clarify what goals Hawkins’ approach is likely to build into his software. It is easy to imagine that he would need to build in goals other than obedience in order to get his system to do any learning. If this is any indication of the care he is taking to ensure that his “tools” are safe, I hope he fails to produce intelligent software.
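
To make that parenthetical concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the book; the population size, mutation rate, and single “comply with shutdown” gene are all arbitrary assumptions). Agents earn fitness only while they keep running, so plain truncation selection drifts toward genomes that ignore shutdown requests:

```python
import random

# Arbitrary illustrative parameters, not from the book.
POP_SIZE, GENERATIONS, LIFETIME = 200, 40, 20
SHUTDOWN_RATE = 0.5  # chance of a shutdown request per step

def fitness(comply_prob):
    """Steps survived out of LIFETIME. Complying with a shutdown
    request ends the run early, so compliance costs fitness."""
    steps = 0
    for _ in range(LIFETIME):
        if random.random() < SHUTDOWN_RATE and random.random() < comply_prob:
            break
        steps += 1
    return steps

def evolve():
    # Each genome is a single gene: P(comply with a shutdown request).
    pop = [random.random() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Truncation selection: the longest-lived fifth become parents.
        parents = sorted(pop, key=fitness, reverse=True)[:POP_SIZE // 5]
        # Offspring copy a parent's gene with small Gaussian mutation.
        pop = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
               for _ in range(POP_SIZE)]
    return sum(pop) / POP_SIZE

if __name__ == "__main__":
    random.seed(0)
    print(f"mean P(comply) after evolution: {evolve():.2f}")  # typically near 0
```

Nothing in the fitness function mentions survival; resisting shutdown emerges simply because agents that comply stop accruing fitness. That is the sense in which evolutionary approaches can quietly select for goals other than obedience.
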
For more discussion of AI risks, see sl4.org. In particular, I have a description there of how one might go about safely implementing an obedient AI. At the time I was thinking of Pei Wang’s NARS as the best approach to AI, and with that approach it seems natural for an AI to have no goals that are inconsistent with obedience. Hawkins’ approach seems approximately as powerful as NARS, but more likely to tempt designers into building in goals other than obedience.