neuroscience


I’ve been dedicating a fair amount of my time recently to investigating whole brain emulation (WBE).

As computational power continues to grow, the feasibility of emulating a human brain at a reasonable speed becomes increasingly plausible.

While the connectome data alone seems insufficient to fully capture and replicate human behavior, recent advancements in scanning technology have provided valuable insights into distinguishing different types of neural connections. I’ve heard suggestions that combining this neuron-scale data with higher-level information, such as fMRI or EEG, might hold the key to unlocking WBE. However, the evidence is not yet conclusive enough for me to make any definitive statements.

I’ve heard some talk about a new company aiming to achieve WBE within the next five years. While this timeline aligns suspiciously with the typical venture capital horizon for industries with weak patent protection, I believe there is a non-negligible chance of success within the next decade – perhaps exceeding 10%. As a result, I’m actively exploring investment opportunities in this company.

There has also been speculation about the potential of WBE to aid in AI alignment efforts. However, I remain skeptical about this prospect. For WBE to make a significant impact on AI alignment, it would require not only an acceleration in WBE progress, but also either a slowdown in AI capability advances as they approach human levels, or the assumption that the primary risks from AI emerge only when it substantially surpasses human intelligence.

My primary motivation for delving into WBE stems from a personal desire to upload my own mind. The potential benefits of WBE for those who choose not to upload remain unclear, and I don’t know how to predict its broader societal implications.

Here are some videos that influenced my recent increased interest. Note that I’m relying heavily on the reputations of the speakers when deciding how much weight to give to their opinions.

Some relevant prediction markets:

Additionally, I’ve been working on some of the suggestions mentioned in the first video. I’m sharing my code and analysis on Colab. My aim is to evaluate the resilience of language models to the types of errors that might occur during the brain scanning process. While the results provide some reassurance, their value heavily relies on assumptions about the importance of low-confidence guesses made by the emulated mind.
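
Here’s a minimal sketch of the kind of robustness test I mean (not the actual Colab notebook; the model name and noise scale are arbitrary illustrative choices): perturb a language model’s weights with random noise, as a crude stand-in for scanning errors, and see how much its predictions degrade.

```python
# Minimal sketch: add Gaussian noise to a language model's weights and
# compare perplexity before and after. Model and noise scale are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # small model, just for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

def perplexity(m):
    """Perplexity of the model on the sample text (lower is better)."""
    with torch.no_grad():
        loss = m(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print("baseline perplexity:", perplexity(model))

# Add independent Gaussian noise to every weight, scaled to each tensor's
# typical magnitude -- a rough analogue of per-synapse measurement error.
noise_scale = 0.01
with torch.no_grad():
    for p in model.parameters():
        p.add_(torch.randn_like(p) * p.abs().mean() * noise_scale)

print("perturbed perplexity:", perplexity(model))
```

Sweeping noise_scale upward gives a crude curve of how gracefully the model degrades, which is the sort of reassurance (or lack of it) I’m looking for.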

Book review: Surfing Uncertainty: Prediction, Action, and the Embodied Mind, by Andy Clark.

Surfing Uncertainty describes minds as hierarchies of prediction engines. Most behavior involves interactions between a stream of information that uses low-level sensory data to adjust higher level predictive models of the world, and another stream of data coming from high-level models that guides low-level sensory processes to better guess the most likely interpretations of ambiguous sensory evidence.

Clark calls this a predictive processing (PP) model; others refer to it as predictive coding.
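
To make the two streams a bit more concrete, here’s a toy numerical sketch of my own (not from the book): a single high-level estimate is repeatedly revised by bottom-up prediction errors, while its top-down prediction is what the low level compares against incoming sensory data.

```python
# Toy two-level predictive coding loop (my own illustration, not the book's).
# The higher level holds an estimate of a hidden cause; the lower level
# receives noisy sensory samples. Bottom-up prediction errors revise the
# high-level estimate; the top-down prediction is what the low level
# compares against incoming data.
import numpy as np

rng = np.random.default_rng(0)
true_cause = 3.0        # hidden state of the world
estimate = 0.0          # high-level model's current guess
learning_rate = 0.1     # how strongly errors revise the model

for step in range(50):
    sensation = true_cause + rng.normal(0, 0.5)   # noisy low-level input
    prediction = estimate                          # top-down prediction
    error = sensation - prediction                 # bottom-up prediction error
    estimate += learning_rate * error              # revise the model

print(f"final estimate: {estimate:.2f} (true cause: {true_cause})")
```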

The book is full of good ideas, presented in a style that sapped my curiosity.

Jeff Hawkins has a more eloquent book about PP (On Intelligence), which focuses on how PP might be used to create artificial intelligence. The underwhelming progress of the company Hawkins started to capitalize on these ideas suggests it wasn’t the breakthrough that AI researchers were groping for. In contrast, Clark focuses on how PP helps us understand existing minds.

The PP model clearly has some value. The book was a bit more thorough than I wanted at demonstrating that. Since I didn’t find that particularly new or surprising, I’ll focus most of this review on a few loose threads that the book left dangling. So don’t treat this as a summary of the book (see Slate Star Codex if you want that, or if my review is too cryptic to understand), but rather as an exploration of the questions that the book provoked me to think about.


[Warning: long post, of uncertain value, with annoyingly uncertain conclusions.]

This post will focus on how hardware (CPU power) will affect AGI timelines. I will undoubtedly overlook some important considerations; this is just a model of some important effects that I understand how to analyze.

I’ll make some effort to approach this as if I were thinking about AGI timelines for the first time, focusing on strategies that I use in other domains.

I’m something like 60% confident that the most important factor in the speed of AI takeoff will be the availability of computing power.

I’ll focus here on the time to human-level AGI, but I suspect this reasoning implies getting from there to superintelligence at speeds that Bostrom would classify as slow or moderate.

Book review: Into the Gray Zone: A Neuroscientist Explores the Border Between Life and Death, by Adrian Owen.

Too many books and talks have gratuitous displays of fMRIs and neuroscience. At last, here’s a book where fMRIs are used with fairly good reason, and neuroscience is explained only when that’s appropriate.

Owen provides evidence of near-normal brain activity in a modest fraction of people who had been classified as being in a persistent vegetative state. They are capable of answering yes or no to most questions, and show signs of understanding the plots of movies.

Owen believes this evidence is enough to say they’re conscious. I suspect he’s mostly right about that, and that they do experience much of the brain function that is typically associated with consciousness. Owen doesn’t have any special insights into what we mean by the word consciousness. He mostly just investigates how to distinguish between near-normal mental activity and seriously impaired mental activity.

So what were neurologists previously using to classify people as vegetative? As far as I can tell, they were diagnosing based on a lack of motor responses, even though they were aware of an alternate diagnosis, total locked-in syndrome, with identical symptoms. The terms locked-in syndrome and persistent vegetative state were both coined (in part) by the same person (though I’m unclear who coined the term total locked-in syndrome).

My guess is that the diagnoses have been influenced by a need for certainty (whose need? Family members’? Doctors’? It’s not obvious).

The book has a bunch of mostly unremarkable comments about ethics. But I was impressed by Owen’s observation that people misjudge whether they’d want to die if they end up in a locked-in state. So how likely is it they’ll mispredict what they’d want in other similar conditions? I should have deduced this from the book Stumbling on Happiness, but I failed to think about it.

I’m a bit disturbed by Owen’s claim that late-stage Alzheimer’s patients have no sense of self. He doesn’t cite evidence for this conclusion, and his research should hint to him that it would be quite hard to get good evidence on this subject.

Most books written by scientists who made interesting discoveries attribute the author’s success to their competence. This book provides clear evidence for the accidental nature of at least some science. Owen could easily have gotten no signs of consciousness from the first few patients he scanned. Given the effort needed for the scans, I can imagine that this would have resulted in a mistaken consensus of experts that vegetative states were being diagnosed correctly.

Book review: The Human Advantage: A New Understanding of How Our Brain Became Remarkable, by Suzana Herculano-Houzel.

I used to be uneasy about claims that the human brain was special because it is large for our body size: relative size just didn’t seem like it could be the best measure of whatever enabled intelligence.

At last, Herculano-Houzel has invented a replacement for that measure. Her impressive technique for measuring the number of neurons in a brain has revolutionized this area of science.

We can now see an important connection between the number of cortical neurons and cognitive ability. I’m glad that the book reports on research that compares the cognitive abilities of enough species to enable moderately objective tests of the relevant hypotheses (although the research still has much room for improvement).

We can also see that the primate brain is special, in a way that enables large primates to be smarter than similarly sized nonprimates. And that humans are not very special for a primate of our size, although energy constraints make it tricky for primates to reach our size.

I was able to read the book quite quickly. Much of it is arranged in an occasionally suspenseful story about how the research was done. It doesn’t have lots of information, but the information it does have seems very new (except for the last two chapters, where Herculano-Houzel gets farther from her area of expertise).

Added 2016-08-25:
Wikipedia has a List of animals by number of neurons which lists the long-finned pilot whale as having 37.2 billion cortical neurons, versus 21 billion for humans.

The paper reporting that result disagrees somewhat with Herculano-Houzel:

Our results underscore that correlations between cognitive performance and absolute neocortical neuron numbers across animal orders or classes are of limited value, and attempts to quantify the mental capacity of a dolphin for cross-species comparisons are bound to be controversial.

But I don’t see much of an argument against the correlation between intelligence and cortical neuron numbers. The lack of good evidence about long-finned pilot whale intelligence mainly implies we ought to be uncertain.

Connectomes are not sufficient by themselves to model brain behavior. Brain modeling has been limited more by the need for good information about the dynamic behavior of individual neurons.

The paper Whole-brain calcium imaging with cellular resolution in freely behaving Caenorhabditis elegans looks like an important step toward overcoming this limitation. The authors observed the behavior of many individual neurons in a moving nematode.

They still can’t reliably map the neurons they observed to standard C. elegans neuron names:

The neural position validation experiments presented here, however, have led us to conclude that worm-to-worm variability in neuronal position in the head is large enough to pose a formidable challenge for neuron identification.

But there are enough hints about which neurons do what that I’m confident this problem can be solved if enough effort is devoted to it.
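
To make the identification problem concrete, here’s a toy sketch of my own (not the paper’s method): match observed neuron positions to a canonical atlas by minimizing total distance, and watch the matching degrade as positional variability grows. All the numbers are made up for illustration.

```python
# Toy sketch (not the paper's method): match observed neuron positions to an
# atlas of canonical positions by minimizing total Euclidean distance.
# When positional jitter is comparable to inter-neuron spacing, assignments
# start to go wrong -- roughly the problem the authors describe.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
n_neurons = 20
atlas = rng.uniform(0, 100, size=(n_neurons, 3))   # canonical 3-D positions

jitter = 15.0                                       # worm-to-worm variability (illustrative)
observed = atlas + rng.normal(0, jitter, size=atlas.shape)

cost = cdist(observed, atlas)                       # pairwise distances
rows, cols = linear_sum_assignment(cost)            # globally optimal matching
accuracy = np.mean(cols == np.arange(n_neurons))
print(f"fraction of neurons correctly identified: {accuracy:.2f}")
```

Rerunning with smaller jitter (or adding other cues, such as activity patterns, to the cost matrix) is one way to see how much extra information the identification problem needs.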

My biggest uncertainty concerns applying this approach to mammalian brains. Mammalian brains aren’t transparent enough to be imaged this way. Are C. elegans neurons similar enough that we can just apply the same models to both? I suspect not.

Book review: The Eureka Factor: Aha Moments, Creative Insight, and the Brain, by John Kounios and Mark Beeman.

This book shows that insight and analysis are different modes of thought, and that small interventions can influence how insightful we are. It’s done in a clearly analytical (not insightful) style.

They devote a good deal of effort to demonstrating that the two modes of thought differ in more ways than simply how people report them. It’s unclear why that would surprise anyone now that behaviorism is unpopular. Nor is it clear what use we can make of evidence that different parts of the brain are involved in the two modes.

I’m mildly impressed that researchers are able to objectively measure insight at all. They mostly study word problems that can be solved in something like 30 seconds. They provide some hints that those experiments study the same patterns of thought that are used to solve big tasks that simmer in our subconscious for days. But there’s some risk that the research is overlooking something unique to those harder problems.

The “creativity crisis” could have been an important part of the book. But their brief explanation is to blame the obvious suspects: environments of constant stimulation due to social media, cellphones, games, etc.

One problem with that explanation is that the decline in creativity scores since 1990 is strongest in kindergartners through 3rd graders. I don’t find it very plausible that they’ve experienced a larger increase in those hyper-stimuli than older kids have.

It’s almost as if the authors got their understanding of the alleged crisis from a blog post rather than from the peer reviewed article that they cite.

The peer reviewed article suggests a better explanation: less time for free play.

Outdoor activity is valuable, according to the book, at least for short-term changes in whether our mood is creative. The “crisis” could be due to less recess time at school and a decline in free-range parenting. Were the tests taken shortly after a recess up through 1990, and taken after hours of lectures more recently? If so, the decline in measured creativity would reflect mostly short-term mood changes, leaving me uncertain whether I should worry about longer lasting effects.

The book provides some advice for being more insightful. It has caused me to schedule tasks that might require creativity after moderate hikes, or earlier in the day than I previously did.

The book has made me more likely to try applying ideas from the CFAR Againstness class to inducing creative moods.

The book hints at lots of room for computer games to promote a more insightful mood than the typical game does (e.g. via requiring players to expand their attention to fill the screen). But the authors aren’t very helpful at suggesting ways to identify games that are more insight-compatible. The closest I’ve come to practical ideas about games is that I ought to replace them when possible with fiction that promotes far-mode thinking (i.e. fantasy and science fiction).

My intuition says that insight research is still in its infancy, and that we should hope for better books in this category before long.

Book review: The Myth of Mirror Neurons: The Real Neuroscience of Communication and Cognition, by Gregory Hickok.

This book criticizes hype from scientists and the media about embodied cognition, mirror neurons, and the differences between the left and right brain hemispheres. Popular accounts of these ideas contain a little bit of truth, but most versions either explain very little or provide misleading explanations.

A good deal of our cognition is embodied in the sense that it’s heavily dependent on sensory and motor activity. But we have many high-level thoughts that don’t fit this model well, such as those we generate when we don’t have sensory or motor interactions that are worth our attention (often misleadingly called a “resting state”).

Humans probably have mirror neurons. They have some value in helping us imitate others. But that doesn’t mean they have much effect on our ability to understand what we’re imitating. Our ability to understand a dog wagging its tail isn’t impaired by our inability to wag our tails. Parrots’ ability to imitate our speech isn’t very effective at helping them understand it.

Mirror neurons have also been used to promote the “broken mirror theory” of autism (with the suggestion that a malfunction related to mirror neurons impairs empathy). Hickok shows that the intense world hypothesis (which I’ve blogged about before) is more consistent with the available evidence.

The book clarified my understanding of the brain a bit. But most of it seems unimportant. I had sort of accepted mild versions of the mirror neuron and left-brain/right-brain hype, but doing so didn’t have any obvious effects on my other beliefs or my actions. It was only at the book’s end (discussing autism) that I could see how the hype might matter.

Most of the ideas that he criticizes don’t do much harm, because they wouldn’t pay much rent if true. Identifying which neurons do what has negligible effect on how I model a person’s mind unless I’m doing something unusual like brain surgery.

Book review: Self Comes to Mind: Constructing the Conscious Brain by Antonio R. Damasio.

This book describes many aspects of human minds in ways that aren’t wrong, but the parts that seem novel don’t have important implications.

He devotes a sizable part of the book to describing how memory works, but I don’t understand memory any better than I did before.

His perspective often seems slightly confusing or wrong. The clearest example I noticed was his belief (in the context of pre-historic humans) that “it is inconceivable that concern [as expressed in special treatment of the dead] or interpretation could arise in the absence of a robust self”. There may be good reasons for considering it improbable that humans developed burial rituals before developing Damasio’s notion of self, but anyone who is familiar with Julian Jaynes (as Damasio is) ought to be able to imagine that (and stranger ideas).

He pays a lot of attention to the location in the brain of various mental processes (e.g. his somewhat surprising claim that the brainstem plays an important role in consciousness), but rarely suggests how we could draw any inferences from that about how normal minds behave.

Book review: The Ego Tunnel: The Science of the Mind and the Myth of the Self, by Thomas Metzinger.

This book describes aspects of consciousness in ways that are often, but not consistently, clear and informative. His ideas are not revolutionary, but will clarify our understanding.

I didn’t find his tunnel metaphor very helpful.

I like his claim that “conscious information is exactly that information that must be made available for every single one of your cognitive capacities at the same time”. That may be an exaggeration, but it describes an important function of consciousness.

He makes surprisingly clear and convincing arguments that there are degrees of consciousness, so that some other species probably have some but not all of what we think of as human consciousness. He gives interesting examples of ways that humans can be partially conscious, e.g. people with Cotard’s Syndrome can deny their own existence.

His discussion of ethical implications of neuroscience points out some important issues to consider, but I’m unimpressed with his conclusion that we shouldn’t create conscious machines. He relies on something resembling the Precautionary Principle that says we should never risk causing suffering in an artificial entity. As far as I can tell, the same reasoning would imply that having children is unethical because they might suffer.