Book review: The Finders, by Jeffery A Martin.

This book is about the states of mind that Martin labels Fundamental Wellbeing.

These seem to be what people seek through meditation, but Martin carefully avoids focusing on Buddhism, and says that other spiritual approaches produce similar states of mind.

Martin approaches the subject as if he were an anthropologist. I expect that’s about as rigorous as we should hope for on many of the phenomena that he studies.

The most important change associated with Fundamental Wellbeing involves the weakening or disappearance of the Narrative-Self (i.e. the voice that seems to be the center of attention in most human minds).

I’ve experienced a weak version of that. Through a combination of meditation and CFAR ideas (and maybe The Mating Mind, which helped me think of the Narrative-Self as more of a press secretary than as a leader), I’ve substantially reduced the importance that my brain attaches to my Narrative-Self, and that has significantly reduced how much I’m bothered by negative stimuli.

Some more “advanced” versions of Fundamental Wellbeing also involve a loss of “self” – something along the lines of being one with the universe, or having no central locus or vantage point from which to observe the world. I don’t understand this very well. Martin suggests an analogy which describes this feeling as “zoomed-out”, i.e. the opposite extreme from Hyperfocus or a state of Flow. I guess that gives me enough hints to say that I haven’t experienced anything that’s very close to it.

I’m tempted to rephrase this as turning off what Dennett calls the Cartesian Theater. Many of the people that Martin studied seem to have discarded this illusion.

Alas, the book says little about how to achieve Fundamental Wellbeing. The people he studied tended to achieve it via some spiritual path, but it sounds like there was typically a good deal of luck involved. Martin has developed an allegedly more reliable path, available at FindersCourse.com, but it requires a rather inflexible commitment to a time-consuming schedule, and a fair amount of money.

Should I want to experience Fundamental Wellbeing?

Most people who experience it show a clear preference for remaining in that state. That’s clearly a medium-strength reason to suspect that I should want it, and it’s hard to see any counter to that argument.

The weak version of Fundamental Wellbeing that I’ve experienced tends to confirm that conclusion, although I see signs that some aspects require continuing attention to maintain, and the time required to do so sometimes seems large compared to the benefits.

Martin briefly discusses people who experienced Fundamental Wellbeing, and then rejected it. It reminds me of my reaction to an SSRI – it felt like I got a nice vacation, but vacation wasn’t what I wanted, since it conflicted with some of my goals for achieving life satisfaction. Those who reject Fundamental Wellbeing disliked the lack of agency and emotion (I think this refers only to some of the harder-to-achieve versions of Fundamental Wellbeing). That sounds like it overlaps a fair amount with what I experienced on the SSRI.

Martin reports that some of the people he studied have unusual reactions to pain, feeling bliss under circumstances that appear to involve lots of pain. I can sort of see how this is a plausible extreme of the effects that I understand, but it still sounds pretty odd.

Will the world be better if more people achieve Fundamental Wellbeing?

The world would probably be somewhat better. Some people become more willing and able to help others when they reduce their own suffering. But that’s partly offset by people with Fundamental Wellbeing feeling less need to improve themselves, and feeling less bothered by the suffering of others. So the net effect is likely just a minor benefit.

I expect that even in the absence of people treating each other better, the reduced suffering that’s associated with Fundamental Wellbeing would mean that the world is a better place.

However, it’s tricky to determine how important that is. Martin mentions a clear case of a person who said he felt no stress, but exhibited many physical signs of being highly stressed. Is that better or worse than being conscious of stress? I think my answer is very context-dependent.

If it’s so great, why doesn’t everyone learn how to do it?

  • Achieving Fundamental Wellbeing often causes people to have diminished interest in interacting with other people. Only a modest fraction of people who experience it attempt to get others to do so.
  • I presume it has been somewhat hard to understand how to achieve Fundamental Wellbeing, and why we should think it’s valuable.
  • The benefits are somewhat difficult to observe, and there are sometimes visible drawbacks. E.g. one anecdote describes a manager who became more generous with his company’s resources – likely good for some people, but at some cost to the company and/or his career.

Conclusion

The ideas in this book deserve to be more widely known.

I’m unsure whether that means lots of people should read this book. Maybe it’s more important just to repeat simple summaries of the book, and to practice more meditation.

[Note: I read a pre-publication copy that was distributed at the Transformative Technology conference.]

Book review: The Longevity Diet: Discover the New Science Behind Stem Cell Activation and Regeneration to Slow Aging, Fight Disease, and Optimize Weight, by Valter Longo.

Longo is a moderately competent researcher whose ideas about nutrition and fasting are mostly heading in the right general direction, but many of his details look suspicious.

He convinced me to become more serious about occasional, longer fasts, but I probably won’t use his products.

Food Delivery Reviews

New food delivery services are springing up like weeds.

Hopes

I’m now primarily interested in a substitute for restaurants. As I currently use them, restaurants provide variety in my food, but aren’t particularly convenient or healthy. Restaurant delivery services have been improving, but the user interfaces for ordering still seem clumsy and primitive (few restaurants seem to care enough to interface well with delivery services, and even fewer have both healthy food and adequate nutritional labeling).

Book review: Tripping over the Truth: The Return of the Metabolic Theory of Cancer Illuminates a New and Hopeful Path to a Cure, by Travis Christofferson.

This book is mostly a history of cancer research, focusing on competing grand theories, and the treatments suggested by the author’s preferred theory. That’s a simple theory where the prime cause of cancer is a switch to fermentation (known as the metabolic theory, or the Warburg hypothesis).

He describes in detail two promising treatments that were inspired by this theory: a drug based on 3-bromopyruvate (3BP), and a ketogenic diet.


Book(?) review: Microbial Burden: A Major Cause Of Aging And Age-Related Disease, by Michael Lustgarten.

This minibook has highly variable quality.

Lustgarten demonstrates clear associations between microbes and aging. That’s hardly newsworthy.

He’s much less clear when he switches to talking about causality. He says microbes are the root cause of aging, and occasionally provides weak evidence to support that.

I still have plenty of reason to suspect that many of those associations are due to frailty and declining immune systems, which let microbes take over more. Lustgarten doesn’t make the kind of argument that would convince me that the microbe –> senility causal path is more important than the senility –> microbe causal path.

He has a decent amount of practical advice that is likely to be quite healthy even if he’s wrong about the root cause of aging, including: eat lots of leaves, green peppers, and mushrooms, and use low-pH soap.

One confusing recommendation is to limit our protein intake to moderate levels.

He provides a nice graph of mortality as a function of BUN (see here for more evidence about BUN), which hints that we should reduce BUN by reducing protein intake.

He also notes that methionine restriction has significant evidence behind it, and methionine restriction requires restricting protein, especially animal proteins.

Yet I see some suggestions that protein (methionine) restriction is likely only helpful in people with kidney disease.

My impression is that high BUN mostly indicates poor health when it’s caused by kidney problems, and doesn’t provide much reason for reducing protein consumption, at least in people with healthy kidneys.

Lustgarten has since blogged about evidence (see the 7/11/2018 update) that higher protein intake helps reduce his homocysteine.

I have also noticed a (noisy) negative correlation between my protein consumption and my homocysteine levels. But that might be due to riboflavin – when I reduce my protein intake, I also reduce my riboflavin intake, since crickets are an important source of riboflavin for me. So I want to do more research into dietary protein before deciding to reduce it.

The book is too quick to dive into technical references, with limited descriptions of why they’re relevant. In many cases, I decided they provided only marginal support to his important points.

Read his blog before deciding whether to read the minibook. The blog focuses more on quantified-self-style reporting, and less on promoting a grand theory.

Book review: Principles: Life and Work, by Ray Dalio.

Most popular books get that way by having an engaging style. Yet this book’s style is mundane, almost forgettable.

Some books become bestsellers by being controversial. Others become bestsellers by manipulating readers’ emotions, e.g. by being fun to read, or by getting the reader to overestimate how profound the book is. Principles definitely doesn’t fit those patterns.

Some books become bestsellers because the author became famous for reasons other than his writings (e.g. Stephen Hawking, Donald Trump, and Bill Gates). Principles fits this pattern somewhat well: if an obscure person had published it, nothing about it would have triggered a pattern of readers enthusiastically urging their friends to read it. I suspect the average book in this category is rather pathetic, but I also expect there’s a very large variance in the quality of books in this category.

Principles contains an unusual amount of wisdom. But it’s unclear whether that’s enough to make it a good book, because it’s unclear whether it will convince readers to follow the advice. Much of the advice sounds like ideas that most of us already agree with. The wisdom comes more from selecting the most underutilized ideas than from being particularly novel. The main benefit is likely to be that people who were already on the verge of adopting the book’s advice will get one more nudge from an authority, providing the social reassurance they need.

Advice

Some of why I trust the book’s advice is that it overlaps a good deal with other sources from which I’ve gotten value, e.g. CFAR.

Key ideas include:

  • be honest with yourself
  • be open-minded
  • focus on identifying and fixing your most important weaknesses


Eric Drexler has published a book-length paper on AI risk, describing an approach that he calls Comprehensive AI Services (CAIS).

His primary goal seems to be reframing AI risk discussions to use a rather different paradigm than the one that Nick Bostrom and Eliezer Yudkowsky have been promoting. (There isn’t yet any paradigm that’s widely accepted, so this isn’t a Kuhnian paradigm shift; it’s better characterized as an amorphous field that is struggling to establish its first paradigm). Dueling paradigms seems to be the best that the AI safety field can manage to achieve for now.

I’ll start by mentioning some important claims that Drexler doesn’t dispute:

  • an intelligence explosion might happen somewhat suddenly, in the fairly near future;
  • it’s hard to reliably align an AI’s values with human values;
  • recursive self-improvement, as imagined by Bostrom / Yudkowsky, would pose significant dangers.

Drexler likely disagrees about some of the claims made by Bostrom / Yudkowsky on those points, but he shares enough of their concerns about them that those disagreements don’t explain why Drexler approaches AI safety differently. (Drexler is more cautious than most writers about making any predictions concerning these three claims).

CAIS isn’t a full solution to AI risks. Instead, it’s better thought of as an attempt to reduce the risk of world conquest by the first AGI that reaches some threshold, to preserve existing corrigibility somewhat past human-level AI, and to postpone the need for a permanent solution until we have more intelligence.


The point of this blog post feels almost too obvious to be worth saying, yet I doubt that it’s widely followed.

People often avoid doing projects that have a low probability of success, even when the expected value is high. To counter this bias, I recommend that you mentally combine many such projects into a strategy of trying new things, and evaluate the strategy’s probability of success.

1.

Eliezer says in On Doing the Improbable:

I’ve noticed that, by my standards and on an Eliezeromorphic metric, most people seem to require catastrophically high levels of faith in what they’re doing in order to stick to it. By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty, without assigning odds in the vicinity of 10x what I started out assigning that the project would work. … But you can’t get numbers in the range of what I estimate to be something like 70% as the required threshold before people will carry on through bad times. “It might not work” is enough to force them to make a great effort to continue past that 30% failure probability. It’s not good decision theory but it seems to be how people actually work on group projects where they are not personally madly driven to accomplish the thing.

I expect this reluctance to work on projects with a large chance of failure is a widespread problem for individual self-improvement experiments.

2.

One piece of advice I got from my CFAR workshop was to try lots of things. Their reasoning involved the expectation that we’d repeat the things that worked, and forget the things that didn’t work.

I’ve been hesitant to apply this advice to things that feel unlikely to work, and I expect other people have similar reluctance.

The relevant kind of “things” are experiments that cost maybe 10 to 100 hours to try, that don’t risk much other than wasted time, and that have on the order of a 10% chance of producing noticeable long-term benefits.

Here are some examples of the kind of experiments I have in mind:

  • gratitude journal
  • morning pages
  • meditation
  • vitamin D supplements
  • folate supplements
  • a low carb diet
  • the Plant Paradox diet
  • an anti-anxiety drug
  • ashwagandha
  • whole fruit coffee extract
  • piracetam
  • phenibut
  • modafinil
  • a circling workshop
  • Auditory Integration Training
  • various self-help books
  • yoga
  • sensory deprivation chamber

I’ve cheated slightly, by being more likely to add something to this list if it worked for me than if it was a failure that I’d rather forget. So my success rate with these was around 50%.

The simple practice of forgetting about the failures and mostly repeating the successes is almost enough to cause the net value of these experiments to be positive. More importantly, I kept the costs of these experiments low, so the benefits of the top few outweighed the costs of the failures by a large factor.
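
To sketch the arithmetic (a minimal example with made-up but plausible numbers – I haven’t measured the actual costs and benefits):

```python
# Toy model of a bundle of self-improvement experiments.
# All numbers are illustrative assumptions, not measurements:
# ~18 experiments, each ~10% likely to produce a lasting benefit,
# each costing ~50 hours, with one success worth ~2000 hour-equivalents.
n_experiments = 18
p_success = 0.10
cost_hours = 50
benefit_hours = 2000

# Evaluated as one strategy, failure is unlikely, even though
# each individual experiment will probably fail.
p_at_least_one = 1 - (1 - p_success) ** n_experiments
expected_successes = n_experiments * p_success
net_value = expected_successes * benefit_hours - n_experiments * cost_hours

print(f"P(at least one success): {p_at_least_one:.0%}")          # ~85%
print(f"expected successes: {expected_successes:.1f}")           # ~1.8
print(f"expected net value: {net_value:+,.0f} hour-equivalents") # +2,700
```

Evaluating the bundle, rather than each experiment, makes the strategy’s ~85% success probability – instead of each experiment’s ~90% failure probability – the number that my subconscious reacts to.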

3.

I face a similar situation when I’m investing.

The probability that I’ll make any profit on a given investment is close to 50%, and the probability of beating the market on a given investment is lower. I don’t calculate actual numbers for that, because doing so would be more likely to bias me than to help me.

I would find it rather discouraging to evaluate each investment separately. Doing so would focus my attention on the fact that any individual result is indistinguishable from luck.

Instead, I focus my evaluations much more on bundles of hundreds of trades, often associated with a particular strategy. Aggregating evidence in that manner smooths out the good and bad luck to make my skill (or lack thereof) more conspicuous. I’m focusing in this post not on the logical interpretation of evidence, but on how the subconscious parts of my mind react. This mental bundling of tasks is particularly important for my subconscious impressions of whether I’m being productive.
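
Here’s a toy Monte Carlo sketch of why the bundling works (the win rate is a made-up number, since, as noted above, I don’t calculate actual probabilities for my trades):

```python
import random

random.seed(0)

p_win = 0.52    # assumed modest skill: 52% of trades are profitable
n_trades = 500  # one bundle = a few hundred trades under one strategy

# A single winning trade is indistinguishable from luck: a coin flip
# "wins" 50% of the time. Averaging a bundle shrinks the sampling noise
# to roughly sqrt(0.25 / 500) ~ 2.2%, so a 2% edge starts to show.
bundles = [
    sum(random.random() < p_win for _ in range(n_trades)) / n_trades
    for _ in range(1000)
]

mean = sum(bundles) / len(bundles)
beat = sum(rate > 0.50 for rate in bundles) / len(bundles)
print(f"average win rate across bundles: {mean:.1%}")  # ~52%
print(f"bundles beating a coin flip: {beat:.0%}")      # ~80%, vs. 52% of single trades
```

With a few hundred trades per bundle, most bundles come out above 50%, while any single trade tells me almost nothing.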

I believe this is a well-known insight (possibly from poker?), but I can’t figure out where I’ve seen it described.

I’ve partly applied this approach to self-improvement tasks (not quite as explicitly as I ought to), and it has probably helped.

Book review: Time Biases: A Theory of Rational Planning and Personal Persistence, by Meghan Sullivan.

I was very unsure about whether this book would be worth reading, as it could easily have focused on complaints about behavior that experts have long known is mistaken.

I was pleasantly surprised when it quickly got to some of the really hard questions, and was thoughtful about which questions deserved attention. I disagree with enough of Sullivan’s premises to reach significantly different conclusions. Yet her reasoning is usually good enough that I’m unsure what to make of our disagreements – they’re typically due to differences of intuition that she admits are controversial.

I had hoped for some discussion of ethics (e.g. what discount rate to use in evaluating climate change), whereas the book focuses purely on prudential rationality (i.e. what’s rational for a self-interested person). Still, the discussion of prudential rationality covers most of the issues that make the ethical choices hard.

Personal identity

A key issue is the nature of personal identity – does one’s identity change over time?


Descriptions of AI-relevant ontological crises typically choose examples where it seems moderately obvious how humans would want to resolve the crises. I describe here a scenario where I don’t know how I would want to resolve the crisis.

I will incidentally express distaste for some philosophical beliefs.

Suppose a powerful AI is programmed to have an ethical system with a version of the person-affecting view – a version which says that only persons who exist are morally relevant, where “exist” refers only to the present time. [Note that the most sophisticated advocates of the person-affecting view are willing to treat future people as real, and only object to comparing those people to other possible futures where those people don’t exist.]

Suppose also that it is programmed by someone who thinks in Newtonian models. Then something happens which prevents the programmer from correcting any flaws in the AI. (For simplicity, I’ll say the programmer dies, and the AI was programmed to accept changes to its ethical system only from the programmer.)

What happens when the AI tries to make ethical decisions about people in distant galaxies (hereinafter “distant people”) using a model of the universe that works like relativity?
