willpower

[Caveat: this post involves abstract theorizing whose relevance to practical advice is unclear.]

What we call willpower mostly derives from conflicts between parts of our minds, often over what discount rate to use.

An additional source of willpower-like conflicts comes from social desirability biases.

I model the mind as having many mental sub-agents, each focused on a fairly narrow goal. Different goals produce different preferences for caring about the distant future versus caring only about the near future.

The sub-agents are typically as smart and sophisticated as a three-year-old (probably with lots of variation). E.g. my hunger-minimizing sub-agent is willing to accept calorie restriction days with few complaints now that I have a reliable pattern of respecting it the next day, but it complained impatiently when calorie restriction days seemed abnormal.

We have beliefs about how safe we are from near-term dangers, often reflected in changes to the autonomic nervous system (causing relaxation or the fight-or-flight reflex). Those changes cause quick, crude shifts in something resembling a global discount rate. In addition, each sub-agent has some ability to demand that its goals be treated fairly.

We neglect sub-agents whose goals are most long-term when many sub-agents say their goals have been neglected, and/or when the autonomic nervous system says immediate problems deserve attention.

Our willpower is high when we feel safe and are satisfied with our progress at short-term goals.
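
To make the moving parts of this model concrete, here is a toy sketch in code. It is purely illustrative: the sub-agent names, discount rates, and the particular weighting rule are arbitrary choices of mine, not claims about how the mind actually computes anything. It just shows how a global safety signal plus a crude fairness bonus can shift the winning choice between short-horizon and long-horizon actions.

```python
# Toy illustration of the sub-agent model sketched above (illustrative only).
from dataclasses import dataclass

@dataclass
class SubAgent:
    name: str
    discount_rate: float   # per-day discount; high = cares only about the near term
    neglect: float = 0.0   # grows while the agent's goal is ignored

    def value(self, payoff: float, delay_days: float) -> float:
        """Discounted value this agent assigns to a payoff arriving after a delay."""
        return payoff * (1.0 - self.discount_rate) ** delay_days

def choose(agents, actions, safety):
    """Pick the action with the highest weighted score across sub-agents.

    safety is in [0, 1]: when low (fight-or-flight), long-horizon agents get
    little weight; when high, they get nearly equal weight. Neglected agents
    get extra weight, a crude stand-in for the "fairness" rule above.
    """
    def weight(agent):
        horizon_factor = safety if agent.discount_rate < 0.1 else 1.0
        return horizon_factor * (1.0 + agent.neglect)

    def score(action):
        return sum(
            weight(a) * a.value(action["payoff"], action["delay_days"])
            for a in agents if a.name in action["serves"]
        )

    return max(actions, key=score)

agents = [
    SubAgent("hunger", discount_rate=0.5),    # cares only about today
    SubAgent("status", discount_rate=0.05),   # medium horizon
    SubAgent("career", discount_rate=0.001),  # long horizon
]
actions = [
    {"name": "snack now", "payoff": 10, "delay_days": 0, "serves": {"hunger"}},
    {"name": "study for a degree", "payoff": 200, "delay_days": 1500,
     "serves": {"career", "status"}},
]
for safety in (0.1, 0.9):
    print(f"safety={safety}: {choose(agents, actions, safety)['name']}")
```

With the low safety value the immediate snack wins; with the high value the long-horizon option does, which mirrors the claim that willpower is high when we feel safe.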

Social status

The time-discounting effects are sometimes obscured by social signaling.

Writing a will hints at health problems, whereas doing something about global warming can signal wealth. We have sub-agents that steer us to signal health and wealth, but not so deliberately that others can see we are signaling. That leads us to exaggerate how much of our failure to write a will is due to the time-discounting type of low willpower.

Video games convince parts of our minds that we’re gaining status (in a virtual society) and/or training to win status-related games in real life. That satisfies some sub-agents who care about status. (Video games deceive us about status effects, but that has limited relevance to this post.) Yet as with most play, we suppress awareness of the zero-sum competitions we’re aiming to win. So we get confused about whether we’re being short-sighted here, because we’re pursuing somewhat long-term benefits, probably deceiving ourselves somewhat about them, and pretending not to care about them.

Time asymmetry?

Why do we feel an asymmetry in effects of neglecting distant goals versus neglecting immediate goals?

The fairness to sub-agents metaphor suggests that neglecting the distant future ought to produce emotional reactions comparable to what happens when we neglect the near future.

Neglecting the distant future does produce some discomfort that somewhat resembles willpower problems. If I spend lots of time watching TV, I end up feeling declining life-satisfaction, which tends to eventually cause me to pay more attention to long-term goals.

But the relevant emotions still don’t seem symmetrical.

One reason for asymmetry is that different goals imply different things for what constitutes neglecting a goal: neglecting sleep or food for a day implies something more unfair to the relevant sub-agents than does neglecting one’s career skills.

Another reason is that for both time-preference and social desirability conflicts, we have instincts that aren’t optimized for our current environment.

Our hunter-gatherer ancestors needed to devote most of their time to tasks that paid off within days, and didn’t know how to devote more than a few percent of their time to usefully preparing for events that were several years in the future. Our farmer ancestors needed to devote more time to 3-12 month planning horizons, but not much more than hunter-gatherers did. Today many of us can productively spend large fractions of our time on tasks (such as getting a college degree) that take more than 5 years to pay off. Social desirability biases show (less clear) versions of that same pattern.

That means we need to override our system 1-level heuristics with system 2-level analysis. That requires overriding the instinctive beliefs of some sub-agents about how much attention their goals deserve, whereas the long-term goals we override to deal with hunger have less firmly established “rights” to fairness.

Also, there may be some fairness rules about how often system 2 can override system 1 agents – doing that too often may cause coalitions within system 1 to treat system 2 as a politician who has grabbed too much power. [Does this explain decision fatigue? I’m unsure.]

Other Models of Willpower

The depletion model

Willpower depletion captures a nontrivial effect of key sub-agents rebelling when their goals have been overlooked for too long.

But I’m confused – the depletion model doesn’t seem like it’s trying to be a complete model of willpower. In particular, it either isn’t trying to explain the evolutionary sources of willpower problems, or is trying to explain them via the clearly inadequate claim that willpower is a simple function of current blood glucose levels.

It would be fine if the depletion model were just a heuristic that helped us develop more willpower. But if anything it seems more likely to reduce willpower.

Kurzban’s opportunity costs model

Kurzban et al. have a model involving the opportunity costs of using cognitive resources for a given task.

It seems more realistic than most models I’ve seen. It describes some important mental phenomena more clearly than I can, but doesn’t quite seem to be about willpower. In particular, it seems uninformative about differing time horizons. Also, it focuses on cognitive resource constraints, whereas I’d expect some non-cognitive resource constraints to be equally important.

Ainslie’s Breakdown of Will

George Ainslie wrote a lot about willpower, describing it as intertemporal bargaining, with hyperbolic discounting. I read that book 6 years ago, but don’t remember it very clearly, and I don’t recall how much it influenced my current beliefs. I think my model looks a good deal like what I’d get if I had set out to combine the best parts of Ainslie’s ideas and Kurzban’s ideas, but I wrote 90% of this post before remembering that Ainslie’s book was relevant.

Ainslie apparently wrote his book before it became popular to generate simple models of willpower, so he didn’t put much thought into comparing his views to others.

Hyperbolic discounting seems to be a real phenomenon that would be sufficient to cause willpower-like conflicts. But I’m unclear on why it should be a prominent part of a willpower model.

Distractible

This “model” isn’t designed to say much beyond pointing out that willpower doesn’t reliably get depleted.

Hot/cool

A Hot/cool-system model sounds like an attempt to generalize the effects of the autonomic nervous system to explain all of willpower. I haven’t found it to be very informative.

Muscle

Some say that willpower works like a muscle, in that using it strengthens it.

My model implies that we should expect this result when preparing for the longer-term future causes our future self to be safer and/or to more easily satisfy near-term goals.

I expect this effect to be somewhat observable with using willpower to save money, because having more money makes us feel safer and better able to satisfy our goals.

I expect this effect to be mostly absent after using willpower to lose weight or to write a will, since those produce benefits which are less intuitive and less observable.

Why do drugs affect willpower?

Scott at SlateStarCodex asks why drugs have important effects on willpower.

Many drugs affect the autonomic nervous system, thereby influencing our time preferences. I’d certainly expect that drugs which reduce anxiety will enable us to give higher priority to far future goals.

I expect stimulants make us feel less concern about depleting our available calories, and less concern about our need for sleep, thereby satisfying a few short-term sub-agents. I expect this to cause small increases in willpower.

But this is probably incomplete. I suspect the effect of SSRIs on willpower varies quite widely between people. I suspect that’s due to an anti-anxiety effect which increases willpower, plus an anti-obsession effect which reduces willpower in a way that my model doesn’t explain.

And Scott implies that some drugs have larger effects on willpower than I can explain.

My model implies that placebos can be mildly effective at increasing willpower, by convincing some short-sighted sub-agents that resources are being applied toward their goals. A quick search suggests this prediction has been poorly studied so far, with one low-quality study confirming this.

Conclusion

I’m more puzzled than usual about whether these ideas are valuable. Is this model profound, or too obvious to matter?

I presume part of the answer is that people who care about improving willpower care less about theory, and focus on creating heuristics that are easy to apply.

CFAR does a decent job of helping people develop more willpower, not by explaining a clear theory of what willpower is, but by focusing more on how to resolve conflicts between sub-agents.

And I recommend that most people start with practical advice, such as the advice in The Willpower Instinct, and worry about theory later.

I started writing morning pages a few months ago. That means writing three pages, on paper, before doing anything else [1].

I’ve only been doing this on weekends and holidays, because on weekdays I feel a need to do some stock market work close to when the market opens.

It typically takes me one hour to write three pages. At first, it felt like I needed 75 minutes but wanted to finish faster. After a few weeks, it felt like I could finish in about 50 minutes when I was in a hurry, but often preferred to take more than an hour.

That suggests I’m doing much less stream-of-consciousness writing than is typical for morning pages. It’s unclear whether that matters.

It feels like devoting an hour per day to morning pages ought to be costly. Yet I never observed it crowding out anything I valued (except maybe once or twice when I woke up before getting an optimal amount of sleep in order to get to a hike on time – that was due to scheduling problems, not due to morning pages reducing the amount of time available per day).

Why do people knowingly follow bad investment strategies?

I won’t ask (in this post) about why people hold foolish beliefs about investment strategies. I’ll focus on people who intend to follow a decent strategy, and fail. I’ll illustrate this with a stereotype from a behavioral economist (Procrastination in Preparing for Retirement):[1]

For instance, one of the authors has kept an average of over $20,000 in his checking account over the last 10 years, despite earning an average of less than 1% interest on this account and having easy access to very liquid alternative investments earning much more.
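
For scale, here is a quick back-of-envelope calculation. The $20,000 balance and the roughly 1% rate come from the quote; the 4% alternative yield is a made-up stand-in for the “very liquid alternative investments”, so treat the result as illustrative only.

```python
# Rough opportunity cost of the checking-account example above.
balance = 20_000
years = 10
checking_rate = 0.01      # "less than 1%" in the quote
alternative_rate = 0.04   # assumed for illustration; not from the source

cost = balance * ((1 + alternative_rate) ** years - (1 + checking_rate) ** years)
print(f"Forgone interest over {years} years: ~${cost:,.0f}")
# -> roughly $7,500 with these assumptions
```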

A more mundane example is a person who holds most of their wealth in stock of a single company, for reasons of historical accident (they acquired it via employee stock options or inheritance), but admits to preferring a more diversified portfolio.

An example from my life is that, until this year, I often borrowed money from Schwab to buy stock, when I could have borrowed at lower rates in my Interactive Brokers account to do the same thing. (Partly due to habits that I developed while carelessly unaware of the difference in rates; partly due to a number of trivial inconveniences).

Behavioral economists are somewhat correct to attribute such mistakes to questionable time discounting. But I see more patterns than such a model can explain: e.g., people procrastinate more over some decisions (whether to make a “boring” trade) than over others (whether to read news about investments).[2]

Instead, I use CFAR-style models that focus on conflicting motives of different agents within our minds.


I use Beeminder occasionally. The site’s emails normally suffice to bug me into accomplishing whatever I’ve committed to doing. But I only use it for a few tasks for which my motivation is marginal. Most of the times that I consider using Beeminder, I either figure out how to motivate myself properly, or (more often) decide that my goal isn’t important.

The real value of Beeminder is that if I want to compel future-me to do something, I can’t give up by using the excuse that future-me is lazy or unreliable. Instead, I find myself wondering why I’m unwilling to risk $X to make myself likely to complete the task. That typically causes me to notice legitimate doubts about how highly I value the result.

My alternate day calorie restriction diet is going well. My body and/or habits are adapting. But the visible benefits are still small.

  • I normally do three restricted days per week (very rarely only two). I eat 800-1000 calories on those days (or 1200-1400 when I burn more than 1000 calories by hiking). On unrestricted days, I try to eat a little more than feels natural.
  • I have an improved ability to bring my weight to a particular target, but the range of weights that feel good is much narrower than I expected. My weight has stabilized to a range of 142-145 pounds, compared to 145-148 last year and an erratic 138-148 in the first few weeks of my new diet. If I reduce my weight below 142, I feel irritable in the afternoon or evening of a restricted day. At 145, I’m on the verge of that too-full feeling that was common in prior years.
  • My resting heart rate has declined from about 70 to about 65.
  • For many years I’ve been waking in the middle of the night feeling too warm, with little apparent pattern. A byproduct of my new diet is that I’ve noticed it’s connected to having eaten protein.
  • I’m using less willpower now than in prior years to eat the right amount. My understanding of the willpower effect is influenced by CFAR’s attitude, which is that occasionally using willpower to fight the goals of one of my mind’s sub-agents is reasonable, but the longer I continue it, the more power and thought that sub-agent will devote to accomplishing its goals. My sub-agent in charge of getting me to eat lots to prepare for a famine can now rely on me, if I’m resisting it today, to encourage it tomorrow; whereas in prior years I was continually pressuring it to do less than it wanted. That makes it more cooperative.

The only drawbacks are the increased attention I need to pay to what I eat on restricted days, and the difficulties of eating out on restricted days (due to my need to control portion sizes and to time my main meals near the middle of the day). I find it fairly easy to schedule my restricted days so that I’m almost always eating at home, but I expect many people to find that hard.

Alternate day calorie restriction seems to be one of the most effective ways of increasing my life expectancy, but it isn’t easy. I tried it about three years ago, but gave up because it interfered with my sleep. I started it again three weeks ago, and this time I seem to be adjusting to it.

One important difference is that this time I’m better informed about what it takes to adjust to the diet. I planned a strict induction phase of 7 down days (about 550 calories) and 7 up days (unlimited food), followed by a less strict pattern of 2-3 days per week on which I’m limited to around 1000 calories a day. (I ended up adding an extra up day after each of the first two down days, then switched to strict alternation for the remainder of the induction phase). The severity of the induction phase may be important in triggering adaptation to this kind of diet.

The second difference is that this time I’ve been obsessive about measuring my food intake to the nearest gram. I suspect that when I intended to eat 1200 calories a day in my prior attempt, I was actually getting at least 1400 calories and fooling myself into thinking I was following the diet. This time I’m using a good scale to weigh each serving.

After the first down day, I slept poorly (as expected), getting impatient for sunrise to bring me an excuse to get up for food. After about the fourth down day, waking with an empty stomach seemed normal enough that it no longer provides a motive to get out of bed, or to get food quickly when I do get out of bed. I hardly notice the feelings of hunger then, even though I ought to be hungrier than I was late in the previous day, when I did notice some of the standard hunger feelings. My sleep isn’t quite back to normal, but it seems close to normal and improving.

I’ve been feeling full about 50% of the time. I felt noticeably hungry about 30% of the time at first, and now it’s more like 20% of the time. Hunger feels a bit less important now than it used to feel (i.e. it affects my attention less).

Weight loss wasn’t an important motive for changing my diet, but I hoped I would lose about 7 pounds. I lost at least 5 pounds by the end of the 5th down day (my weight fluctuated enough that it’s hard to evaluate it precisely). I couldn’t comfortably eat enough on the up days to make up for what I lost on the 550 calorie days, even when I became mildly alarmed at my rate of weight loss.

Then my weight rebounded within a few days, without any apparent change in my diet, to roughly what it was at the start. The obvious guess is that my metabolism slowed down to compensate for the reduced calories. I did feel noticeably colder in bed after down days. I also felt less mental energy, and when doing an easy hike on the day after the 6th down day I felt a need to take rest breaks that was unusual in that it wasn’t caused by anything like muscle fatigue.

During the induction phase, I practiced strict protein fasting (< 15 grams of protein per day) on down days due to guesses that protein restriction is more effective at causing beneficial metabolic changes, which might cause faster psychological adaptation. My results seem to provide weak evidence in support of this guess. My diet on down days was mostly sweet potato and lettuce, with modest amounts of other vegetables and sugar-free chocolate. This provided more bulk to fill my gut than is typical for this kind of diet, but that was likely offset by the lack of protein-related satiety. I’m not restricting protein now that I’m out of the induction phase (although I expect to do so maybe once a month).

My heart rate variability mysteriously increased after the first down day, then declined to a much lower than average level after the fourth down day, and has fluctuated a lot since then (averaging somewhat below normal).

Why did I have enough willpower to get this far, when I probably didn’t have the willpower needed to do it right three years ago?

One factor is that I now consider the CFAR community to be an important tribe to belong to, so my sense of self-identity has changed to attach more importance to being able to make big changes to my life.

Another factor is having information that led me to be somewhat confident that by a specific, not too distant, date it would become a good deal easier.

A third factor is being more obsessive about measuring how well I was complying with the rules I set down.

The induction phase cost me a fair amount of productivity. For 17 days I wasn’t close to having enough willpower/ambition to start writing a blog post (and had similar problems with most other non-routine tasks). But now I feel that writing this post is easier than normal. It’s too early to tell whether that means I have more mental energy than before.

I don’t know how to get strong evidence about whether it is worth the effort. I seem to feel more self-efficacy. I now think I can set my weight to any reasonable target simply by changing my calorie target on 2 or 3 down days per week. But in order to be clearly worthwhile it needs to improve my long-term health. I won’t know that for quite a while.

Book review: The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do To Get More of It, by Kelly McGonigal.

This book starts out seeming to belabor ideas that seem obvious to me, but before too long it offers counterintuitive approaches that I ought to try.

The approach that I find hardest to reconcile with my intuition is that self-forgiveness over giving in to temptations helps increase willpower, while feeling guilt or shame about having failed reduces willpower, so what seems like an incentive to avoid temptation is likely to reduce our ability to resist the temptation.

Another important but counterintuitive claim is that trying to suppress thoughts about a temptation (e.g. candy) makes it harder to resist the temptation. Whereas accepting that part of my mind wants candy (while remembering that I ought to follow a rule of eating less candy) makes it easier for me to resist the candy.

A careless author could have failed to convince me this is plausible. But McGonigal points out the similarities to trying to follow an instruction to not think of white bears – how could I suppress thoughts of white bears if some part of my mind didn’t activate a concept of white bears to monitor my compliance with the instruction? Can I think of candy without attracting the attention of the candy-liking parts of my mind?

As a result of reading the book, I have started paying attention to whether the pleasure I feel when playing computer games lives up to the anticipation I feel when I’m tempted to start one. I haven’t been surprised to observe that I sometimes feel no pleasure after starting the game. But it now seems easier to remember those times of pleasureless playing, and I expect that is weakening my anticipation of rewards.

Book review: Drive: The Surprising Truth About What Motivates Us, by Daniel H. Pink.

This book explores some of the complexities of what motivates humans. It attacks a stereotype that says only financial rewards matter, though it exaggerates the extent to which people actually adopt that fallacy. His style is similar to Malcolm Gladwell’s, but with more substance than Gladwell.

The book’s advice is likely to cause some improvement in how businesses are run and in how people choose careers. But I wonder how many bosses will ignore it because their desire to exert control over people outweighs their desire to create successful companies.

I’m not satisfied with the way he and others classify motivations as intrinsic and extrinsic. While feelings of flow may be almost entirely internally generated, other motivations that he classifies as intrinsic seem to involve an important component of feeling that others are rewarding you with higher status/reputation.

Shirking may have been an important problem a century ago for which financial rewards were appropriate solutions, but the nature of work has changed so that it’s much less common for workers to want to put less effort into a job. The author implies that this means standard financial rewards have become fairly unimportant factors in determining productivity. I think he underestimates the importance they play in determining how goals are prioritized.

He believes the change in work that reduced the importance of financial incentives was the replacement of rule-following routine work with work that requires creativity. I suggest that another factor was that in 1900, work often required muscle-power that consumed almost as much energy as a worker could afford to feed himself.

He states his claims vaguely enough that they could be interpreted as implying that broad categories of financial incentives (including stock options and equity) work poorly. I checked one of the references that sounded like it might address that (“When performance-related pay backfires”), and found it only dealt with payments for completing specific tasks.

His complaints about excessive focus on quarterly earnings probably have some value, but it’s important to remember that it’s easy to err in the other direction as well (the dot-com bubble seemed to coincide with an unusual amount of effort at focusing on earnings 5 to 10 years away).

I’m disappointed that he advises against encouraging workers to compete against each other, without offering evidence about the effects of such competition.

One interesting story is the bonus system at Kimley-Horn and Associates, where any employee can award another employee $50 for doing something exceptional. I’d be interested in more tests of this – is there something special about Kimley-Horn that prevents abuse, or would it work in most companies?

Book review: Breakdown of Will, by George Ainslie.

This book analyzes will, mainly problems connected with willpower, as a form of intertemporal bargaining between a current self that highly values immediate temptation and future selves who prefer that current choices be more far-sighted. He contrasts simple models of rational agents who exponentially discount future utility with his more sophisticated and complex model of people whose natural discount curve is hyperbolic. Hyperbolic discounting causes time-inconsistent preferences, resulting in problems such as addiction. Intertemporal bargains can generate rules which bundle rewards to produce behavior more closely approximating the more consistent exponential discount model.
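
A minimal numerical sketch (using the textbook forms of the two discount curves, with arbitrary parameters) shows the time-inconsistency he builds on: with exponential discounting, the ranking of a smaller-sooner versus a larger-later reward stays the same no matter how close they are; with hyperbolic discounting, the ranking flips as the smaller reward gets close.

```python
# Preference reversal under hyperbolic but not exponential discounting.
# Standard textbook curve shapes; parameter values are arbitrary.
import math

def exponential(amount, delay, r=0.1):
    return amount * math.exp(-r * delay)

def hyperbolic(amount, delay, k=1.0):
    return amount / (1 + k * delay)

small, large = 50, 100   # smaller-sooner vs larger-later reward
gap = 5                  # the larger reward arrives 5 days after the smaller one

for days_until_small in (10, 0.5):
    for curve in (exponential, hyperbolic):
        v_small = curve(small, days_until_small)
        v_large = curve(large, days_until_small + gap)
        pick = "larger-later" if v_large > v_small else "smaller-sooner"
        print(f"{curve.__name__:11s} {days_until_small:>4} days out -> {pick}")

# The exponential curve picks the larger-later reward at both distances; the
# hyperbolic curve switches to the smaller-sooner reward once it is imminent.
```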

He also discusses problems associated with habituation to rewards, and strategies that can be used to preserve an appetite for common rewards. For example, gambling might sometimes be rational if losing money that way restores an appetite for acquiring wealth.

Some interesting ideas mentioned are that timidity can be an addiction, and that pain involves some immediate short-lived reward (to draw attention) in addition to the more obvious negative effects.

For someone who already knows a fair amount about psychology, only small parts of the book will be surprising, but most parts will help you think a bit more clearly about a broad range of problems.

Book review: Mindless Eating: Why We Eat More Than We Think, by Brian Wansink.

This well-written book might help a few people lose a significant amount of weight, and many to lose a tiny bit.

Some of his advice seems to demand as much willpower for me as a typical diet (e.g. eat slowly), but he gives many small suggestions and advises us to pick and choose the most appropriate ones. There’s enough variety and novelty among his suggestions that most people are likely to find at least one feasible method to lose a few pounds.

A large fraction of his suggestions require none of the willpower that a typical diet requires, but will be rejected by most people because their egos will cause them to insist that only people less rational than they are make the kinds of mistakes that the book’s suggestions will fix.

Most of the book’s claims seem to be backed up by careful research. But I couldn’t find any research to back up the claim that approaches which cause people to eat 100 calories per day less will cause them to lose 10 pounds in ten months. He presents evidence that such a diet doesn’t need to make people feel deprived over the short time periods they’ve been studied. But there’s been speculation among critics of diet books that our bodies have a natural “set point” weight, and that diets which work for a while have no long-term effect because lower body weights cause an increased desire to return to the set point. This book offers only weak anecdotal evidence against that possibility.
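
As a rough plausibility check on the arithmetic (which says nothing about long-term adherence or set points), the common 3,500-calories-per-pound rule of thumb does put the claim in the right ballpark:

```python
# Back-of-envelope check of the "100 calories/day less -> 10 pounds in ten
# months" claim, using the common (and admittedly crude) 3,500 kcal-per-pound
# rule of thumb.
daily_deficit = 100            # kcal/day
days = 10 * 30                 # roughly ten months
kcal_per_pound = 3500

pounds = daily_deficit * days / kcal_per_pound
print(f"~{pounds:.1f} pounds")  # ~8.6 pounds, close to the book's 10
```
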
But even if it fails as a diet book, it may help you understand how the taste of your food is affected by factors other than the food itself.