389 Comments

> I looked for the full text of Galanter and Pliner, but could not find it. I was however able to find the first two pages, including the abstract.

Your mistake was probably checking LG/SH for the paper itself. But in this case, SH has failed to associate the chapters with paper entries, even though it has the book: http://libgen.rs/book/index.php?md5=4B42C4388546A9A6A706368EE8AFA063 Anyway, here's just the paper: https://www.gwern.net/docs/statistics/decision/1974-galanter.pdf


On George Floyd and the Identifiable Victim Effect: I think a huge amount of the difference was external pressure, specifically the effects of COVID-19 lockdowns, and simmering discontent over Breonna Taylor's death, which shortly preceded US lockdowns.

Some of it is also bias in the people who are inclined to protest. There are notable protests over something that accounts for less than a tenth of a percent of all murders in the US. Vilfredo Pareto is rolling in his grave.


There's another effect you didn't mention, too. You mentioned how people generally refuse an offer where someone offers to flip a coin and you get $60 if heads and lose $40 if tails.

If a stranger offered me that deal, I'd very reasonably assume that the coin was weighted to be very likely to land tails, or one of those coins with tails on both sides or something.

I assume there's a slight bias toward this kind of thing that people have learned from untrustworthy people. "What? You want to give me something that you say is equal value? Now I suspect that what you offer is worse or what I have is better than I think, so I'll keep what I have".


Some great points here. In particular:

> I understand why some people would summarize this paper as “loss aversion doesn’t exist”. But it’s very different from “power posing doesn’t exist” or “stereotype threat doesn’t exist”, where it was found that the effect people were trying to study just didn’t happen, and all the studies saying it did were because of p-hacking or publication bias or something. People are very often averse to losses. This paper just argues that this isn’t caused by a specific “loss aversion” force. It’s caused by other forces which are not exactly loss aversion. We could compare it to centrifugal force in physics: real, but not fundamental.

As you note, it's very important to distinguish the effect itself from the theoretical mechanism we think is underlying the effect. It's of course possible (and I think likely) that the long list of "cognitive biases" will be compressed into a smaller set of principles, but that's a distinct claim from saying the effect(s) used to posit those biases in the first place doesn't exist or doesn't replicate.

Re: the "size" of nudge effects, it always comes back to what your baseline is––and even what the null hypothesis is. While I think it's important to be careful about not overstating any effect, I also worry that sometimes effects are dismissed for being too "small". From a theoretical perspective: if a consensus model says the effect shouldn't exist at all, then any effect size is interesting and important and potentially disconfirms that model (upon replication, etc.). And from an applied perspective: yes, a nudge is no substitute for actually designing a good product, and it's entirely possible for companies to overspend on small nudges––but nudges can still be a useful tool in the toolbox.

To your point, I guess I'd just urge those nudges to be grounded in generalizable principles where possible, and perhaps there's a dearth of carefully articulated, quantitative theoretical models in the social sciences.


The thing about the disutility exponent was weird and not clearly related to loss aversion. I think what they are saying is that if you gain x dollars you seem to value it at a utility of a*x^b, and if you lose x dollars you consider it a disutility of c*x^d (for some constants, a,b,c,d). The claim that they make (if I understand correctly) is that d > b.

So is this loss aversion? Well, would you get more utility from gaining x dollars than you would get disutility from losing x dollars? Well it depends on what x is (and on the exact values of a,b,c,d). What you know from d > b is that if x is sufficiently large, the loss disutility is bigger than the gain utility and that if x is sufficiently small, the gain utility is greater than the loss disutility.
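To put numbers on that crossover, here's a quick sketch with made-up constants (a=2, b=1 for gains; c=1, d=2 for losses, so d > b as the paper claims):

```python
# Power-law utility of a gain and disutility of a loss of x dollars,
# with illustrative constants chosen so the loss exponent d exceeds the gain exponent b.
a, b = 2.0, 1.0   # gain:  utility    = a * x**b
c, d = 1.0, 2.0   # loss:  disutility = c * x**d

def gain_utility(x):
    return a * x**b

def loss_disutility(x):
    return c * x**d

# Small stake: the gain's utility dominates; large stake: the loss's disutility dominates.
print(gain_utility(1), loss_disutility(1))   # 2.0 1.0   -> gain wins
print(gain_utility(5), loss_disutility(5))   # 10.0 25.0 -> loss wins
```

With these (entirely made-up) constants the curves cross at x = 2: below it you'd take the symmetric bet, above it you'd refuse.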

Is this loss aversion? Not really.


One difference between Floyd and the other 10 unarmed black guys per year is that we have full video of Floyd not being threatening, and a long time period where things could have been resolved in some other manner. For the others, it's easy for us to be skeptical and wonder, were they reaching for their pockets? Did the police know them from other incidents? Were they wearing gang regalia? Etc.

And of course when you see it all on video, it is natural to scale things up in your mind, wondering how many of the other incidents were similar, but didn't get filmed for whatever reason.


Well, I sure hope you cherry-picked those quotes from Hreha, because the collective impression I get from them is that he's an axe-grinding narcissist. I'm interested in this stuff, but the obnoxious tone of the quotes has persuaded me I can think of a better use of the next 10 minutes. What a putz.


Don't you mean... "Bhrehavioral Economics"? He-yo!


I think George Floyd to Mary is apples to oranges. Mary is a picture of a probably fictional woman, and her problem is that she's struggling with money. George Floyd was a 10-minute video of a real person, and the problem is that he was murdered by someone who would probably have gone free without all the media attention and protests. I'd also add that the George Floyd incident happened at a strange time in the US: it played into an ongoing story about the relationship between the police and black people, and also a ton more people had suddenly found themselves with free time to go to protests and stuff.

Maybe a better comparison would be: if you watch a short video following a day in the life of a single mother, are you more likely to donate than if you watched a lecture about poverty and single-motherhood? I bet you're at least more likely to have an emotional response. But at that point, is it still the identifiable victim effect? It might be comparable to Hreha's claim about loss aversion: it's real when you turn up the volume knob, but not at "nudging" levels.


Tentatively on the identifiable victim: Maybe inspiring help isn't the same thing as building anger?

The Innocence Project, which helps falsely convicted people get out of prison, doesn't get nearly as much attention as BLM, even though the Innocence Project is helping specific individuals.

https://innocenceproject.org/


I think your intuitive reluctance (and your medical school friends') to guess on your test is probably founded in what Ole Peters has been talking about with ergodicity economics for a while now (individual actors don't experience the average across time).

https://www.bloomberg.com/news/articles/2020-12-11/everything-we-ve-learned-about-modern-economic-theory-is-wrong


Very well said -- my feeling pre-Ariely was that behavioral economics was a much more rigorous field than social psychology, and I agree that this continues to seem like the case.

Nudges though! My feeling about nudges *as a policy project* rather than as a research project is that they were an intriguing but failed idea. Retrospectively they are a strange thing for governments to focus on. Can nudges help you maintain a park or build a railway or fight a war or administer a social insurance program or do any of the other "big" things that government does? Clearly not! So why were some of the smartest people in US government 15 years ago so focused on nudges? Maybe because, in the US, the real problems were seen as unfixable and so nudging was the best you could do...and so nudges were overstated to the point where they would not just make incremental benefits but solve real problems.

In this sense, I would argue that even if a 1% gain is a big deal in absolute terms, nudges are a bad use of a scarce resource (political will and executive initiative) that in a sane world would be invested in much higher-ROI projects.


My explanation for Hurricane Floyd is that it was caused by an assumption from the media that Trump was toast after he mishandled coronavirus and they could let it out into full flower.


My perception of my own loss aversion is that it's not so much losses that I'm averse to, but situations where I feel like an idiot. Losing even a small amount of money will bother me, if I lost it by making a silly decision.

I don't enjoy gambling, because I know that I'll get more displeasure from losing $100 gambling ("I'm such an idiot, why did I gamble?") than pleasure from winning $100 by gambling ("whoop de freakin' doo, a hundred bucks, not exactly life-changing money, is it?").

But I know that there are some people for whom this is reversed; the pleasure of winning $100 is more significant than the displeasure of losing $100, and these are the people you'll find filling up casinos. Not all gamblers are idiots who don't understand probability, some of them just have a mildly broken sense of loss aversion.


Prospect Theory is touted as superior to Expected Utility Theory, and it has more parameters and so can fit data better within any given sample. But Prospect Theory parameter estimates don't generalize across choice situations, even within subject. Estimate them in one experiment, put the subject in a different experiment, and you will get different estimates. One explanation for this problem is here: file:///Users/apple/Downloads/Stewart_Canic_Mullett_2020.pdf


I'm out on behavioral economics. It's easy to make an experiment that shows how dumb and irrational people are for not trading away their old coffee mug for five dollars when it's "worth" less than that. In the real world, simple heuristics (if it ain't broke, don't fix it / a new one might not be as good) are very effective and seem to add up to something approximating rational behavior.

For example it's well known that building / expanding roads doesn't decrease traffic congestion - more people drive until the level of congestion has reached an equilibrium. Things like that seem better modeled by rational behavior than anything from behavioral economics.


Gerd Gigerenzer provides a better explanation than Kahneman and Tversky.

https://en.wikipedia.org/wiki/Gerd_Gigerenzer

"Gigerenzer investigates how humans make inferences about their world with limited time and knowledge. He proposes that, in an uncertain world, probability theory is not sufficient; people also use smart heuristics, that is, rules of thumb. He conceptualizes rational decisions in terms of the adaptive toolbox (the repertoire of heuristics an individual or institution has) and the ability to choose a good heuristic for the task at hand. A heuristic is called ecologically rational to the degree that it is adapted to the structure of an environment."

He's mentioned in "The Undoing Project" by Michael Lewis. Apparently Kahneman and Tversky hated him. No wonder. He provides a simpler but far less dramatic explanation than they do.

His books are well worth a read.

What's described in the article above shows that there are more complex heuristics for loss aversion. It's not that people are illogical, but instead that they are working in a world of constant uncertainty, where you know more about what you actually have.


Would you consider doing a piece that highlights some meaningful good science?

I’m sorta at the point where my trust in academics is at an all time low. It might be helpful to signal boost ethical people doing good work.

I remember at one point I had to stop reading Andrew Gelman’s blog because it’s just so demoralizing hearing how my tax dollars are fueling these petty narcissists to propagate lies.


It's worth observing that George Floyd's death was videotaped and VERY extreme. You can imagine a police officer, in the heat of the moment, accidentally shooting a man he believed was a threat, making an honest mistake. You can't imagine kneeling on somebody's neck for several minutes while he begs for his mother to be an honest mistake. Comparing that to some bad experimental charity ad is sort of like comparing The Godfather to a home movie I made in elementary school. One is going to have a bigger impact than the other.


My 9th grade (I think) math teacher spent a bit of time with us on SAT prep. Back then (the late 80s) the SAT was all multiple choice. Right answers counted for 1 point and wrong answers counted for some fraction of a negative point. If you straight up guessed randomly for each question you'd do worse than simply not answering. But what my math teacher pointed out was that if you could eliminate 1 possible answer and _then_ guess, you'd overall increase your score, at least statistically. I took this to heart because it was mathematically obvious, but I wonder how many other students in my class did the same.

(Note that I might be misremembering the exact details. Maybe you had to eliminate two answers and then guess? It's been a while!)
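The arithmetic checks out, assuming the classic scoring (five choices, +1 for a right answer, -1/4 for a wrong one; my assumption, since I'm not certain of the details either):

```python
# Expected value of guessing on an old-SAT-style multiple-choice question:
# +1 for a right answer, -1/4 for a wrong one (assumed scoring),
# after narrowing the question down to n_remaining equally likely choices.
def guess_ev(n_remaining, penalty=0.25):
    p_right = 1 / n_remaining
    return p_right * 1 + (1 - p_right) * (-penalty)

print(guess_ev(5))  # 0.0    -> pure random guess exactly breaks even with skipping
print(guess_ev(4))  # 0.0625 -> eliminate one answer and guessing is now positive-EV
```

Under that scoring a blind guess is break-even rather than strictly worse than skipping, but the teacher's main point stands: eliminating even one answer makes guessing a statistical win.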


Forgive my ignorance, but isn't loss aversion just another way of saying that money has diminishing marginal utility? If I have $10k in the bank and need it for, say, a car down payment next month, then losing $10,000 is going to be way more painful than winning an additional $15k would be pleasurable. Isn't that obvious? Haven't economists known about diminishing marginal utility since the marginal revolution, or am I missing something?

As for the nudging example, I don't know why economists would be surprised to learn that incentives matter. If you reward people to do x, they're more likely to do x. Is that behavioral economics or just economics?

The work on framing of choices for e.g. how much to tip is interesting, although having worked in marketing, specifically in the role of trying to optimize websites for profitability, the idea of framing prices is very well known, although I don't know if marketers or economists figured it out first. There are many (infinite?) variables on a page and they can all have some effect on the rate of people who complete some action, and most tech companies with a lot of traffic have a formal A/B testing program to figure out the best possible configuration of elements on a page or in a sequence of pages.
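For concreteness, the workhorse of those A/B programs is something like a two-proportion z-test; here's a minimal sketch with made-up conversion counts (not data from any real test):

```python
import math

# Two-proportion z-test for a hypothetical A/B test:
# did variant B's conversion rate differ significantly from variant A's?
def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                          # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # standard error under H0
    z = (p2 - p1) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: A converts 500/10000 visitors, B converts 560/10000.
z, p = two_prop_z(500, 10_000, 560, 10_000)
```

With these made-up counts the difference hovers right around the conventional p = 0.05 threshold, which is exactly why real programs pre-register sample sizes before peeking.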

I suppose that economists could point out various instances of "irrationality" in customer behavior, like Scott's tipping example, but sometimes the rational choice is to just follow the usual pattern and go with the flow.

This feels like a case of economists assuming (incorrectly) that people are only optimizing for money, which seems silly. When tipping, for example, part of the equation is "how much time should I spend thinking about this?" but also "what will the driver think of me if I tip x".

There's an additional factor, which is "how much is it culturally appropriate to tip for this kind of service?" A lot of people, I suspect, will take their cue on this last question from the options presented to them by an app, which in some cases can lead to some odd things like tipping the Uber driver more than the Grubhub driver. But I dunno, saying "the way options are presented to people will affect which option they choose" just seems like something that's rather obvious.


>You didn’t need Kahneman and Tversky to tell you that people sometimes make irrational decisions...

I dunno, maybe I do.

It's always seemed to me that quite a lot of what people describe as irrationality can also be described as cases where heuristics that roughly approximate rationality happen to conflict with more careful reasoning- like how loss aversion of large sums roughly approximates Kelly betting, or how that famous study about people deferring to an apparent group consensus about which of two lines was longer roughly approximates outside view reasoning.

It also seems to me that the question of when a person should rely on these heuristics may be more complicated than just "use careful reasoning when precision is required and time allows, and rely on heuristics otherwise". When a lot of people rely on the same heuristics, it's pretty easy to see what sort of risks those heuristics entail- but when you act on your explicit reasoning, you're inventing something new, and the risk of relying on that can be harder to judge. Sometimes, you know that a heuristic is particularly risky in a given situation, so ignoring that in favor of a reasoned alternative is the obvious choice. Other times, you can see that relying on a heuristic is very low-risk, so that while relying instead on careful reasoning might lead to a better outcome, doing so involves taking on more risk. An astronaut who follows a checklist is safer than one who invents their own procedures, even though the checklist is only a rough approximation of the ideal way to fly a spaceship.

So if it can sometimes be the correct instrumental choice to rely on a heuristic over reason, does it make sense to single out any negative consequences of that choice and say that they're the result of "irrationality"?

To some extent, of course, that's just a semantic question- but I think our ordinary use of the word can lead to real confusion. Person A might say "I've reasoned carefully about this situation, and my conclusion conflicts with your intuitive judgement, so I think your judgement is irrational." And then Person B might be like "I'm relying on an empirically very reliable rule of thumb, and I believe I have good reason to trust that more than your or my understanding of the particulars, so I think my judgement is not irrational." A and B might believe they have a disagreement, when in fact they're just each using the word "irrational" to describe different things.

So, that's the issue I have with "people sometimes make irrational decisions". I'm not entirely convinced that "irrational" as it's commonly understood is a natural category- it seems to conflate things as dissimilar as mistakes and the negative consequences of useful heuristics, and imply a non-existent common cause. I think we may need a completely different framework that would consign our current use of that word to the same bin as "phlogiston" and the four humors.


Regarding George Floyd vs. Mary the Single Mother, the confounding factor could be media saturation. Mary the Single Mother is some anonymous woman from a stock photo. George Floyd is a character backed by seemingly endless media campaigns propagated by organizations with deep pockets and vast amounts of man-hours to spare. It's only natural that Mary would pale in comparison.


The suggested tipping on credit card transactions has become so insane, that I have switched to paying in cash.


I would keep my Denver quarter and never exchange it for a Philadelphia quarter, but this is simply because Denver is a vastly higher-quality city than Philadelphia. If I had been given a Philadelphia quarter initially, I would've leapt at the chance to exchange it for a Denver quarter.


> It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”.

This is a really weird example statistic to provide right after postulating that Michael Brown "got victimized".

> Somewhere there’s an answer to the George Floyd vs. Mary The Single Mother problem.

My take is basically that George Floyd was picked as the mascot for something that was happening anyway. His causal effect is something near zero - rather, people who were looking for something happened to find him.


My explanation for George v. Mary: the Mary ad is "You should help her." The implied George ad is "you should help all possible future Georges." They don't seem at all commensurate.


"G&R are happy to admit that in many, many cases, people behave in loss-averse ways, including most of the classic examples given by Kahneman and Tversky. They just think that this is because of other cognitive biases, not a specific cognitive bias called “loss aversion”. They especially emphasize Status Quo Bias and the Endowment Effect."

As Scott implied, these seem more like explanations of the mechanism behind loss aversion than a refutation of loss aversion. It's like if someone observed a rainbow, and someone else explained the rainbow as the result of refraction of light by water droplets. The physical explanation is a confirmation of the existence of rainbows, not a refutation!


A big chunk of what behavioral economists call "loss aversion" is probably what normal economists call "marginal utility". As you alluded to at one point, the college students who are asked about 50/50 chances to win $60 or lose $40 are highly likely to be thinking something like "If I win $60 I get to eat a nicer meal or two, but if I lose $40 I can't fill up my gas tank this week".

Put more generally, the value of your last $100 is considerably higher than the value of your 10,001-10,100th dollars, and loss aversion includes that factor.

Given the result with millionaires and loss aversion down to $20 I'm guessing there's some part of loss aversion that is separate from marginal utility, but it's at least a big chunk of it.

Or possibly there's just a high overlap between people who don't like losing or spending money and people who have high net worth.


FWIW I am a strong believer that basically all food delivery people (outside of maybe for a party?) do roughly the same job so I make it a point to always tip the same amount without any consideration for the cost of the food I ordered. This amount used to be 3 dollars but I recently increased it to 4 dollars since it's been about ten years since I started ordering delivery food and I figured I should try to keep pace with inflation. With the newer food delivery services though the service range is often very large so I will tip a bit more if the estimated drive time gets above 20 minutes.


Mary the single mother vs american single mothers just looks like scope insensitivity to me? Admittedly I don't know how large an effect you usually see from that when comparing options side to side. Or is scope insensitivity also fake?


On identifiable victims: sometimes I write about a big problem and it feels stronger (emotionally) to write in detail about a single instance so the reader can make a personal connection, and sometimes it feels stronger to write with big numbers to emphasize the problem’s scope. They’re different tools that do different things, as I’ve learned in every writing class since high school.

It seems insane to me that anybody would summarize a distinction like this as either “there is a constant ‘effect size’ that makes personal connections stronger than numbers” or “the effect doesn’t exist, so they’re both the same”.

Kind of on the same level as asking whether making right turns or left turns got drivers closer to their destination — it may be a fun anecdote if one turns out to be more common, but if you really want to learn something the heterogeneity is the whole point.


Typo? Kahneman and Tversky just sort of threw all all this stuff ... [should it read as "threw away all this stuff"?]


I think it's interesting that you miss a possible explanation for your medical school friends' behaviour – honesty. The 'don't know' option is clearly there to emphasise that it is better, as a doctor, to admit you don't know something than to guess. They didn't align the scores correctly for that, as it's a secondary goal of the test, but I think many if not most people would understand the point and be reluctant to guess wrongly as it has a certain unethical feeling – it certainly does to me.

The idea that the test was just an optimization problem for point scoring and did not have any ethical ramifications can be a blind spot of behavioural economics and rationalism in general. You try to dismiss this with 'the average medical student would sell their soul for 7.5% higher grades' but I don't think they would.


"Relatedly, Uber makes $10 billion in yearly revenue." Cory Doctorow says it's less: https://pluralistic.net/2021/08/10/unter/#bezzle-no-more


1. A nonlinear utility function for wealth is often perfectly rational. The mathematical solution to how to choose the sizes of positive-expectation bets so as to grow your bankroll as quickly as possible in the long run, is called the Kelly criterion. It is based on a logarithmic utility function. You choose bets so as to maximize E(log(wealth)).

2. I think the answer to the George Floyd question is the viral video. If instead of a video it was a text summary in some local Minneapolis newspaper, we would probably not have heard of George Floyd.

3. My AP physics teacher was adamant that there is no such thing as centrifugal force. Instead there is centripetal force acting against inertia.

(I would gladly trade 10% of my physics/chemistry/math ability for 10% of Scott's writing ability)
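To make point 1 concrete: for a simple binary bet, maximizing E(log(wealth)) over the fraction wagered gives the standard Kelly formula f* = p - (1-p)/b, where b is the net odds received on a win. A quick sketch (the textbook formula, applied to Scott's coin-flip numbers):

```python
# Kelly fraction for a bet paying b-to-1 on a win with win probability p:
# maximizing E[log(wealth)] gives f* = p - (1 - p) / b.
def kelly_fraction(p, b):
    return p - (1 - p) / b

# The post's coin flip: risk $40 to win $60 on a fair coin -> net odds b = 60/40 = 1.5.
f = kelly_fraction(0.5, 1.5)  # ~1/6 of bankroll
```

So a log-utility bettor takes the $60/$40 flip only when the $40 at risk is no more than about a sixth of their bankroll, which is one rational-looking reason to refuse it.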


The George-vs-Mary question is like asking why some gofundmes are successful and why some aren't, which mostly seems to be a question of "do you have a social network that's willing and able to fight for you and recruit more help (which will in turn recruit more help, and so on), or don't you?" A strong cause can help expand the social graph, but it is neither necessary nor sufficient (even life-or-death gofundmes have been known to fail).


I enjoyed this article but one thing stood out.

"Whoever decided on that grocery gift card scheme was nudging, whether or not they have an economics degree - and apparently they were pretty good at it. "

I disagree! They weren't nudging at all! They were applying standard neoclassical economics to pay someone to do something they wouldn't otherwise do.

Thaler and Sunstein describe a nudge this way (emphasis added):

A nudge, as we will use the term, is any aspect of the choice architecture that alters people's behavior in a predictable way *without* forbidding any options or *significantly changing their economic incentives.*

Paying people (with cash or groceries) to get vaccinated is significantly changing their economic incentives.

And in fact, one standard behavioural economic concept (crowding out intrinsic motivation) might suggest that one shouldn't pay for vaccinations, as then you crowd out the intrinsic motivation (we owe other citizens a duty to be vaccinated) with cash/groceries.

This mistake is made all the time - people describe standard economic analysis as behavioural economics, or "nudges".

I'll give you some examples in vaccination that I think *would* qualify as nudges:

- instead of paying 10 000 people $10 each to get vaccinated, run a lottery and pay one person drawn at random $100 000

- Instead of asking people to opt in to a vaccine, ask them to opt out.

- send them a letter telling them how many people on their street have been vaccinated (if it's a lot. If it's not... maybe don't).


Regarding Galanter and Pliner, what they focus on is the *exponent* of the curves, and they find that the *exponent* of losses is larger than the *exponent* of gains. Scott summarizes this as "loss aversion", but this is not what it means.

What does an exponent tell you? Let us take exponent 1 versus 2 for concreteness. So the curve for gains looks rather like f(x) = const*x, while the curve for losses looks rather like g(x) = const2 * x^2.

Loss aversion would tell you that f(x) < g(x). But this is a completely different question, and the exponents don't tell us this. If you want to know whether f(1$) < g(1$), then the thing that matters are the two constants const and const2. This is exactly the thing that the exponent analysis tried hard to remove from the picture, because the study wanted to know the exponents!

What exponents do tell us (if the results are scalable to a wide range), it is that we have f(x) < g(x) for *very large* values of x, because a quadratic curve grows faster than a linear one, and at some point the constants will not matter anymore.

Going back to Hreha, he claims

"...the early studies of utility functions have shown that while very large losses are overweighted, smaller losses are often not."

I don't know whether this claim is true or not, but it is absolutely compatible with the exponents found in G&P. In fact, Hreha's claim *requires* that the exponent of losses is larger than the one for gains. (Even more, if the curves were truly of the form const * x and const2 * x^2, then it would imply Hreha's claim, because the function const2 * x^2 is smaller than const * x for very small positive x. But this would require that the fit is accurate for a specific narrow range of x, and that's probably not what the fit was optimized for.)

I buy Scott's analysis overall. The above is a subtle point, which probably got lost at some stage of iterated citations, and apparently it was not important for Kahneman and Tversky anyway. But in this detail, Scott's analysis is wrong.


For someone so determined to expose loss aversion as pseudo-science, Hreha was coming across as kind of anti-science. I couldn't quite put my finger on how until I got to

"In my experience, creative solutions that are tailor-made for the situation at hand *always* perform better than generic solutions based on one study or another."

Which, sure: common-sense N=1 ideas that you can't really test but "come on, we all know it works" may be the right strategy sometimes (something something seeing like a state). But it's not exactly a moral high ground to demand extreme rigor from others, especially when at least they are trying.


Great review; thanks. My experience (and decent evidence in the literature) suggest that specific BE strategies can be very effective when there is a gap between intention and behavior. For example, people may *intend* to save for retirement but never get around to doing it. In such situations, switching from an opt-in to an opt-out approach has been proven to activate latent demand for such savings. Active choice -- stopping people in the middle of a process and requiring them to state their preference -- can also be effective. (My team used this in a healthcare setting to substantial effect, helping increase the number of patients receiving medications for chronic conditions via home delivery.)

One challenge with these two strategies is that there is no free lunch; unlocking latent demand requires a lot of backend rework to make things easy and automatic for the consumer. In addition, they are counterproductive if there is no latent demand; you're just creating friction for your customers.

But all of this is to say that some elements of BE / choice architecture are alive and well, and their effectiveness is not easily explained by classical economic theory.

Expand full comment

“Behavioral economics” as a set of mysteries that need to be explained is as real as it ever was. You didn’t need Kahneman and Tversky to tell you that people sometimes make irrational decisions, and you don’t need me to tell you that people making irrational decisions hasn’t “failed to replicate”.

To be even more precise, the claim is that people are irrational in somewhat predictable ways. If they were just irrational, you'd expect as many behaviours/responses as there are people/possibilities but, in many experiments/problems, that's not what we're seeing. People make suboptimal/irrational decisions in a way we can predict/exploit...

See https://www.investopedia.com/terms/m/mondayeffect.asp for a simple example.

Expand full comment

Is it possible that, as people become aware of behavioral economic findings, they adjust their behavior and subsequent studies have a harder time replicating the originals?

Expand full comment

I feel like a bit of a broken record always talking about COVID on this blog (I have other interests, I swear!), but this part seems disagreeable:

> Nobody believes studies anymore, which is fair .... There are about 90 million eligible Americans who haven’t gotten their COVID vaccine, and although some of them are hard-core conspiracy theorists, others are just lazy or nervous or feel safe already.

"lazy or nervous or feel safe already" is not the least charitable way to describe them but it seems pretty close. How about this rephrasing:

"There are about 90 million eligible Americans who haven't gotten their COVID vaccine, and although some of them believe that scientists are frequently and deliberately collaborating with each other to generate fake consensus about untrue claims, others just don't believe the studies that support the vaccines are reliable due to the normal array of scientific errors and biases."

Really, the bloody-minded focus on "nudging" people into taking vaccines (more like shoving) is one of the best ways to create resistance to a proposal. The moment a government starts "nudging" people, it's basically taking the position that it's more rational than they are and that no argument, regardless of how well phrased or debated, can possibly get the masses to do what is best for them, because they are stupid. But people don't like it when government officials imply the citizens are stupid and the officials are enlightened, partly because both world history and the present day have cases where that idea got taken too far, leading to lots of people ending up dead or in camps.

Governments should just be focusing on providing as much high quality, trustworthy data about vaccines as possible and then just leaving it there for people to study, poke at, and pick up on their own initiative (or not). Instead a few of them are openly talking about building various kinds of inside-out open air prisons in which ordinary doors and walls act as the fences, or even putting the unvaccinated under perma-lockdowns (i.e. house arrest). This simply says, "we can't win the arguments on their merits so we have to force you to comply", which in turn makes whatever they want come across as much more dangerous.

Expand full comment

As a non-US citizen, I am baffled by the usage of tipping as an example for a "rational economic actor". I get the point you're trying to make about nudging, but tipping is much more of a cultural custom than anything that is done for economic purpose. Customer service still exists in the countries where people don't tip, after all.

To me it seems that US citizens tip for the same reason Russians remove shoes when entering the house, or the Swiss shake both men's and women's hands when greeting - it just feels weird not to. Case in point, Americans tip abroad too, when there's absolutely no incentive to do that.

Expand full comment

> I knew all this, but it was still really hard to guess. I did it, but I had to fight my natural inclinations. And when I talked about this with friends - smart people, the sort of people who got into medical school! - none of them guessed, and no matter how much I argued with them they refused to start. The average medical student would sell their soul for 7.5% higher grades on standardized tests - but this was a step too far.

A great microcosm of why behavioral econ is bad, IMO. I had tests like this, except for math competitions, and everyone guessed. It’s all very local and specific and contingent in ways that behavioral economics’ methods are neither equipped nor desiring to measure.

> Nobody believes studies anymore, which is fair. I trust in a salvageable core of behavioral economics and “nudgenomics” because I can feel in my bones that they’re true for me and the people around me.

Psychoanalysis, behaviorism, Christianity, faith healing, homeopathy, chiropractic, new age cults, hypnosis, etc. Again, it’s all local and depends on so many different things. But it’s not generalizable at all in the way they imply - a different person who had learned different things in childhood (and think learning in the sense of learning math or sociability, not “u were too nice to child so he is narcissist” type psychoanalysis nonsense popular a hundred years ago) or just was in a different situation (poor person who grew up on a farm taking the survey who really cares about doing what he’s supposed to to succeed vs rich kid who grew up in school and has learned to just daze through tests he takes and just wants to get over with the survey quickly). It’s totally possible for a population or cultural happening-local phenomenon to be true, but also only true in the sense that people choose to do that because they were taught it’s rude not to tip 20% or something and if they were told not to they wouldn’t, and that seems very different from the sort of claim that I sense from the field.

> Galenter and Pliner

Based on demonst’s thing, it seems like they didn’t find instantaneous loss aversion, but large scale loss aversion, which is much more “diminishing marginal utility” style? dunno

> not rioting at systemic racism

Needing a scapegoat or martyr to riot is *very different* from caring only about a scapegoat or martyr. Millions of people care very deeply about systemic racism and black people in America - correctly or not - and they did before Floyd. The dynamics there are not at all well described by a general “population or individual”. Individual cases are easier to *prove*, as we saw with the video that made it blow up - literal thousands of other cases of individuals did not gain traction because they didn’t have good videos or evidence despite being individuals. And plenty of other population phenomena cause protests!

Expand full comment

On the Identifiable Victim Effect: for many things there are lots of examples of individual victims, but only a few go viral.

A good example here is people unable to afford healthcare using crowdfunding sites. Some people are able to raise tens or hundreds of thousands. Many more get a few hundred bucks and that's it. But a lot more donations are made to crowdfunding than to healthcare charities that provide care for people who can't afford it (as distinct from other types of healthcare charity, like research).

Perhaps if you had a thousand different stories about a thousand different single mothers written by a thousand different people, three of them would raise a lot more money than the generic story about single mothers in the abstract, and the other 997 would not show an identifiable victim effect.

There were a bunch of proto-George Floyds, like Tamir Rice and Eric Garner, who went viral but less so. There were lots of local ones who went even less viral than that. Whatever it was about Floyd that got his story to go viral in a way that the others didn't, that's the Individual Victim Effect. Perhaps it should be called the Sympathetic Individual Victim Effect or something?

Expand full comment

"you value something you already have more than something you don’t."

Really sounds nitpicky vs "loss aversion" - wouldn't they produce the same behavioral outcome precisely 100% of the time? Can anyone delineate Loss Aversion and Endowment Effects?

Expand full comment

"All subjects were entered into a raffle to win a gift certificate for participating in the study, and they were offered the opportunity to choose to donate some of it to single mothers. Subjects who saw Ad B didn’t donate any more of their gift certificate than those who saw Ad A. This is a good study. I’m mildly surprised by the result, but I don’t see anything wrong with it."

I wonder about this. I see a lot of ads in magazines etc. which do this exact thing - "Here is John, who is living rough on the streets for three years since his abusive stepfather threw him out of the family home at the age of fourteen". Usually there's a small-print disclaimer about "Photo is of actor, not of real homeless person" but you can generally *tell* that this is indeed an actor pretending to be the real person, the same way that radio ads where it's purportedly Mary and Sheila talking about this great new furniture store that just opened, you know it's two actresses and not real customers.

So maybe that has an effect - when you can tell this is "Actor playing a part" it doesn't hit you like "this really is a kid sleeping rough" where you would see them in a news story or documentary.

I think it being in a raffle also had an effect; this wasn't people choosing to make a donation based on General Ad A or Personalised Ad B, this was people being asked to give up part of a prize. I think in that case people are making decisions based on "how much do I think is reasonable to give, out of the prize I won?" rather than "will I give my donation based on how hard my heart-strings were tugged?"

(I hate heart-strings tugging campaigns because I *know* they are trying to emotionally manipulate me, and this annoys me so much I deliberately *won't* donate to such efforts).

Expand full comment

Without knowing much about the details of 'behavioral economics', it seems to me the criticism of the detractors is largely that while some effects can be shown in studies, it is not nearly as neat and generalizable as concepts such as 'loss aversion' would suggest. A lot of it is either fairly common sense or rather very complicated effects, but giving it the branding of 'behavioral economics' and reducing it to a couple of basic tenets is a marketing strategy of Kahneman etc. rather than scholarly ingenuity.

Expand full comment

I may have read about it in the book "Nudge" but one example that stood out for me was for new employees and 401K plans. Although it's a great idea to sign up many don't because of a lack of understanding and a menu of confusing "investment choices" which almost nobody wants to learn about and have to pick from. By making it opt-out and offering a choice to "just take the default most recommended investment allocation" the participation goes way up. These are not the precise details but in these kinds of cases, I can see where a "nudge" can make an out-sized impact.

Expand full comment

To me, arguing whether loss aversion *or* status quo effects are real is a bit like arguing whether the world is made up of centimeters or inches. They're just different representations of the world, describing large chunks of the same territory through slightly different lenses. Maybe one of them will prove slightly more useful than the other? Anyway, I agree with Scott that this is a far cry from proving loss aversion wrong.

Expand full comment

Is this the same Jason Hreha who founded the Walmart Behavioral Lab - the first Fortune 50 behavioral economics team? Who made and sold a startup, is at Stanford, and is listed along with Dan Ariely on a behavioral-science website as co-authoring an article?

If so, this is more interesting since it is Hreha repudiating his own considerable economic success.

Expand full comment

This comes across as too cute by half.

As I understand it there is no career/professional value in producing a study that says, "I attempted to prove that the conventional wisdom X is false. After 18 months of study, reams of data and careful analysis I've come to the inescapable conclusion that the conventional wisdom is in fact correct."

Expand full comment

The idea of risk seems a bit absent here. Is that how it really is in this field?

Investment banking acts to maximize expectation, with low levels of risk aversion, and we have global financial collapse every time regulatory restrictions are relaxed a bit to allow them to make more money.

Black Swan or Skin in the Game are probably good books on this, but the claim Taleb makes is that we are woefully bad at predicting risk; and that actually what we are measuring when we measure risk aversion is the level to which economics is struggling to understand risk and payoff, not people in the street, who generally do a better job of keeping their affairs in order than businesses employing lots of economics/financial/probability experts that constantly need government bail-outs.

Wilmott (the quant, with a journal of the same name) argues quite persuasively in various places that even our current idea of "correlation" as defined in probability theory, is worse than useless for understanding risk and payoff.

Expand full comment

I could have sworn that Thinking, Fast and Slow talks about loss aversion in terms of a faster-than-linear curve (maybe y=x^2, or something like that). Maybe they didn't say "small losses don't really matter" in English, but if you draw the curve near zero, it's apparent.

Is my memory off?
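For what it's worth, the standard parameterization (I'm assuming the median estimates from Tversky and Kahneman's 1992 cumulative prospect theory paper here) is a power curve with exponent below 1, steepened for losses by a multiplier, rather than anything like y=x^2. A quick sketch:

```python
# Tversky & Kahneman's (1992) cumulative prospect theory value function,
# with their median parameter estimates (alpha = beta = 0.88, lambda = 2.25).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha           # concave for gains
    return -lam * (-x) ** beta      # steeper (and convex) for losses

# Losses loom larger than equal-sized gains at every magnitude:
for x in (1, 10, 100):
    print(x, value(x), value(-x))
```

Note that a power function with exponent 0.88 is *steeper* than linear near zero, so under this fit small losses loom disproportionately large rather than small.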

Expand full comment

"Unfortunately, the findings rebutting their view of loss aversion were carefully omitted from their papers, and other findings that went against their model were misrepresented so that they would instead support their pet theory. In short: any data that didn't fit Prospect Theory was dismissed or distorted.

I don't know what you'd call this behavior... but it's not science."

You know, I'm reading The Structure of Scientific Revolutions, spent over a decade in academia, and spent a decade in an industry turning science into products. This sounds exactly like science to me.

Expand full comment

There's a cliche that 90% of human behavior involves giving, receiving, or bartering for attention. These three activities seem to me to correspond to production, consumption and exchange in economics. That is, exchange theory alone is probably an insufficient foundation for behavioral economics.

Expand full comment

This is a brilliant defense of the field and I'm really grateful for it! Another thing that I believe to be reasonably unscathed by the replication crisis is research on present bias (aka akrasia) and commitment devices. Phew for us!

PS, not a correction to Scott's post per se but maybe a correction to an impression readers will likely have. I don't know if it actually matters but is interesting:

Hreha posted that article a full year ago. It reads as (and is) a perfectly apt reaction to the Ariely affair and I presume that when Hreha noticed it being circulated he just savvily removed the date from the article so people wouldn't be distracted by that or write it off as insufficiently timely. (I actually checked the internet archive and he made no other change besides removing the date.)

Expand full comment

"Previous criticisms of loss aversion argue that most experiments are performed on undergrads, who are so poor that even small amounts of money might have unusual emotional meaning. Mrkva collects a sample of thousands of millionaires (!) and demonstrates that they show loss aversion for sums of money as small as $20."

The millionaires are interesting, but I suspect they aren't thinking about the real sums but instead in relative terms. To me, all of these experimental propositions feel less like "Would you like to win a small sum of actual money?" and more like "Come up with a correct heuristic for comfortable gambling on sums significant to you."

Personally, I find it weirdly difficult to isolate a single instance of a favourable-odds gamble from the possibility of a ruinous, if quite unlikely, losing streak under the same odds.
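That losing-streak intuition is easy to make concrete. A sketch (my own numbers, using the +$60/-$40 coin flip from the post): the probability of ending a run of fair flips with a net loss shrinks as the run gets longer, but never vanishes.

```python
from math import comb

def p_net_loss(n, gain=60, loss=40):
    """Probability of being down money after n fair 50/50 rounds."""
    # Down money with k wins iff k*gain - (n-k)*loss < 0.
    return sum(comb(n, k) for k in range(n + 1)
               if k * gain - (n - k) * loss < 0) / 2 ** n

for n in (1, 10, 100):
    print(n, p_net_loss(n))
```

So a single favourable-odds round leaves you down half the time, and even committing to many rounds only shrinks, never removes, the chance of the ruinous streak.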

Expand full comment

Re: the Endowment effect, I don't think it's so much a cognitive bias as a premium on information. In a world where people are occasionally swindled by others, the expected return on trading your mug for another that someone else *claims* is identical is, in fact, negative - once you account for a >0% chance that they might be trying to pull a fast one. The researchers might know for a fact that the endowed coffee mug and the one they offer are the same, but that fact isn't available to the study participants, so it is rational for them to ask for a higher price to offset their risk of losing an apparently functional coffee mug. If the subjects are allowed 10 minutes to study and fidget with mug A, then given 10 minutes to study and fidget with mug B, and *then* asked which one they wanted, and they mostly choose the first one anyway, then that's weird and seems to fit the bill for a bias. Maybe those studies exist, does anyone know? To me it seems that what gets cited for examples of the endowment effect is (like in Kahneman, Knetsch and Thaler's study) where researchers take two things that should be at the point of price indifference on the open market, note the subject's preference for the one they know more about, and then wave their hands and say "loss aversion!"

Expand full comment

This doesn’t have much to do with loss aversion or the main point of the post, but when you talked about choosing a tipping amount, it reminded me of a thing I did in high school. (I’m not sure if I came up with this thing myself or heard of it from somewhere else. For all I know, this is part of some famous psych study.) I would write the numbers 1 2 3 4 spaced out horizontally on a piece of paper. Then I would ask a random person (well, as random as I could conveniently manage from my high school) to choose one. On the back of the paper, I had already written - why did you choose 3? Because it quickly became obvious that 3 was the overwhelmingly favorite choice. In my not-quite-random, n = 100ish experiment, 3 was chosen about 90% of the time, 2 was chosen about 10% of the time, and I don’t recall anyone choosing 1 or 4. I never let anyone participate that had witnessed anyone else making a choice, and I didn’t let them know the choice distribution before they made their own. I wonder if your choice of a tipping amount (3rd choice out of 4 ascending options) is almost the same thing, whatever that thing is. I never searched for an explanation, but it was a fun way to pass through the boredom of public schooling.

Expand full comment

Given that Loss Aversion and other behavioral economics theories are largely applied to sales and marketing applications, it seems that you could aggregate and genericize sales data from e-commerce platforms that compare a loss-aversion framing to another, more neutral framing. From a practical standpoint, we do this kind of testing all the time. So it seems like these A/B testing could be modified to provide alternate data points on this topic.

I understand that this method may not provide a perfect testing environment. I'm just wondering if gathering data from "real world" experiments would provide additional reference points.

Also, one has to wonder if companies like Google and Amazon already have reams of this kind of data/analysis that we don't have access to.
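For what it's worth, the usual way such an A/B comparison is scored is a two-proportion z-test; a minimal sketch, with entirely hypothetical conversion counts:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: loss-framed copy converts 120/1000, neutral 100/1000.
z = two_proportion_z(120, 1000, 100, 1000)
print(round(z, 2))  # |z| > 1.96 would clear the conventional 5% bar
```

At platform scale the sample sizes are large enough that even tiny framing effects would be detectable, which is why the aggregated-data question is interesting.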

Expand full comment

I used to work in retail (entry level starting in high school), and the company I worked for did sales just about every week. Sometime in the middle of the week the specific prices and items might shift around, but generally speaking almost everything was on sale for 40-60% off of the "suggested retail price" every week. A new CEO came in and got rid of the sales, and just started doing Walmart pricing - cheaper all the time. The prices were pretty much identical, and so were the items, but didn't involve coupons, waiting for a good sale, or anything else.

The customers hated it, and sales dropped like a rock. It turns out, they really liked getting a "good deal" on a higher-priced item. They felt like the $100 price tag meant it was a quality item, but the $40 sale price meant they were getting a steal. Just selling them a $40 item was a low-quality item and no discount at all! I think it works for Walmart because people expect fairly low quality and believe the prices are unusually low. Experience shopping there seems to confirm both are true.

Expand full comment

The thing with your George Floyd anecdote is that even if the identifiable victim effect wasn't real, it doesn't mean people would *never* rally around an identifiable victim; it's just that they aren't more likely to do so. So it's entirely plausible that, for other reasons, or just randomness, people rallied around an identifiable victim in this case.

Expand full comment

I think you poisoned the well unintentionally with "statistics don't start riots" because now people are fixating on riots. Rioters were an infinitesimal proportion of people who were moved to care or act in some way by the recording of a single individual's personal experience, but were not moved to care or act in the same way by statistics. Those can't just be explained away by "well, people like to riot."

I think what you may be missing is that the study design of contrasting just statistics with just an example of one single mother isn't comparable to something like a video of police beating up or killing a black person. The latter isn't happening in a vacuum. People being moved to do something after watching George Floyd isn't a confirming example of the identifiable victim effect. Those people are still reacting to an aggregate of events; that is, they're reacting to statistics, or at least to their perception of statistics. The reaction is to the belief that this kind of behavior in police is pervasive and widespread. The one video is just a precipitating event. It didn't cause the response on its own. "Straw that broke the camel's back effect" is not a cognitive bias. It's just the way multifactor causation works.

Expand full comment

The endowment effect is a nice example of the ambiguity of the claim that behavior is irrational. At first glance, it is irrational to value things you have more than identical things you don't have. But that pattern of values makes sense as a commitment strategy for enforcing property rights. If I am willing to fight hard to defend something I have and other people know that, then with luck people won't try to take things away from me and I won't have to fight for them. If I am willing to fight hard to take things other people have, I am likely to get into a lot of fights. Think of it as analogous to territorial behavior in animals.

I discuss some of this in an old article:

http://www.daviddfriedman.com/Academic/econ_and_evol_psych/economics_and_evol_psych.html

Expand full comment

I have a hypothesis on the identifiable victim effect question. The identifiable victim effect doesn't exist (at all). Rather, people respond to archetypal stories, and sometimes an archetypal story is easier to construct using identifiable characters. George Floyd is actually some kind of archetypal story about the powerful abusing the weak told through the modern medium of a smartphone camera, and perhaps the combination of plot, characters and medium (and performance) is effective in telling that archetypal story in a way that statistics about police violence cannot be.

This superficially sounds the same as Hreha's observation that "... creative solutions that are tailor-made for the situation at hand *always* perform better than generic solutions based on one study or another." but I don't think it is. For one thing this places finite and measurable bounds on definable characteristics and suggests a path to producing a fully testable hypothesis that describes a framework for engagement with issues through media. Such a hypothesis, if you were able to even partially validate it, would in turn suggest paths to moving the "creative arts" into the realms of social science.

I imagine that you could describe stories (including interventions through media etc) as n-dimensional positions in vector space and assess the effects they would have on viewers. Actually, now that I think about it, this is all we're doing with recommendations on netflix, youtube etc, except we want to measure actions and sentiments about things that occur off the platform and we want to test the idea that certain elements of presentation relate to the literary structure to produce those different outcomes....

This isn't my field, so feel free to correct my ignorance.

Expand full comment

Ah, behavioral economics.

I've read Thaler's "Misbehaving", and the story it tells is of a discipline fixated on an axiomatic notion of rational actors (using a particular definition of rationality), to which behavioral economics, with their experimental, interdisciplinary, real-world approach was a necessary correction.

Even remembering all researchers overblow their work's impact and importance, and trying to be careful about accepting anything that confirms my priors, I see no reason to assume the story isn't roughly true. This would make behavioral economics an important step in the progress of scientific paradigms even if all of their specific theories turn out incorrect, simply by pointing in the right direction that would otherwise continue being ignored. (The first question I would ask of its critics to assess whether they're worthy of listening, then, is "what's your alternative?")

There is, of course, a much less charitable interpretation of the above, which is that behavioral economics constitute, to paraphrase Robert Skidelsky, "not any new insight, but technical prowess in making an old insight acceptable to the economics profession". This impression is exacerbated by the fact that practical applications pursued by their practitioners turn out to be some "nudges" on the margins, aiming to exploit the "irrationality" or lead the "irrational", "misbehaving" people towards a more "rational" outcome. Essentially, all its momentum comes from catching up with advances in other fields of social science, and adapting them in the way that left the entire discipline of economics, its goals and underlying assumptions intact. If one thinks economics is in a serious need of paradigm change, well, this clearly ain't it.

On yet another end, it still means the field of economics can now direct its funding into sound empirical science, which seems to benefit everyone. (I now see many psychologists and other social scientists cite and praise economists' research, which, given psychology's recently exposed thorny relationship with research standards, obviously.)

Expand full comment

Regarding the medical test story it is not so clear to me whether it is as simple as all that.

Let's say we have N questions, modelled by N not necessarily iid discrete random variables X_i taking values in { 1, 0, -0.5 } (right, blank, wrong) and also modelled by Y_i taking values in { 1, -0.5 } (an answer is forced). It may still be possible that E(\sum X_i) >= E(\sum Y_i).

Here's an attempt at a proof. Let's take just one question. We can represent probability distributions over the three choices as points on a 2-simplex, i.e. (p_1, p_2, p_3) such that \sum_i p_i=1 and p_i \in [0,1]. Their expectation given the scores {1, 0, -0.5} is just a linear map to R. Similarly, we can represent the probability distribution over just the two choices as the 1-simplex face of the above 2-simplex. Now, for example, the uniform distribution is a point in the centre of the base of the 2-simplex. Now 0.25 is a regular value of the map, so its inverse image is a 1-dimensional submanifold of the 2-simplex (treated as a manifold with boundary), meaning there is a curve from the point (1/2, 1/2, 0) representing the uniform distribution over just T and F to the interior of the 2-simplex. Which means I can have a distribution (p_1, p_2, p_3) which gives the same expectation.

What is my intuition behind this? Say for a given question I don't know that the answer is True any more than I don't know that the answer is False; then it makes sense to assign them equal probability. However, if I am slightly more confident of one answer than the other -- which seems more realistic -- while still being in the region of guesswork (for me), then the probabilities to be assigned will not be uniform. That is, the Bernoulli parameter associated with the question is itself not uniformly distributed. In fact the Bernoulli parameter for a question need not even follow a discrete distribution, and in that case I have to further figure out how to compress it to one number.
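The one-question version of this is easy to check numerically: with the scoring {1, 0, -0.5}, leaving a blank (expected score 0) beats answering exactly when your subjective probability of being right is below 1/3.

```python
def ev_answer(p_correct, right=1.0, wrong=-0.5):
    """Expected score from answering, given probability p_correct of being right."""
    return p_correct * right + (1 - p_correct) * wrong

# A blank scores 0, so answering pays exactly when p_correct > 1/3:
print(ev_answer(0.5))   # a 50/50 guess between T and F beats a blank
print(ev_answer(0.25))  # worse than leaving it blank
```

The simplex picture above is the same calculation done over all mixtures of the three options at once.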

Expand full comment

"But also: there are several giant murals of George Floyd within walking distance of my house. It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”. But somehow those statistics don’t start riots, and George Floyd does. You can publish 100 studies showing how “the Identifiable Victim Effect fails to replicate”, but I will still be convinced that George Floyd’s death affected America more than aggregate statistics about police shootings."

I'm surprised this point didn't go in a different direction. I think with the notion of Identifiable Victim Effect there's a lot of context missing. And I suspect this may be true of a number of different biases. I think the George Floyd poster, and all its attendant implications, has to do with *stacking* - look at the parentheses: "(and Trayvon Martin, and Michael Brown, and…)"

The same point can be made re: marketing efforts based on nudging - the cheesiness of a particular marketing campaign is not only a function of what the nudging seeks to achieve but also the zeitgeist-based (rolls eyes...) context in which it functions. Which I guess is why music, and marketing, needs periodic reinvention.

The broader point here is I would love to see research looking at how biases operate contextually - how many publicly-adjudicated Identifiable Victims does it take to make for a population group to start exhibiting bounded rationality?

Expand full comment

I have two basic questions about loss aversion and wondering if these are beside the point or have been addressed in the research:

1] The experiments you generally read about are with small sums of money, say $100. If a person has a total wealth of say a few 100k or more (you may include any discounted future earnings in there if you like), at the scale of $100 my utility function should be ~linear. So I should be risk neutral. So in order to establish loss aversion experimentally wouldn't you need to be dealing with sums that are actually material to the person?

2] It's all good to present a fictional bet in an experimental setting, but in the real world someone needs to take the other side of the bet. Say everyone's loss averse; one person's loss is another person's gain, so how do you get to an equilibrium?
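A quick sketch of point 1, assuming log utility (any smooth concave utility gives a similar order of magnitude): for wealth of $300k and a 50/50 gamble over $100, the curvature-implied risk premium is on the order of a cent or two.

```python
from math import exp, log

wealth, stake = 300_000, 100
expected_utility = 0.5 * log(wealth + stake) + 0.5 * log(wealth - stake)
certainty_equivalent = exp(expected_utility)
risk_premium = wealth - certainty_equivalent  # what curvature alone costs you
print(risk_premium)
```

About $0.017: so at these stakes a log-utility agent is risk-neutral to within two cents, and any observed aversion to a $100 gamble has to come from somewhere other than the curvature of the utility function.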

Expand full comment

Behavioural economics is trying to solve all of economics by first solving all of psychology. If you think about it, this really is the logical end goal. The plan is to predict exactly what each individual is thinking and then make economic predictions factoring in each and every possible action (or at least the average) a human could make. This seems so stupid. How is this the cutting edge of economics? Why can't we just give people free healthcare?


This is Steve Sailer 101, sans golf course design references.

(BTW, Steve, I played Dunes Club a few weeks back - it is that good. They change the pins between 9s.)

Behavioral economics and social psychology tried to make iron rules of human behavior. Human behavior is constantly mutable. So iron rules do not exist.


>It sure seems people cared a lot when George Floyd (and Trayvon Martin, and Michael Brown, and…) got victimized. There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops”. But somehow those statistics don’t start riots, and George Floyd does.

The ~1000 number is specifically for those fatally *shot* by police, not deaths from all causes. That might seem like a quibble, but between that and the 'police' qualifier the statistic is only capturing one of the three specific cases listed - and not the most salient one, either.

I'm uncertain how much I'd disagree with the point being made even if it was off by an order of magnitude, but it's a notable error when used for rhetorical flourish and IIRC Scott has made it before. Might see if I can dig up the precedent...


Hi Scott, I think you're generalizing too much from your own experience. When I faced multiple choice tests with negative marking I always calculated the optimal strategy for guessing beforehand and stuck to it. E.g., in the physics/math GRE there were four choices (or five, I don't exactly remember how many) per question. If one were to guess completely at random then one would get a zero or negative score in expectation. However, if one could eliminate even one choice then the expectation would become positive. So whenever I could eliminate one choice I answered the question. I know many other people who did the same. Similarly for tipping, I calculate the amount of the tip (10% or 20%, so it's an easy calculation) and add it separately. Behavioral economics can't account for such behavior.
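The guessing arithmetic above is easy to check. A minimal sketch, assuming the classic 5-choice format with a -1/4 penalty per wrong answer (the scheme is an assumption; adjust the penalty for other tests):

```python
from fractions import Fraction

def expected_score(choices_left, reward=Fraction(1), penalty=Fraction(1, 4)):
    """Expected score of guessing uniformly among the remaining answer choices,
    with `reward` for a correct answer and `penalty` deducted for a wrong one."""
    p_correct = Fraction(1, choices_left)
    return p_correct * reward - (1 - p_correct) * penalty

# Pure guess among 5 choices: 1/5 - (4/5)*(1/4) = 0, so guessing is neutral.
print(expected_score(5))  # 0
# After eliminating one choice: 1/4 - (3/4)*(1/4) = 1/16 > 0, so guess.
print(expected_score(4))  # 1/16
```

With those numbers the penalty is calibrated to make blind guessing exactly break even, so any elimination tips the expectation positive - matching the strategy described.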


Dude, write the names correctly. It is Yechiam, not Yechaim.

As a Hebrew speaker, it made for bumpy reading.


To me "behavioral economics" is the study of the Yogi Berra conundrum:

> Nobody goes there anymore. It’s too crowded.


I think so much of this (the actual argument AND the meta-argument) boils down to the transition

(no opinions) -> heuristics -> ideology.

This is a pattern one sees everywhere.

You want to buy a car. If you're like most people, you just don't care. You had Toyota, it was good. Then a Ford, it was good. Now you can get a good deal on a Kia.

You want to buy a phone. Well you've used Macs and you like them. You had an iPod and you liked it. The heuristic "I like Apple stuff" seems to work for you. You can spend a month researching phones, or you can go with the heuristic.

But somehow (and I think this is a transition that has been GROSSLY UNDER-THEORIZED in social science) a heuristic can become an ideology. My heuristic (hah!) until I see evidence otherwise is that this is essentially a transition from "I like X" to "I hate not X".

The heuristicist is happy with his heuristic and couldn't care less what choices you make.

The ideologue finds it essential to defend every bad choice Apple makes, to attack every good decision Intel makes. This rapidly shades into hanging out with the Apple people to make fun of those stupid Windows people, on to "how can you go out with someone who doesn't just like x86 but who works for AMD???"

This is everywhere!

Someone uses the heuristic "white suburbs" as a way to solve the problem "I want a quiet neighborhood". All they care about is the quiet part. But those who see the entire world as ideological (along this particular dimension) cannot believe that someone just made a quick choice of this type of neighborhood for this type of reason -- clearly they MUST have been motivated by racism. After all, people are divided into Chevy people and Ford people; there's no such thing as a person who just doesn't give a fsck about their brand of car...

We start with the heuristic "wearing a mask is probably worth doing in spite of the hassle".

In some people this transmutes into "I HATE non-mask-wearers", and because no one's willing to admit this, we get tortured excuses about "well if they don't wear masks it results in a worse experience for the rest of us". Perhaps true, but when the battle shifts to ivermectin and their choice has ZERO influence on your future health, it's still all about hating the other.

This transition from heuristic to ideology seems very easy. Cases based on products are especially valuable for understanding it because most of us hold all three relationships to one product or another: we can't imagine that anyone especially cares about their brand of TV, while caring a lot about a brand of soda, and shunning people who listen to the wrong music.

This is often explained as tribalism, but I'm not sure which comes first. In a lot of cases to me it seems like the heuristic comes first, it transforms to ideology, and then a tribe is discovered. (Maybe that's the loner path, the tribal first path is more common? But on the internet, for fan type things, it definitely seems like the order is often heuristic -> ideology ->tribe.)

So back to the article.

What I see here is an example of this sort of thing. The Behavioral Economics guys are making a bunch of observations (which can be viewed as heuristics -- people will often engage in Sunk Cost Fallacy, people will often engage in Loss Aversion; if you don't have better data that's the way to bet as to their behavior). But in some individuals this gets transformed from a heuristic to an ideology or the opposing ideology.

For example I don't get A DAMN THING about the anti-nudge people. They seem to be too stupid to understand that EVERYTHING about a form or procedure or default is a choice, so why not design the defaults as best for society -- with "best for society" being something we debate and vote on if necessary. But anyway, you have these anti-nudge people around, and they have their ideology; not just a heuristic that "nudge procedures are probably bad" but full-on "anyone who ever has anything nice to say about nudge-related issues is my MORTAL ENEMY". And that seems to be everything about why the article was written by Hreha.

And of course it goes all the way. Scott wrote something about this many years ago:

https://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/

which I would summarize as "utilitarianism is a good heuristic -- but it's a HEURISTIC". You can either accept that as a heuristic there are cases where it fails, and try to figure out a better understanding of life -- or you can convert utilitarianism into an ideology, and willingly drive over the cliff if that's what your heuristic tells you to do.

Most of our political insanity seems to derive from what I've been saying -- people who can't tell the difference between heuristics and reality (ie when to accept that the quick answers of the heuristic might be invalid/sub-optimal); and people who refuse to accept that sometimes a heuristic is just a heuristic, not a buried ideology.


> > When the two biggest scientists in your field are accused of "systemic misrepresentation", you know you've got a serious problem.

Not necessarily? It just means your field is big enough to have accusers in it.

> There are lots of statistics, like “US police kill about 1000 people a year, and about 10 of those are black, unarmed, and not in the process of doing anything unsympathetic like charging at the cops” ... But somehow those statistics don’t start riots, and George Floyd does.

According to mappingpoliceviolence.org, the US police kill over 30 unarmed black people each year, and over 100 unarmed people of all races.

Anyway, to suggest Identifiable-Victim-effect-is-not-a-thing is obviously silly if you compare X identified victims with X unidentified victims. If the finding is "1 identified victim 'only' feels as bad as X unidentified victims" for X>100, uh, identifying a victim has a huge effect.

> Ad A: A map of the USA. Text describing how millions of single mothers across the country are working hard to support their children, and you should help them.

> Ad B: A picture of a woman. Text describing how this is Mary, a single mother who is working hard to support her children, and you should help her.

But of course, real life charities normally combine both: highlighting one victim, then citing statistics about the large number of victims. Did they not test the usual combination? Odd.


"If some sort of behavioral econ campaign can convince 1.5% of those 90 million Americans to get their vaccines, that’s 1.4 million more vaccinations and, under reasonable assumptions, maybe a few thousand lives saved."

Your math here seems like it might be using the wrong denominator. If 240 of 330 million Americans are currently vaccinated, then a 1.5% increase should mean either 1.5% of 240 million or of 330 million, depending on what's being measured. In either case, it means the effect is bigger than you said!

Either way, dismissing a 1.5% effect size as generally irrelevant is insane to me.


You should certainly read this piece by a little-known author to understand in what aspects George Floyd differs from a random Mary: https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/

I witnessed the Identifiable Victim Effect firsthand. Friends of my girlfriend were raising money for Zolgensma for their newborn. It’s Pretty Damn Expensive at about $2 million, but it cures a condition which is otherwise debilitating. To my surprise they succeeded, but the very fact that they did indicates that the donors were willing to save the life of one child at a cost which could surely save many more people.
