414 Comments

Rationality is about doing and believing what one has reason to do and believe. I take it then that no one really opposes rationality, but rather there exist disputes as to what counts as a reason to do or believe something. Members of one class of potential reasons have, for unfortunate historical reasons, been lumped together under the banner of "rational" reasons. Then, the debate about rationality is whether there exist reasons beyond those which commonly bear the title of "rational" reasons.


Rationality is whatever helps us improve the reliability of the conclusions of "slow thought."

I'm using "slow thought" in the Kahneman sense, contrasted with "fast thought." Rationality can help "fast thought" only indirectly. Each kind of thought is useful in different contexts.

"Slow thought" goes by reasoning, and reasoning from incorrect assumptions or using incorrect tools is easy to mislead. Even in the absence of bad actors, incorrect thinking leads to stupidity. With bad actors, careless reasoning is easy to exploit (looking at you, QAnon). Rationality is trying to remove errors in "slow thought."

I think Pinker and Gardner are talking past each other in this debate because they don't seem to be directly addressing the difference between fast and slow thought, and each has picked one of those to champion - which is silly, because they're tools for very different purposes.


Hmm, I haven't read much Pinker or Gardner. Given that this article doesn't really define what it means to be rationalist, I'm going to fall back on my intuitions.

No, no, I joke! But I do have a couple of comments:

Intuition has multiple meanings or nuances; Scott is only addressing one of them: "Intuition is really a reaction to a complex set of observations, and the observer isn't aware of how that complex set leads them to the conclusion they reached." But there is another meaning of intuition, which is "The ability to tap into knowledge which is not available to the intellect," such as through prayer or meditation, for the sufficiently attained or lucky. Now, it may be possible to train an AI to do that, or to interrogate a bunch of world-class meditators about how they did it, but it seems unlikely in the near to medium term future, at best.

Does Pinker address the Spock-as-strawman (straw Vulcan?) version of rationality? I mean, Gardner seems to be making the "what does your rationality have to say about emotions, eh?" argument.

And I think that's a valid argument, in that most self-described rationalists I deal with (mostly libertarians, unfortunately) seem to have a really hard time dealing with emotions, and in particular with recognizing when they are buying into emotionalist appeals.

Lastly, rationality seems to be about intellect. And as Galef points out, emotions are often the source of our goals and desires. I mean, rationality can tell me whether one banana is better than another banana, but it can't tell me whether I like bananas more than oranges. Sure, it might be able to tell me which one is better for me, but it can't tell me which one I like more.


I think Yudkowsky's "systematized winning" has some significant overlap with your final idea if you focus more on the "systematized" part.


Love the post, but there's one aspect I think Scott might have missed: the culture of it.

I think when Gardner says he doesn't like rationalism, he's at least partially saying, "I don't like this culture of people who are into math and board games, like or at least don't dislike polyamory, celebrate weird holidays like solstice but not in a way I am familiar with, are mostly white, too often male,... etc." He doesn't see rationalism as systematized winning, but as this culture.


The type of humor in this essay is, I suspect, one of the things people miss about older Scott posts. It's been a while since it was this concentrated.


I wonder if you're overthinking this a bit. I agree that

> Everybody follows some combination strategy of mostly depending on heuristics, but [sometimes] using explicit computation

In my view, describing oneself as a "rationalist" or "non-rationalist" is just a way to say which side it's better to err on, in close cases. (Compare "trusting" vs "distrustful" — everyone agrees that you should trust *some* claims. But a generally trusting person errs on the side of trusting more claims.)

If we wanted to be more jargon-y, we could rephrase the above by saying: "everyone agrees to sometimes use heuristics; a rationalist uses the meta-heuristic of using explicit calculation in cases where the best strategy is unclear whereas a non-rationalist uses the meta-heuristic of relying on heuristics in those cases"
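A minimal sketch of that framing (pure illustration; `heuristics` and `calculate` are hypothetical stand-ins, not anyone's actual decision procedure):

```python
def decide(situation, is_rationalist, heuristics, calculate):
    """Both camps take a clean heuristic when one applies; they differ
    only in which way they err when none clearly does."""
    for applies, act in heuristics:        # (predicate, action) pairs
        if applies(situation):
            return act(situation)          # everyone agrees on this branch
    if is_rationalist:
        return calculate(situation)        # err toward explicit computation
    _, default_act = heuristics[0]
    return default_act(situation)          # err toward the least-bad heuristic
```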


"And even if at the end of the day, the bad guys turn out to be more correct scientifically than I am, life is short. And we have to make choices about how we spend our time, and that’s where I think the multiple intelligences way of thinking about things will continue to be useful, even if the scientific evidence isn’t supportive."

https://www.youtube.com/watch?v=ESGLRnitp4k?t=45m


I'm not sure if this is actually relevant, but your final paragraph reminds me of an argument I've had several times about the "pie rule". ( https://en.wikipedia.org/wiki/Pie_rule )

There's some game that's unfair, and someone suggests making it fair by allowing one player to assign a handicap to one side and then the other player to choose which side to play, given that handicap. This incentivizes the first player to pick a handicap that makes the game as fair as possible.

The person making this suggestion often argues that if you are good at playing the game, then you should automatically also be good at picking a handicap that would be fair. And thus, the pie rule is meta-fair in the sense that the winner of the-game-with-pie-rule should always be the same as the winner of the-game-starting-from-a-hypothetical-fair-position.

I disagree.

I think the intuition for their claim is something like: One possible way of picking winning moves is to consider every position you could move to, estimate your degree of advantage in each one, and then pick the move that leads to the largest advantage. And if you can determine the degree of advantage of an arbitrary game-position, then you can also pick a fair handicap just by choosing the advantage that is closest to zero, instead of the largest advantage.

But that is not the ONLY possible way of picking winning moves. You could pick winning moves even if you can only RANK positions, and cannot score them on any absolute scale. You could even pick moves by applying some heuristic to the moves themselves, rather than to the resulting game-positions.

If you just have a black box that outputs the best move, you can't use that black box to pick a fair handicap.
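To make the asymmetry concrete, here's a rough sketch (hypothetical interfaces, purely for illustration): a numeric `score` oracle gives you both move selection and handicap selection, while a pairwise `better` oracle gives you move selection alone.

```python
def best_move(moves, resulting_position, score):
    # score(position) -> our numeric advantage; pick the move maximizing it.
    return max(moves, key=lambda m: score(resulting_position(m)))

def fair_handicap(handicaps, starting_position, score):
    # A fair handicap is the one whose start is closest to even (score 0).
    return min(handicaps, key=lambda h: abs(score(starting_position(h))))

def best_move_rank_only(moves, resulting_position, better):
    # better(p, q) -> True if we prefer p to q. This is enough to pick
    # moves, but it never says how far a position is from "even", so no
    # fair_handicap can be built from it.
    best = moves[0]
    for m in moves[1:]:
        if better(resulting_position(m), resulting_position(best)):
            best = m
    return best
```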

This sounds a little bit like the idea that "skill at X" and "skill at studying X" are not (necessarily) the same.

X and studying-X are fundamentally related in a way where if you had a complete understanding of the nature of X that allowed you to predict exactly what would happen in any given situation, then that should make you good both at X and at studying-X.

But this doesn't rule out the possibility that there could be lesser strategies that make you good at one but not at the other. A black box that makes money doesn't (necessarily) give you a theory of how money-making works. A pretty-good-but-not-perfect theory of money-making won't necessarily let you outperform the black box.


Isn't he explicitly talking about Tooby & Cosmides' ecological rationality? This isn't "an argument about heuristics. It’s the old argument from cultural evolution: tradition is the repository of what worked for past generations."


The individual action of engaging in rationality clears the dust out of the intuition pipes. Then the next time the angel flies by to deliver an inspiration, they have a direct shot.

This is flippant, but in terms of discussing the symbiosis of rationality/nonrationality, it’s a necessary point. Lots more to say but busy now - this was a good read.


Most explicit anti-rationalism I encounter boils down to "think locally & politically, because abstract first-principles thinking sometimes leads to socially undesirable conclusions." Of course that's mostly a complaint about how some perceived rationalists behave (antirational "read the room" vs rational "just asking questions") rather than a principled objection to rationalism in the abstract, but then that's exactly what a rationalist would say...


There is the concept of rational irrationality, in which it is instrumentally rational not to be so epistemically rational. Being epistemically rational is the best way to be correct; however, being correct is not always worthwhile (though it frequently is).

I think that most people have little to gain by being rational about many topics, and frequently being correct lowers the quality of their life and wellbeing. I'm thinking of religious apostates and political dissidents who are actually correct about their beliefs, but not in line with the larger society. It is also not useful to try to analyze every situation in depth, like in the case of the spam emails that you mentioned.


I agree that Pinker is being knowingly glib by equating the mere act of reasoning with the explicit practice of rationality. I haven't read his book (yet), but Gardner's objection brings to my mind Elizabeth and Zvi's Simulacra levels.

As in, committed, self-identified rationalists (and chauvinistic monists, for that matter) appear to make something of a cultural fetish of the object level. Instead of also applying rationality to the higher simulacra levels - how people think, and how people wish to influence the ways others think, and so on - they make it their mission to oppose and reduce the chaos produced by these considerations. Julia Galef's book is pretty much about that.

Gardner seems to be against this project of narrowing down discourse toward the object level, deeming it both impossible and potentially harmful in the process. At the very least, the project shouldn't cloak itself with the purity of mere reasoning. The moment you have a mission, you're playing the game.

(The fact that all the simulacra levels do eventually add up on the object level in the end is both true and mostly irrelevant - about as useful as pointing out all the hormones and gene expression to a man in the throes of love.)


#1 Don't knock drug-fueled orgies until you've tried them. The risk/reward ratio is probably favorable compared to many other edgy-but-socially-acceptable activities like general aviation.

#2 When a guy jumps out of the bushes (or in my case, the front seat of a cab in Panama City) and sticks a gun in your face, you have a remarkable amount of time to perform rational analysis. Adrenaline is a magic time dilator. In my case, after what seemed like an eternity of internal debate (but was actually half a second) in which I contemplated a half-dozen courses of action, the conclusion was "jump out of the moving cab". The comedown was harsh though. Adrenaline is one hell of a drug.


I've never been a big fan of the term "rationality" for a lot of the reasons described in this post -- it seems to carry the connotation (warranted or not) of opposing intuition and heuristics even when those may be valuable. I appreciate the bit about "truth seeking" and "study of truth seeking", which I find to be much more clarifying language, so I'll stick with that here rather than trying to say anything about rationality.

I tend to think of truth-seeking as a fundamentally very applied endeavor, and also one that can be taught without necessarily being systematized. Not just a purely intuitive knack, but not necessarily best served by a formal discipline studying truth-seeking either.

As an example, I think one good way to learn effective truth seeking is to study a specific scientific field. A major part of a scientific education is learning how to seek more accurate models of the world. One learns tools, from formal mathematics to unwritten cultural practices, which help to separate truth from seductive but incorrect ideas.

Then, there are of course also people who study how science is carried out (eg, Kuhn & other philosophers of science). Tellingly, most practicing scientists pay relatively little attention to this, other than as a form of navel-gazing curiosity.

Rather than rocks vs geology, I think science vs study of science is a better analogy to truth-seeking vs study-of-truth-seeking. And as with science, I am skeptical that the study of truth-seeking has much to say that will actually improve the practice of truth-seeking, compared to engaging directly with the practice itself. Though perhaps 2000 years of follow-up research will prove me wrong.


Minor correction:

"Newcomb’s Paradox is a weird philosophical problem where (long story short) if you follow an irrational-seeming strategy you’ll consistently make $1 million, but if you follow what seem like rational rules you’ll consistently end up with nothing."

If you follow what seem like the rational rules, you'll consistently end up with just 1 thousand dollars.
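For reference, the standard payoffs with a reliable predictor:

- One-box: the predictor foresaw it and filled the opaque box, so you get $1,000,000.

- Two-box: the predictor foresaw that too and left the opaque box empty, so you get only the transparent box's $1,000.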


Karl Popper's epistemology would be very useful here (it would always be useful, and it's almost never used!!)

In Popperian terms: effective truth seeking is basically just guessing plus error correction. Rationality is actively trying to correct errors in ideas, even very good ideas. Irrationality is assuming good ideas don't have errors (or that errors exist but can't be corrected). Antirationality is actively preventing error correction.

A fictional example: an isolated island community has a sacred spring and a religious belief that no other water source can be used for drinking water. This is a useful belief because water from the spring is clean and safe, and many other water sources on the island have dangerous bacteria. One day, the spring dries up. Rationalists on the island try other water sources; some get sick in the process but they make progress overall. Irrationalists try prayers and spells to make the spring come back. Antirationalists declare the rationalists to be heretics and murder them all.

I think people who are "against" rationalism (and who aren't antirationalists like Khomeini) tend to be in the "good ideas have errors but it's vain/hubristic to think we can improve them" camp. Often trying to improve on established ideas leads you off of a local maximum (only drink from the sacred spring). But being trapped at a local maximum is bad, even if the path to a better point is treacherous. And external circumstances can make good ideas stop working (the spring drying up) anyway.


I feel like there's at least a semantic rhyme here with the idea in linguistics that, _by definition_, a native speaker of a given language has a kind of tacit "knowledge" of the rules of that language, even if they can't articulate those rules. Modern linguistics -- the kind that Pinker practices -- is the enterprise of first taking those rules and making them explicit, and then moving up one layer of abstraction higher, to try to understand why it is that certain kinds of rules exist, and other kinds do not. And the flavor of psycholinguistics that I studied in college tries to ground those questions about the rules in actual brain structures.

As a side note, I actually think Pinker is pretty deeply wrong about linguistics, and in a way that challenges his own claims to being uber-rational. The Johns Hopkins linguistics department is the home of "optimality theory", which posits that the "rules" of a language are actually like a set of neural nets for recognizing and generating patterns -- or, more to the point, they're _like_ a set of computational neural nets, because they are _actual networks of human neurons_. Once you adopt this frame, you can see how a given situation could result in different "rules" for generation giving you conflicting answers, and then you think about how different networks can tamp down the activity of other networks. Hence the concept of "optimality theory". The actual language produced by a given mind is the optimized output of those interacting rule networks. And we get linguistic drift as new minds end up having different balances between rules, and ultimately creating new ones.
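The ranked-violation core of optimality theory is simple enough to sketch in a few lines (a toy example with invented constraints and forms, not the actual JHU formalism): candidates compete on ranked constraints, and the winner has the lexicographically fewest violations. Note how reranking the same constraints flips the output, which is one way to picture drift.

```python
def ot_winner(candidates, ranked_constraints):
    # Each constraint maps a candidate to a violation count (0 = satisfied);
    # ranking = lexicographic comparison of the violation tuples.
    return min(candidates,
               key=lambda c: tuple(con(c) for con in ranked_constraints))

# Invented toy constraints: NoCoda penalizes a final stop consonant,
# Max penalizes deleting material from an underlying three-letter form.
no_coda = lambda form: 1 if form.endswith(("p", "t", "k")) else 0
max_io  = lambda form: 3 - len(form)

print(ot_winner(["bat", "ba"], [max_io, no_coda]))  # 'bat' (Max outranks NoCoda)
print(ot_winner(["bat", "ba"], [no_coda, max_io]))  # 'ba'  (reranking flips it)
```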

I got to sit in on graduate seminars with both Chomsky and Pinker in my time at JHU, and while they're both clearly brilliant, they seemed committed to a purely algebraic, functional model in which rules operate deterministically, like clockwork. This seems to fly in the face of what we know about how neurons work -- it seems, dare I say it, irrational.


I think this explanation undervalues the extent to which having an explicit model of the world helps you develop better intuitions. For instance, the guy who just has a knack for geology may be able to find diamonds better than the geologist, but I bet if you find a kid with a knack for rocks and *teach them geology*, they'll be able to find diamonds better than either of them. Intuition is not a black box, and the brain doesn't do intuition versus models: models feed from intuition, and intuition feeds from models.


I am neither a Rationalist nor an Anti-Rationalist, but if I wanted to make an Anti-Rationalist argument, it would probably go something like this: the world is not a computer.

That is, there may be elements of the world in which reality looks like a near-infinite series of computations constantly being solved and updated and reset; therefore the best way to make good decisions is to develop complementary systems that are really good at correctly solving those computations. But maybe that's just us projecting our internal understanding onto the world. Or maybe this rational world is objectively real, but is merely a very thin bubble encapsulating an underlying world that is better understood through some other attribute. Or perhaps that underlying world is ruled by pure chaos.

By way of analogy, we know that hunter-gatherer tribes sometimes use ritual behavior as a means of randomizing decision-making for things like where to hunt. In this example, the ritual might outperform other more rational methods that involve specific knowledge of the prey. Likewise, a random number generator might outperform the ritual. And a systemized understanding of animal grazing patterns might do better still. But why should it stop there? Maybe there are an infinite number of possible paradigm shifts that take us in and out of what we might consider rationality.

I guess you could respond to that by saying that, whatever the next evolution of correct decision theory is, the Rationalists will be there. But that would imply that there are a lot of conceptions of rationality that only appear rational in hindsight.

I don't really believe this, but I think that I believe something like it.


I'm never sure if we are talking about rationality as it actually exists, or rationality in the same way that a feminist says "If you don't literally think women should die in the streets you are a feminist" and then goes on to be the mishmash of 1000 extra rules, stretch goals, and contagious unhappiness that everyone recognized feminism as before it stopped being a thing.

Because there's that level where, like, you ask someone what rationalism is and they mumble something about updating priors with a formula. But then there's the "what a feminist actually is, ignoring the dictionary definition for actual reality" version as well. And like feminism, it's not all one thing.

I don't know that everything about "actual" rationalism is bad. Like, almost everyone prominent in the space follows a rule that goes something like "Write an article that inescapably indicates that someone was lying, but refuse to say the word lying, or indicate the person did anything bad, and certainly don't try to use any of your influence to create an incentive for them to tell the truth".

But then every one of the same guys knows it's bad, because they spend endless hours trying their hardest to indicate they don't do that, and doing a bunch of work to be really exceptionally accurate beyond all expectations. And they'd say something like "well, there's no chance I could effect change, or we could change norms that way; society is really just a tide you can't resist, we function within it" - but then they see stuff like prediction markets and notice they might help solve the problem of dishonesty without anyone ever having to say "lie", even though they want to use them as an over-complex lie detector in a very literal way.

And in that case all the "well, you can't change society, you have to function in it" concerns disappear, and there's a ton of faith that a bunch of journalists - who have no incentive to use the lie detector, because virtually everyone with a master's-or-higher education thinks it's gauche to discourage falsehood - will suddenly use the machine anyway, against their own best interests. When really they can just ignore it forever, go on lying, and nobody who wouldn't have been confrontational enough to call them on it then will call them on it now.

And then you spread that kind of thing out, and you find out it's a group of people who are really, really proud of their ability to seem reasonable in forums and who use the whole thing for virtually nothing else. They all still would have been software engineers, would have had essentially the same (or fewer) social problems, etc. Would have voted the same way, been influenced by the same social factors.

And sometimes that ability to talk really well in forums itself does cause benefits - say, a bunch of start-up tribe people chucking money at them, which then gets donated, which is probably great. And maybe the way that money gets invested is different than how any other "current movement the rich read to feel like they are keyed into thought leaders before going on and continuing to act pretty much the same way" group would have invested it, but maybe it's not; hard to say.

All that to say, there are dozens of things like that wrapped up in rationality. And when someone on Twitter is bashing on rationality, it's usually not the reasonable "Sometimes I write stuff down and try to think of it mathematically" bit. It's going to usually be the "these are a bunch of guys who talk a certain way in forums, with very few exceptions accomplish very little, and despite their nothing-but-good-forum-game status act a lot like they are pretty substantially the best thing ever, even though they are mostly just normal software engineers" angle.


>>>"Democritus figured out what matter was made of in 400 BC, and it didn’t help a single person do a single useful thing with matter for the next 2000 years of followup research, and then you got the atomic bomb (I may be skipping over all of chemistry, sorry)."

Not all of chemistry, which was pretty much trial and error (which I would call a use of rationality) in the production of dyes, paints, and ceramics. Trial and error produced heuristics and traditional rules until we had enough data, and isolated concentrations of specific materials, to draw accurate conclusions about those materials.

Rationality depends on an assumption about the quality and quantity of data known about a topic. Using Rationality when the data is sketchy or fragmented won't yield good results, and it's better to fall back on traditional things.


Assuming that the book you are talking about is _Rationality: what it is, why it seems scarce, and why it matters_, I read it late last year. This thread didn't seem to resemble the book very closely, so I dug up the review I wrote at the time.

This book reads like an _apologia_ for the rationalist movement, except that the author doesn't refer to it by that name. He's obviously aware of it, and even cites one or more well-known members. But the book carefully uses "rationality" in place of "rationalism"; I wound up half-convinced Pinker made this change to make it less likely for his readers to find some of the nuttier rationalist pronouncements with a simple Google search.

A good chunk of the book was devoted to presenting heuristics useful for getting better results - nothing unfamiliar to anyone involved with rationalism, but presented in a way that couldn't possibly be seen as either cult-like, or extreme. Weird jargon like "motte and bailey" was omitted, as were grandiose claims about what rationality can do.

But the thing about apologetics is that they are intended to persuade readers: that the thing really isn't so bad; that the people involved are reasonable; that various everyday objections are simply wrong. I'm more familiar with them in a theological context - particularly in the context of attempts to convince educated, high-status Romans that Christians were neither nutters nor uneducated losers, and that their beliefs were compatible with (pagan) philosophy.

And they don't include discussion of cases where the thing being defended is inapplicable, useless, or worse.

IMHO, Pinker only really defends the almost tautologically valid core of rationalism. Not "systematized winning". Not formal study (of anything). And certainly not some bizarre aspiration to reason out everything one does.


“Even in whatever dystopian world they created, people would still use rationality to make cases.”

That is a very optimistic viewpoint; there are plenty of places in the world where people make choices based on superstition, wishful thinking, fatalism, or reading Twitter. An anti-rationalist dystopia would be a place where that sort of thinking is strongly encouraged.


This is a very interesting essay, and well written. A pleasure to read.

I do think you've entirely missed the argument (or more precisely fear) of the "non-rationalist" side of the debate, however. It's not that "rationalism" -- arguendo the construction of conscious logical chains of deduction -- is not as good at discovering the way to become rich and happy as "listening to instinct." It's that conscious reasoning presents the *unique* danger of constructing outright evil. We are not, for example, born with any instinct to round up Jews by the hundreds of thousands and gas them. That's a thing to which people have to arrive by some very sick and twisted process of heavily conscious rationalization, which generally speaking *violates* a whole lot of base instinct and intuition -- which is why it has to be buried under layers and layers of obfuscation, lies, self-deception, wilful ignorance, and, alas, Jesuitical rationalization.

Same with building nuclear weapons and using them on a city full of kids. Same with building a chemical or nuclear power plant, or new airplane model, while knowingly cutting corners on safety in a way that leads to disaster. Same with setting up a secret police or gulag, and saying things like "the death of a million is a statistic." Generally speaking, when we humans construct the largest-scale moral evils, a major enabling aspect is deploying rationalization on an epic scale -- everything from Aryan race science to the theory of jihad and the 50 virgins to an aging KGB thug muttering "Einkreisung!" to justify shelling kindergartens.

So when the "anti-rationalists" are cautioning against relying on a "calculational" kind of conscious thinking, I think *that* is the real bugbear: they see it as all to easily allowing a person (or demographic) to mistake rationalizing for rationality, and rationalize themselves (or all of us) into some dark and evil place. If the worst that could happen with rational argument is that you got obvious nonsense instead of the right answer, that would be one thing, and pretty mild, but history suggests the worst is much worse than that -- that you can talk yourself (and others) into a deeply wrong answer. It seems difficult to "instinctively" do the same, when instinct goes awry it seems more likely to lead to failure, lack of progress, or at worst chaos. It rarely seems to lead to effective and organized evil -- it takes the conscious mind to pull that off.


Rationality, despite your writing that you're not Descartes, appears to just be a rebirth of what rationality always was: the belief that reason is the chief source of knowledge, which is to be generated by and tested against intellectual thought. Of course this sometimes intersects with empirical reality. Even Descartes did empirical experiments to verify his reasoning. But fundamentally it's obsessed with thought and ideas in a way that betrays its common roots. The simulation hypothesis, for example, is something very like Descartes' demon, and would notably be rejected by the more empirical philosophies.

Despite Yudkowsky's claims that rationalism is systematized winning, rationalism seems very focused on thought and very little on praxis. Though weirdly, this rationalist self-reflection doesn't seem to reach to actually engaging in self-reflective philosophy. This leads to a certain fuzzy blindness around what rationalism actually is, in my experience.


Aren't you making this too complicated? To the vast majority of people, "rationality" means being logical, cold, and pragmatic, and justifying every decision down to 3 decimal places, like Spock (or some evil robot). Rationality is explicitly opposed to "humanity", which means going with your gut, having hope when things seem bleakest, and persevering against all odds. That's pretty much it; everything else is just fancy wording.


This is tangential to the point of the piece, but, to my mind, the better rejoinder to Professor Pinker’s tweet would be that Pinker purports to write a book defending rationality. But every single one of his arguments uses rationality—he offers reasons and evidence to justify his conclusions—thus presupposing the conclusion he sets out to defend. If I wrote a book purporting to prove the validity of astrology and every single one of my arguments contained unquestioned astrological premises, my argument in a circle would be no more or less effective.

This critique of Pinker is just as glib as Pinker’s own response, of course, but it is also (presumably—I haven’t read his book) just as accurate.


I'm fairly sure that what jumped to my mind is a species of the first kind of dispute Scott posited.

It says this: rationality (computing) might be useful, but in fact, in very many situations in life, it is confounded by the fact that we don't know the necessary underlying facts; and even more confounded because we *think* that we do know those facts.

In particular, the kinds of facts are things like "what would make me happy" and "what would make her life better". Also lots of complicated things about how the physical world works (think covid vaccines) and how institutions work (the economy); but in particular, people fail to understand themselves or other people in such a fundamental way that it's really pointless trying to use your own expected outcomes as a base for calculation. It would be better to use (fill in your favourite heuristic or combination thereof).

I'm not sure to what extent I believe this. But I do feel like I'm 40 and while I know quite a lot of short-term things about myself (I like beer, I don't like weird flavours of crisps), I'm painfully aware that I don't know an awful lot of stuff (I also like tea, but would altering my current ratio of beer to tea make a +ve/-ve/no difference?). And this applies to... everything, big things and small things. Children, marriage, career, place of residence, etc., etc. So what would I even use as the basis for any "rational" calculations?

That said, I agree with Scott's position at the end of the post. So meh, I dunno.


What does the picture associated with this post mean?


This - “that’s confusing money-making with the study of money-making. These two things might be correlated - I assume knowing things about supply and demand helps when starting a company, and Keynes did in fact make bank - but they’re not exactly the same. Likewise, I don’t think the best superforecasters are always the people with the most insight into rationality - they might be best at truth-seeking, but not necessarily at studying truth-seeking” - seems true. Art critics and food critics are definitely not the best artists or cooks; this probably extends over many domains, and only occasionally are people both (some writers are good critics as well as novelists).


I think that acting rationally isn't necessarily based on logic but on what's valid in a given environment. If the heuristic "buy the brand you know" makes for sufficiently good outcomes for you, then logic isn't required in choosing what kind of detergent you should buy. It's kind of the thesis that Gerd Gigerenzer uses to argue against the (ir)rational biases formalized by Tversky and Kahneman. In short, most of what's deemed irrational (i.e. choosing the option with less gain just to avoid potential losses), as per the definition of Tversky and Kahneman, is actually quite rational if you consider that we're agents in a real world in which rationality doesn't always lead to the best outcomes: rationality would tell you to be an atheist, for instance, because there's no evidence of God. Such a conclusion would get you burned alive at the stake in 14th-century Spain (or wherever and whenever they did this). In that scenario, you'd probably best continue acting "irrationally" to save your skin, find a mate, and propagate your genes.

Also, most debates about rationality - and about anything else, really - are about what values we place higher in the hierarchy of morality. What I think Pinker might be doing (without my having read the book) is simply placing rationality as the crown jewel of human achievement and the supreme moral value we all ought to strive toward. Gardner here seems to respond in kind by saying that respect, religion, and relationships are, well, important too. To me, this kind of discussion is kind of pointless because you can never sufficiently prove that any one of those values (I say values because that's what they ultimately are represented as in the human brain) is better than the other.


Couldn't you just define Rationality as a class of truth-seeking with a strong emphasis on heuristics for actively checking for logical consistency and cognitive biases?

On a separate note, I wouldn't credit Democritus with anything like scientific thinking. His (or rather, Leucippus') contribution was all conjecture and no road to verification/falsifiability. So Democritus goes into the "intuition" box as far as I'm concerned.


> Surely a generic study of truth-seeking would be unbiased between the two, at least until it did the experiments?

Or until it decided which was better by flipping a coin, or by intuition, or by guessing based on what an immediately salient-feeling sample of successful people do, or…


Re communism, wokism, and other bold interventions: Scott Aaronson notes that it is important to notice where an idea lacks guardrails, like "don't just kill (or rob or cancel) people even if you are convinced they are bad." A Chesterton's fence of sorts.


I feel I should note here that while prediction markets may have been devised by means of explicit computation, they themselves are not doing explicit computation; they can't be interrogated about "why", and that's actually one of the big problems a lot of people have with them. They are *precisely* this:

>You’ve been magically gifted the correct answer, but not in a way you can replicate at scale or build upon.

As a separate matter:

>“diamonds are found in areas where deeper crust has been thrust to the surface, which can be recognized by such-and-such features”

Not "deeper crust". Mantle. Diamonds are found in mantle-derived material.


Food for thought in this blog post.

I might very well be dim, but I do not get what argument Scott is driving at in this paragraph. It is the last sentence that puzzles me:

“….Gardner is making the same sort of claim as “wise women do better than Hippocratic doctors”. It’s a potentially true claim, but making it brings you into the realm of science. If someone actually made the wise women claim, lots of people would suggest randomized controlled trials to see if it was true. Gardner isn’t actually recommending this, but he’s adopting the same sort of scientific posture he’d adopt if he was, and Pinker is picking up on this and saying “Aha, but you know who’s scientific? Those Hippocratic doctors! Checkmate!”

…Eh, the Hippocratic doctors were perhaps using "science", sort of, in the Aristotelian (not the modern/RCT) sense. Anyway, they were dead wrong in their theory of the humors & the benefits of bloodletting & everything. So why would Pinker want to hail them/be able to use them as "checkmate" against Gardner? Since, if anything, they confirm Gardner's point/criticism of "science"?

Is there some subtle humor I am missing, or are there mistakes here?


The first part of this post, about heuristics, mirrors the evolution of utilitarianism from Bentham to Mill.

Where Bentham said to always take the action that maximizes utility, Mill refined this in his book “Utilitarianism” to allow rules (heuristics) that are generally true and that generally increase utility.

The result of this is likewise the “happy side effect” that it becomes seemingly almost impossible to argue against utilitarianism, correctly defined, because any proposed heuristic can be subsumed within utilitarianism. (Though it probably is *possible* to argue against at a higher level, if you just don’t care about maximizing worldly happiness or fulfillment at all, and have adopted a heuristic so strongly that you now only care about pleasing the Sun God or whomever, worldly effects be damned.)

But anyway, the connection here raises the question: how much of the rationality debate is really about *utilitarianism*? I think, potentially, a lot of it.


Keynes could advise the government on economic matters and *then* pick the stocks that would go up or down? What a... capitalist genius, yes, let's go with that.


Trying to decide whether relationships and respect are more important than rationality is like, say, deciding whether direct fuel injection is more important than strawberry yogurt. Stupid argument leading to meaningless discussion. Dare I say it? - it's an irrational argument!


Nitpick: on that occasion Ramanujan wasn't solving a previously-unsolvable problem, he was solving a problem (I think in a newspaper column or something) that mathematicians already knew very well how to solve pretty routinely. Lots of mathematicians (including, e.g., me, and I am many levels below Ramanujan's) would look at the problem and very quickly see that the answer would be given by a continued fraction. What was remarkable about Ramanujan in this case is that he got to the _right_ continued fraction instantaneously and without conscious intervention, rather than having to scribble on paper for somewhere between one and twenty minutes to get there.
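(For the curious, and assuming I have the right puzzle in mind - the famous house-number teaser: it asks for house k on a street of n houses such that the numbers below k sum to the numbers above it, which reduces to k² = n(n+1)/2. A brute-force sketch of the solutions Ramanujan's continued fraction generates all at once:)

```python
import math

# House k on a street of n houses, with 1+...+(k-1) == (k+1)+...+n,
# which simplifies to k*k == n*(n+1)/2.
for n in range(2, 1_000_000):
    total = n * (n + 1) // 2
    k = math.isqrt(total)
    if k * k == total:
        print(k, n)  # (6, 8), (35, 49), (204, 288), (1189, 1681), ...
```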


There's probably some definition headroom to be found around "rationality is a concern with / attentiveness to the boundaries of heuristics."

After all most of what we do is the application of heuristics -- even doing matrix multiplication entails choices of computational precision and numerical representation which are fundamentally heuristic. The question is not "can we avoid heuristics" or "can we check our heuristics" (they're not very heuristic if you have to run the computation anyway!) but "how much do we actually know about the heuristics we are using?"
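A two-line Python illustration of how deep that goes: even scalar addition commits to a representation and an evaluation order.

```python
# IEEE-754 addition is not associative, so any summation routine has
# already made a heuristic choice about grouping and precision.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False
print((a + b) + c, a + (b + c))    # 0.6000000000000001 0.6
```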


Minor fact check: Ramanujan was not solving a "previously unsolvable math problem"; he was solving a brain-teaser set as a puzzle in Strand magazine. (And it's Srinivasa, without the 'n'.)


Defining rationality always gets really philosophical and either super vague, subjective or both.

It feels like most arguments on rationality today are rather behavioral, about when and how much we should rely on heuristics.

Would be interesting to instead define what it means to be a "rationalist".

Some ideas:

- Those who spend an **above average** amount of energy forming, challenging, and updating heuristics.

- Those who want to understand things on a conscious level, such that it can be shared and understood by others.

- Those who simply **enjoy** trying to explain every aspect of and decision in life as logically as possible.

To the rationalists on here, what do you identify with the most or how would you define it yourself?


> Democritus figured out what matter was made of in 400 BC.

It has been historically difficult for me to agree with this, especially after reading Eliezer's "Making Beliefs Pay Rent (in Anticipated Experiences)". To quote it, "When you argue a seemingly factual question, always keep in mind which difference of anticipation [of sensory experience] you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network". When they said "everything is made of atoms" in the 17th century, if they were asked "what do you mean", they could answer "well, volumes of gases react in proportions of small integers", and that would be something. But in 400 BC, what did they have? Thought experiments? So when anyone says that Democritus actually knew that the world is made of atoms, I mostly feel confused. Moreover, I don't even agree that a modern person who knows the phrase "the world is made of atoms" necessarily knows that the world is made of atoms, unless they have some idea of what sensory experiences this assertion is connected with.

The best explanation I have for this confusion is that the word "knowledge" is very broad in English - unlike e.g. Greek, where they have episteme, metis, gnosis, prognosis, aletheia, mathema, dogma, doxa, theoria, and so on and so on. So what I think is happening here is that reading rationalist texts has shifted my default understanding of knowledge towards a more sensory-experience-oriented one. So, what I am arguing for is using the default word "knowledge" in an at least somewhat sensory-experience-oriented way; the true solution would be to borrow those Greek terms into rationalist discourse (I have seen this done with Episteme and Metis, in the context of the "Seeing Like a State" review, but not the others). After all, if the rationality movement is about studying knowledge and truth, then having a fine-grained vocabulary for the subject matter would help clear up a lot of confusions.


Rationality is about changing your mind in the face of greater evidence. Anti-rationality is about favouring your community or social connections over being right.

Anti-rationalists make terrible arguments, but they are not necessarily being irrational; it's just that they are not *Rationalists*. They have different priorities.


The actual debate is about whether people who have "rationality" on their banner should be treated as high-status or low-status. (This is a reasonable default for *all* philosophical debates, by the way.)

The problem with heuristics is that it is perhaps more *difficult* for a human to actually follow them if you believe that they are "mere heuristics". You may verbally approve of the idea that "sometimes it is better to follow your instinct than to try making an explicit calculation", and then the real situation comes and you... actually do neither, but instead you use a cached result of some calculation that you or someone else did years ago, when you didn't even have all the knowledge that you have now.

When the right moment comes to use the heuristic, you will most likely fail to notice it, and you will miss the opportunity... unless you already have a habit of using the heuristic all the time, in which case you will use it without noticing anything. But having such a habit (even in situations that don't require it, because how else could it become a habit, right?) - that is what we typically call irrational behavior, isn't it?

So on one hand we verbally approve of using the right heuristics when necessary, and on the other hand we eliminate our habits of using them. When the situation actually comes, we are often caught unaware... and the observers are facepalming, because this is actually a quite predictable outcome.


Is it possible that they are just being moral anti-realists, and saying, “these rationality people seem to think one thing is clearly better than another, and we all know how THAT goes”?

“Systematized winning” sounds a lot like “systematized good”, except we use the word “win” instead of “good” because we all intuit it’s not good to believe good means anything real or concrete. So we have this second-order value system which says, “whatever good is to you, clearly you want to approach it reliably and routinely rather than just at random, and here’s how to do it.”

But it’s still a value system, and we all know how people feel about any group that has the gall to say “we have the correct value system.”


I'm curious why you went with "study of truth seeking" vs "study of winning" in the last section. Are you equivocating between the two? Do you think that if truth seeking and winning were at odds the rational thing would be to pick the first and not the second?


'One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.'

I'm a little bit confused by this. Were you trying to joke, be mean towards your outgroup, and steelman their position all at the same time? That was really stressful for my autistic brain; please don't do it again.

Anyway, I guess a proper representation of the idea in question goes like this:

When people talk about stuff, they always let their unconscious biases slip through, and talks about rationality are no exception. Our own ability to reason is probably a result of the status games of our ancestors. We are doing rationality on compromised hardware and software. The people who developed the ideas of rationality were mostly rich white men. We need to be extremely vigilant in order not to be deceived by whatever related biases have slipped into the discourse. In practice, people often use "rationality" as attire, as in the Newcomb problem: they claim to be rational or skeptical, but actually they are making very stupid mistakes. And such people are usually white men, because the attire of rationality appeals to them more. Be careful; make sure not to fall into this trap.


I don't think you've quite got the Social Justice line of thought. It's not just that white men put their thumb on the scale because they have power, it's that they have a long history of putting their thumb on the scale, so that, by some strange coincidence, their arguments end up proving that they're superior and should be in charge.

I also believe Social Justice is about getting power, so it's very convenient to have a tool for saying "Shut up and stop arguing with me".

There's probably a good topic in how you distinguish between things that have the trappings of rationality vs. actual good arguments.

***

In re respecting tradition: How do you decide when you see that traditions conflict?


My definition:

Rationality is the union of [epistemic rationality] and [instrumental rationality]. Epistemic rationality is about avoiding biases, where a bias is anything that prevents you from forming accurate beliefs out of the information you've received*. Instrumental rationality is about achieving your goals.

* so if e.g. you happen to receive a highly unrepresentative sample by chance and form an inaccurate belief because of that, this is not a bias.


I think the “anti-rationalists” are arguing exactly that some people (“rationalists”) rely too much on explicit reasoning and not enough on heuristics and intuition. Not, of course, that Pinker or anyone else is _categorically_ against intuition/heuristics, but that they’re getting the balance wrong.

I don’t know if they’re taking a strong stand on how you ought to reason about when to use explicit reasoning or not in the meta sense.

I think maybe there’s also a related (or identical?) claim about making category errors in mixing up wise women’s herbs and bloodletting. Like, if you asked someone hundreds of years ago “how should we advance the theory of medicine so we’re good at it hundreds of years from now?”, the answer is “give the bloodletting theorists bodies to practice on and study”; but if the question is “help, my son is sick”, the right answer is “oh, then take him to a wise woman”.


I believe a lot of confusion about rationality comes from mixing up truth and outcome. For instance:

"It's rational to not believe in God - there is completely insufficient evidence, and you shouldn't believe in things like that without evidence." In this case, you improve your chance of being epistemologically correct.

"It's rational to believe in God - the Inquisition will burn anyone who doesn't, and your risk of slipping up is much smaller if you actually believe in God than if you merely pretend." Here the epistemological correctness is _completely_ beside the point, because you're only aiming at the outcome of not being burned at the stake.

And the situations can, of course, co-exist. Before you even start to talk about what's rational, you need to know what you mean by the word (after all, this is the foundation of all of analytical philosophy).


This might point at the problem of what to do if you're surrounded by a lot of people who are smarter than you are. I'm not talking about the viewpoint of people who are pretty smart but know they're not the smartest at various things.

Let's say (to use approximate language) that we're talking about people with IQ below 110 or 100 - someone who knows they're vulnerable to grifters and fast talkers. I think there's a quote from Oliver Wendell Holmes (not an especially stupid person) about not trusting logic, because trusting logic means putting himself at the mercy of anyone who's smarter than he is.


I sense the tension between Rationality as computation and Rationality as winning/truth/being correct.

What's interesting to me is that they are both themselves heuristics. One heuristic says that careful methodical thinking will produce the best results, and points to the Scientific Revolution as proof (even if only on a long term horizon, as Scott mentions). The other heuristic short circuits the method and seeks the goal. I think this second heuristic is more interesting, because it recognizes a potentially fatal flaw in Rationality/rationality, and instead of stopping to figure out how to fix the flaw, it's willing to jump to the actual purpose of being rational - better outcomes.

The tension exists because the underlying philosophies are not compatible. Rationality requires reasons and understanding, so skipping to the best solution without understanding it really breaks the purpose. But, Rationality is not an end unto itself, it's a means to a different end, which is a better outcome. If you can get to a better outcome by reading the bones, then there's a very rational reason to read the bones instead of trying to figure out why something may or may not work.

Related, I think the reason computation breaks down is because the various inputs and factors are unknown and possibly unknowable. I think this flaw may be fatal for Rationality, at least the computational side and in regards to studying people. Judging by the fact that there are Rationalists willing to skip from figuring things out to doing what works, I think they realize the same thing.


When I read things like this, I always think of D. Kahneman and "Thinking, Fast and Slow". We think both ways, and thinking slow is perhaps most useful when we think there is something wrong with the fast mode. Also, thinking slow (rationally) doesn't always get to the answer, because there is all sorts of 'churn' going on in the brain below the conscious level - i.e. you figure out some problem in your sleep and the answer is revealed to you during the morning shower. I don't think that is thinking fast or slow... maybe call it thinking deep.


I think many "rationality" vs. "anti-rationality" arguments are actually philosophical conflicts, clashes of implicit metaphysics. You cannot define and practice rationality a priori from a certain worldview. "Rationalists" are not just people who study explicit reasoning and try to win stuff; they're generally people who share a common belief package: they assume that the world follows universal and legible laws (naturalism), that science can shed light on those laws (scientism), that the world is rationally knowable, that humans can and should determine their destiny through their powers of reasoning (humanism?). It is very hard for us who live within this worldview to understand somebody who carries a totally different package.

Incidentally, these beliefs and values emerged with the first "Rationalist" philosophers in early Modernity and then flourished during the Enlightenment. We know very well that most cultures, for most of their history, did not share this WEIRD way of looking at the world. I believe these thinkers called themselves rationalists, not because they were the first to discover the principle of "we want to be right about stuff and effective in our actions"; but because they were conscious of going against tradition and religion by emphasizing certain beliefs (the world is rationally knowable) and values (humans must carve their own destiny).

Imagine for a minute that you live in a world which is fundamentally unknowable; or perhaps knowing stuff enmeshes you in a veil of Maya that distracts you from True Being; or the world is ruled by a God who likes to punish us for knowing too much; or calculating knowledge de-humanizes you and corrupts your soul; or science is false knowledge, it only gives you the illusion of knowing, and therefore it is worse than ignorance; or the world is fundamentally made of mind-stuff, so if you want to understand it, you need to develop your empathy and explore your emotions... In many of these worlds, the reasonable course of action would be to do more or less the opposite of what people normally dubbed as "rationalists" do; but then you wouldn't necessarily say "I have discovered true rationality; I am an actual rationalist". You might say something like "I follow the Scripture", or "I respect the law of the Elders", or "I live in despairing terror of the Great Old Ones". And besides this, you might also choose to define yourself as "anti-rational" to emphasize your opposition to "rationalists" in values and worldview.

In this framework, anti-rationalists are not rebelling against the generic idea of "we want to be right and effective". They're more or less saying: we reject the specific ways in which you try to be right and effective, because they conflict with our values and worldview. From our point of view, what people who call themselves "rationalists" are doing is one or more of: immoral, counter-productive, pointless.


Woo-hoo, a chance for me to leap in with ill-informed and ignorant opinionating!

Regarding Steven Pinker, I have never read anything of his, and the more I read around/about him, the less inclined I feel to do so (the two history blogs I follow, when they mentioned him, did so in a tone of "oh, *that* chap" and the impression I got was of someone who breezed into a field he did not know much about to make over-confident pronouncements about how things went).

That tweet of his makes me want to slap him in the face with a wet haddock. Because of course, if Gardner were a true sceptic of rationality and did not use the tools of rationality, his critique would have gone along the lines of "Ghoti! Wibble? *sawing noises* *blocks of pure colour* Whee! Gibba-gabbo-goo! *five hours of the shipping forecast* https://www.youtube.com/watch?v=CxHa5KaMBcM"

Honestly, there Pinker reminds me of nothing so much as "Mister Gotcha" in panel four here:

https://knowyourmeme.com/photos/1259257-we-should-improve-society-somewhat

I should hope we all use the tools of rationality (small "r") but I think the problem is that the discussion often veers off to Rationality (capital "R") and that is - well, what exactly is it? The version promulgated by Yudkowsky? A cult of the worship of Bayes?

Scott is correct that nobody, when meeting somebody new who might become a friend, sits down and runs through a fifteen-stage checklist to decide whether or not they like that person. But Pinker and others can come across as very smug, in the "I am the Only Smart Person here" way, when talking about Rationality, and they do sometimes come across as "Well obviously you only make *every single decision* in your life after running a fifteen-stage checklist, otherwise you're one of the clods that paint their backside blue and believe in sky fairies". The Straw Vulcan version of a rationalist, if you like.

Pinker has a valid point that you can't undo something by using that very thing itself. But Gardner has a point that capital-R "Rationality" isn't something with a clear definition that everyone agrees on, and for the majority of decisions we need to make, we rely on other things.

"Fine, but I need fifteen people to bond super-quickly in the midst of very high stress while also maintaining good mental health, also five of them are dating each other and yes I know that’s an odd number it’s a long story, and one of them is secretly a traitor which is universal knowledge but not common knowledge, can you give me a tradition to help with this? “Um, the ancients never ran into that particular problem”."

Very likely they did, as there is nothing new under the sun; human nature has not changed that much, and moderns did not invent sex and romance, much as they might like to think they did.

Get them all to work on a common problem or put them into something like a sports team or a dance team. There's a Chinese light entertainment show which takes 200 dancers from all backgrounds and after several rounds whittles them down to a team of 10 who go from competing against each other for places to being ride-or-die for the team and each other:

https://www.youtube.com/watch?v=Ajh70XflhaY


It's probably worth noting that the sense in which you are using rationality is ahistorical. Rationality used to be a better-defined concept, distinct from other truth-seeking methods such as empiricism: see https://plato.stanford.edu/entries/rationalism-empiricism/

In this conception, the main thing that makes rationalism special is that you try to ground your worldview in pure thought, as opposed to empiricism, where you do messy experiments. Hence the strange obsession with rational numbers, Descartes' idiotic "I think therefore I am", attempts to build perfectly "logical" languages, and "expert systems" as a path to artificial intelligence: these, along with computer science and mathematics, would all fall under traditional "rationality". Biology, chemistry, or "neural networks" as a path to artificial intelligence might fit better under the umbrella of "empiricism".

Today the sharp distinction between rationalism and empiricism is no longer justified: there are fully rational, mathematical models of chaos and randomness, and we have mathematical models of how neural networks might work. We have computer science explanations for why ab initio chemistry is more difficult than simply doing experiments (it comes down to quantum mechanics being difficult to simulate on a classical computer).

Still, there seem to be underlying personality traits that incline some people to think in a more rationalist way and others in a more empirical way (using the old definitions of these words). MIRI, for instance, seems to fall clearly into the traditional rationalism bucket, while DeepMind seems to fall into the traditional empiricism bucket. It's probably worth studying this phenomenon (empirically).


How likely is it that we're secretly arguing about utilitarianism? I would expect the "everything is commensurable" of utilitarianism to be intuitively repulsive to a lot of people, and that feels like at least a decent proxy for where you'd want to stage the battle.


Since you mentioned Democritus - he was actually terrible on this question. His eliminativist reductionism also eliminated the possibility of knowledge or rationality. Perceiving a tree, he would say things like "there isn't a tree, the tree is an illusion, in truth there is only atoms and the void!" He was actively anti-science, anti-analysis, and anti-rationality, since he thought the atomistic nature of the universe made knowledge impossible.

Also, his "atoms" have their closest modern-scientific parallel not in our "atoms", but in biological proteins, which actually do interact mechanically based on their shape in the way he described.


I have read neither Pinker's book nor Gardner's critique, but my general hunches about why "anti-rationalists" are "anti-rationality" are:

1. They don't consider it useful or important to explicitly try to improve one's reasoning skills or ability to win systematically, and/or they don't see that much value in studying cognitive biases, etc.

2. They just have a mistaken implicit model of rationalists as these-people-who-are-all-for-reason-but-neglect-importance-of-respect-emotion-social-relationships-sth-sth and in general have a habit of pointing to the skulls that have already been noticed and updated from.


I think the difference is what each would consider a virtue.

"Rationalists" consider explicit reasoning to be a virtue like courage or kindness. It is an unfortunate reality that one cannot always use explicit reasoning due to time constraints, but in an ideal world there would be enough time for it all the time.

"Anti-Rationalists" consider explicit reasoning as a tool to be used when it is appropriate, but in an ideal world one would never have to do it.


My partner and I had a debate a few months ago on how to tell a dog from a cat. The criterion we settled on was that cats have thick whiskers long enough that, if stretched out straight, the distance between the farthest-apart whisker tips would equal the width of the widest point of their bodies (which they use to determine whether they can fit through holes: if their head fits without their whiskers touching anything, then the rest of their body will too), while dogs don't.

That and the vertical pupils I guess.


“But again, I would be shocked if Pinker or other rationalists actually believed this - if he thought it was a productive use of his time to beat one of those cat/dog recognition AIs with a sledgehammer shouting “Noooooooooo, only use easily legible math that can be summed up in human-comprehensible terms!”

I dunno if Pinker considers it a productive use of his time, but I suspect he still believes that systems based on statistical learning are inferior and cannot understand the problem under study - I'm thinking of his paper from the 90s arguing that neural networks can't learn the past tense of English verbs.


I've seen this framed as two types of knowledge, which I call "evolved" and "epistemic". Evolved knowledge has the advantage of likely being useful at the point it was developed (or else it would be selected against - thus "evolved"), but the disadvantage that it is unlikely to be true and changes slowly. Epistemic knowledge tends to be true, but is often not useful (until, as you point out, it suddenly is).


I think this would benefit from considering sophistry, which I would say is the actual anti-rationality. Everybody knows that common sense, intuition and heuristics can be manipulated to make dangerous or evil deeds seem wise and good. That's what propaganda is. Rationality has a good reputation because critical, logical, naturalistic practices can put your thinking on a solid footing, which helps you resist propaganda.

But once you're reliant on rationality to defend you from propaganda corrupting your heuristics, you're now vulnerable to sophistry, which is partial or corrupted rationality used in the service of propaganda. Logic is vulnerable to fallacies (forgetting to carry the one); critical thought is vulnerable to conspiracy bloat; naturalism is vulnerable to political capture.

To a rationalist on the defensive against sophistry, "rationality extends thus far, but doesn't cover X situation" seems like the thin edge of the wedge: somebody is trying to corrupt you with fallacy or conspiracy or politics. And once they've done that, you're vulnerable to manipulation by propaganda again.


I think you are right that individual rationality is kind of meaningless, and the real rationality is the social or institutional rationality of, for example, Robin Hanson’s “Rationality as Rules” (https://www.overcomingbias.com/2022/02/rationality-as-rules.html), or Jonathan Rauch’s “Constitution of Knowledge”, or Arnold Kling’s “social epistemology” (https://www.arnoldkling.com/blog/epistemology-as-a-social-process/) or “institutional rationality”, or the scientific method, which is mostly not an individual process but a set of norms for communal inquiry by which we build on others’ learning.


Comments:

1. "One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

When I was a young cat, someone told me that logic and rationality were invented by cishet white males as a tool to oppress minorities and women.

Ignoring for now the basic idea that the point of logic and rationality is that they work the same for everyone (which makes them an unreliable tool of patriarchal oppression), what's to stop white penis people from using irrational and illogical arguments? Most humans of whatever gender or color do so several times a day.

2. "“Intuition” is a mystical-sounding word. Someone asks “How did you know to rush your son to the hospital when he looked completely well and said he felt fine?” “Oh, intuition”. Instead, think of intuition as how you tell a dog from a cat. If you try to explain it logically - “dogs are bigger than cats”, “dogs have floppy ears and cats have pointy ones” - I can easily show you a dog/cat pairing that violates the rule, and you will still easily tell the dog from the cat."

Didn't Diogenes cheese off Plato that way? Plato was in his Academy, defining "man" as a featherless biped.

Diogenes returned with a plucked chicken and shouted "behold, Plato's man!"


"even though I bet all four of these people enjoy winning"

Do they though? Newcomb's problem highlights that a surprising number of philosophers don't enjoy winning, or enjoy some non-winning activity more than winning. Philosophers are all weirdos unrepresentative of the general population, but so is everyone involved in this argument. A goal other than winning seems like the simplest explanation of the people you describe as wanting to do whatever favours black women the most.


"It still feels like there’s something that Pinker and Yudkowsky are more in favor of than Howard Gardner and Ayatollah Khameini, even though I bet all four of these people enjoy winning"

I don't know any of these people personally, and coming to it as an ignoramus, my interpretation of that is "Wow, this Gardner guy is on the same side as Khameini? He's like an Ayatollah? Well then of course Pinker is The Good Guy here!"

But being an ignoramus, I then go "Well, I kinda feel that may be unfair". I mean, if you read a sentence along the lines of "X and Y on one side, Pinker and Pol Pot on the other" wouldn't *you* feel that was a sneaky way of saying "Pinker is Bad Guy! Loves and supports wrong thinking!" (even though it might just be "how to cook rice" or something, instead of "genociding your own population is a great way to achieve your ends").

So, looking up Gardner, I see he's a developmental psychologist and proponent of the "multiple intelligences" model.

And that makes me (an ignoramus) think this is really a beef between "Only one high, holy and sacred measure of IQ (and that's how well you do on mathematical reasoning)" versus "There isn't just IQ, there are other forms as well".

While it may be correct to think that "being a really talented athlete is not the same thing as being a Harvard psychology professor", I think "and the other Harvard psychology professor in this exchange of views is on the same page as a guy who thinks women should not ride bicycles" is a teeny bit unfair.


I feel compelled for a moment to be pedantic about the final paragraph:

Economics is not the study of money-making.

Economics is the study of human decision-making under conditions of scarcity.


I'm a game theorist in the social sciences, and we confront variants on this question all the time. The problem, of course, is that any behavior can be "rational" (in the game theoretic sense) if you assume the appropriate preferences, so I can always jam religion or whatever into a model by saying "well just assume massive negative utility for eating milk and meat together" or some such. In that case, there's nothing distinctive about rational choice theories.

In practice, what separates rationalist and non-rationalist theories of human behavior (and I know this isn't exactly what you're addressing above) is that rationalist theories attempt to explain behaviors in terms of a relatively small number of primitives. You posit that people have a limited number of underlying goals and then somehow optimize in pursuit of them.

What sets many non-rationalist accounts apart is that you can only reproduce them in a rationalist framework by assuming many primitives (especially if these arise via following evidently arbitrary rules). There's not a unifying framework that you can use to derive Leviticus from a few primitives; the only way to do so is to posit hundreds of separate, independent rules (albeit joined up with the broader "do what God says" or similar).

If we turn that around in terms of the rationality movement, I think the same basic thing holds true. If you're trying to pursue some smallish set of underlying goals and you can define the utility of a given outcome in terms of that set of primitives, then you're on the rationalist pathway. And from there you can start optimizing (or satisficing). You'll use plenty of heuristics for the reasons expressed in the post, but you have a way of knowing whether or not they work in terms of the primitives.

In contrast, if you're engaged in irrational decision making (such as blind rulebook following), you have no primitives to fall back on. You can't say that keeping kosher is "working" or "not working" in terms of something else. It just is. The rules are an end in themselves, and so you can't be rational. There's nothing to optimize. You're just trying to follow rules/traditions/whatever. Your heuristics aren't shortcuts to something. They are the destination.
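
If it helps to make the contrast concrete, here is a minimal sketch (entirely my own toy example; the foods, attributes, and weights are hypothetical): the rationalist-style model derives a choice from two primitives via a utility function, while the rule-following model is a flat lookup with nothing underneath to optimize.

```python
# Toy contrast (hypothetical names and numbers throughout).
FOODS = {
    "cheeseburger": {"sweet": 0.1, "healthy": 0.2},
    "fruit salad":  {"sweet": 0.7, "healthy": 0.9},
}

def utility(food, taste_for_sweet=0.4, health_concern=0.6):
    """Rationalist-style: a small set of primitives makes options commensurable."""
    attrs = FOODS[food]
    return taste_for_sweet * attrs["sweet"] + health_concern * attrs["healthy"]

def rational_choice():
    return max(FOODS, key=utility)

# Rule-following: a flat rulebook (two entries standing in for hundreds),
# where each rule is an end in itself rather than a shortcut to something.
RULES = {"cheeseburger": "forbidden", "fruit salad": "permitted"}

def rule_following_choice():
    return next(f for f in FOODS if RULES[f] == "permitted")

print(rational_choice())        # "fruit salad", because it maximizes utility
print(rule_following_choice())  # "fruit salad", because the rulebook says so
```

Both pick the same food here, but only the first model can say whether a given heuristic is "working" in terms of something more basic.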


It is hard to argue with anyone who argues just to be argumentative. And reasons are often just assertions, not reasonable in the least. Does not reason imply rationality? But questions do not necessarily imply that the questioner really seeks an answer. If one honestly wants an answer, his question is the second step in finding the answer, and might very well lead to more, or more specific, questions long before he has found the answer. Seek and you shall find. Ask and you shall receive. But often, when you ask for something specific, say bread, you may receive a stone. But if you request wisdom or knowledge you are more likely to receive them. For both are to be found in a world that is built on such principles. And our minds, our reason, are built on the same rational, wise principles as everything else we observe. Although both the mind and the world are sometimes devious in asking and giving.


Thanks a lot for the nice post, Scott. While I like what you say as a proposal for what it means to be a Rationalist, I can’t help but feel like Gardner opposes something different.



A theme in the original Star Trek series was that Spock was super-duper smart and rational, but Kirk and McCoy would try to get him to be more “human,” making some choices that “weren’t rational” but “were right.” For example, in the fourth Star Trek movie it’s a big “human” moment when Spock agrees the Enterprise should stay behind and rescue people, against tough odds, instead of fleeing to save the ship. I feel like Gardner — and many people I know — have this sort of perspective on “Rational” versus “Right.” They applaud Spock here for “realizing being rational isn’t always the right thing to do.”

I disagree with this framing, and I think the right argument against it will be more basic than your nuanced take. I.e. it won’t need to say “Well when Spock promotes rationality, he means thinking about how to think.” It’ll clear up some basic confusion, like “Rationality doesn’t mean you can’t listen to your emotions and how much you care about people.”


> One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points...

OK, this is unfortunate. High-IQ white male speaking, let me try to unpack this a bit better. The knock here is that many people who claim the mantle of rationality are not in fact rational at all, they just like to dress up their own preferences, biases, and beliefs as "logical."

You might right now be racing to object -- correctly -- that this isn't a knock on rationality itself, this is a knock on the misappropriation of rationality. But that's the point that (I think) many so-called anti-rationalists are making. People can rationalize just about anything, and personally I have seen very little evidence that even the self-described rationalist community is much more than an affinity group for people with certain interests and political beliefs. To be clear, I don't think there's anything wrong with such an affinity group; I personally, as a high-IQ white male, share many of those interests and political beliefs. But when we start dubbing those affinities as "rational," with all that implies about people with different affinities, well, as the kids say, things get problematic.

To me, a truly rationalist approach to life would involve massively more epistemic humility than most people can muster. I think the confidence intervals on our beliefs -- including such articles of iron-clad faith as "communism is wrong and terrible" -- are much wider than we think. We then end up in a situation like the one William MacAskill explores at length in Moral Uncertainty.

Again, to be clear, small-r rationality is still core to the enterprise here, although certainly it is worth asking to what extent in practice moral behavior rests on rational calculation vs. emotion. But I don't blame anyone for being suspicious of whatever is being smuggled in under the name of big-R rationality.


I think heuristics do arise out of rational underpinnings, and the problem with trying to define them is akin to what Chesterton describes below (in his book "Orthodoxy", which makes his case for how and why he believes in Christianity):

"It is very hard for a man to defend anything of which he is entirely convinced. It is comparatively easy when he is only partially convinced. He is partially convinced because he has found this or that proof of the thing, and he can expound it. But a man is not really convinced of a philosophic theory when he finds that something proves it. He is only really convinced when he finds that everything proves it. And the more converging reasons he finds pointing to this conviction, the more bewildered he is if asked suddenly to sum them up. Thus, if one asked an ordinary intelligent man, on the spur of the moment, “Why do you prefer civilization to savagery?” he would look wildly round at object after object, and would only be able to answer vaguely, “Why, there is that bookcase . . . and the coals in the coal-scuttle . . . and pianos . . . and policemen.” The whole case for civilization is that the case for it is complex. It has done so many things. But that very multiplicity of proof which ought to make reply overwhelming makes reply impossible."

Heuristics versus Rationality is the Rationalist (like Pinker) going "I have this lovely neat equation, what do *you* have?" and the Heuristician (is that a term?) looking around and going "Uh, well, there's the coal scuttle? And the table?"

It's then very easy for the Rationalist to laugh kindly at the Heuristics guy, but that laughter is misplaced. That quoted tweet does remind me of what Chesterton said about Matthew Arnold in "The Victorian Age in Literature":

"But Arnold kept a smile of heart-broken forbearance, as of the teacher in an idiot school, that was enormously insulting. One trick he often tried with success. If his opponent had said something foolish, like “the destiny of England is in the great heart of England,” Arnold would repeat the phrase again and again until it looked more foolish than it really was. Thus he recurs again and again to “the British College of Health in the New Road” till the reader wants to rush out and burn the place down. Arnold’s great error was that he sometimes thus wearied us of his own phrases, as well as of his enemies’."


People argue preferences and assumptions, not "rationality".

"Given the choice, I will choose vanilla ice cream."

Can't use rationality for ice cream flavors? Why assume you can do it for human flourishing?

"Given the choice, I will choose 1 year of pleasure for 100 people."

"Given the choice, I will choose 100 years of pleasure for 1 person."

"Given the choice, I will choose a million years of misery for humanity"

"Given the choice, I will choose 100 years of pleasure for humanity."

Now let's use reason to decide on the goal of our society. What are the trade-offs?

"How many years, making How many pleasure chemicals, among How many brains, of How many different Types?"


(p and not-p) implies q - the rest is commentary.


The last line of this post reminded me of a hilarious article they made me read in officer training school. In it, a guy who spent his career studying rational decision-making just flatly claimed, without any attempt at justification, that the best way to improve your decision-making was to study rational decision-making.

It was so funny on so many different levels.


I think the popular objections to rationalism are based on a model that defines rationality as "putting more faith in your own judgments than in social judgments."

E.g.: a rationalist with no experience or knowledge in medicine might look at the information that's been provided by authorities about vaccines and say "huh, I do not have enough information about this subject to make a coherent judgment on it. Therefore I will choose to rely on authority and get vaccinated."

Meanwhile, everyone else is going "you have to get vaccinated, doctors agree it's a good idea and if you don't people will mock you/be mad at you."

Those have the same practical endpoints but they're very different thought processes. One says "given my low confidence, it is my judgment that outsourcing this decision to authority is the correct move" and the other says "I don't care what I think is right, I am surrendering to the collective will." The first person could change their mind, the second could not.

When people talk about "straight white males" and reasoning, what they're saying isn't that "straight white males can put their thumbs on the computational scales and make other people think their power is good." They're saying "straight white males <something pseudo-psychological> and therefore they think they know everything. They don't understand that they should surrender their ability to reason to the collective will, especially since their motivated reasoning is likely to do evil because of <something pseudo-sociological>."

While I mostly see this philosophy as the Great Satan, I can steel-man it pretty easily: Even smart people are good at convincing themselves of false things that make them feel better. A ton of research indicates that we make choices for reasons that are sub-rational and then just make up a plausible-sounding reason why that choice was the right one. Even the act of saying "I am going to surrender to the popular will" on a question gives you the opportunity to motivatedly refuse to surrender to the popular will. You must pre-bind yourself to that surrender or it won't happen effectively, and then you'll use your straight white maleness to do evil, thinking it's good.


One could empirically study the study of study by randomizing 3000 forecasters to read Yudkowsky, Tetlock, or Pinker, and then seeing which group makes the most money forecasting.

I think I already know the result would probably be Tetlock > Yudkowsky > Pinker. Tetlock's work is the most narrowly focused on winning at prediction markets. All the epistemic rationality in Yudkowsky could in theory be useful for winning in prediction markets, but might be harder to apply and require more inferential steps.
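
A minimal sketch of what that randomization could look like, with placeholder numbers (the profit function below is pure noise standing in for real market returns, so the printed means show only the shape of the analysis, not a result):

```python
import random
import statistics

random.seed(0)
forecasters = list(range(3000))
random.shuffle(forecasters)

# Randomized assignment to the three readings.
groups = {
    "Tetlock":   forecasters[:1000],
    "Yudkowsky": forecasters[1000:2000],
    "Pinker":    forecasters[2000:],
}

def market_profit(forecaster_id):
    """Placeholder: random noise where observed forecasting profit would go."""
    return random.gauss(0, 1)

for reading, members in groups.items():
    mean = statistics.mean(market_profit(f) for f in members)
    print(f"{reading}: mean profit {mean:+.3f}")
```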

There's a level between the diamond-knack guy and the geologist, which is where the diamond-knack guy writes a book of aphorisms about how to find diamonds. He makes his knack transmissible without placing it in the context of a general theory. Diamond-knack guy's book is probably more useful for finding diamonds than a general intro geology textbook. But the latter contains a lot of important things you won't learn from the former.


As an internet phenomenon and locus of discussion, rationalism (to me) seemed mainly concerned with pointing out how cognitive biases, motivated reasoning and complex social incentives get in the way of our stated objectives. Your article on "the toxoplasma of rage" has some good illustrations (https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/)

1) "PETA doesn’t shoot themselves in the foot because they’re stupid. They shoot themselves in the foot because they’re traveling up an incentive gradient that rewards them for doing so, even if it destroys their credibility."

2) "If campaigners against police brutality and racism were extremely responsible, and stuck to perfectly settled cases like Eric Garner, everybody would agree with them but nobody would talk about it. If instead they bring up a very controversial case like Michael Brown, everybody will talk about it, but they will catalyze their own opposition and make people start supporting the police more just to spite them. More foot-shooting."

At its core, I think we can safely define rationalism as a kind of applied epistemology concerned with revealing socially-transmitted cognitive errors and proposing tools to overcome them (Bayesian reasoning, registered predictions, ...).
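
As a toy illustration of the "tools" half of that definition (the numbers here are my own hypothetical ones): a single Bayesian update showing how, if controversial false claims spread more readily than settled true ones, a story's virality is weak evidence against it, toxoplasma-style.

```python
def posterior(prior, p_viral_if_true, p_viral_if_false):
    """Single application of Bayes' rule."""
    numerator = prior * p_viral_if_true
    return numerator / (numerator + (1 - prior) * p_viral_if_false)

# Assumed likelihoods: false-but-controversial claims go viral twice as often.
p = posterior(prior=0.5, p_viral_if_true=0.2, p_viral_if_false=0.4)
print(round(p, 3))  # 0.333: credence in the claim drops after it goes viral
```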


Rationalism may or may not have anything to do with human capacity but everything to do with the accessibility or understanding of nature. Is the world comprehensible or do dragon-beasts need to be invoked to cover certain problems?

The author here is incompetent and has confused "heuristic versus computation" with "prediction versus accommodation" respectively. There is an enormous amount of writing on this topic, which makes the author's choice to discuss Keynes ironic, since it largely began with Keynes writing the following lines:

" If a hypothesis is proposed a priori, this commonly means that there is some ground for it, arising out of our previous knowledge, apart from the purely inductive ground, and if such is the case the hypothesis is clearly stronger than one which reposes on inductive grounds only. But if it is merely a guess, the lucky fact of its preceding some or all of the cases which verify it adds nothing whatever to its value. It is the union of prior knowledge, with the inductive grounds which arise out of the immediate instances, that lends weight to any hypothesis, and not the occasion on which the hypothesis is first proposed."


This post -- and really the project of this entire blog -- is an example of Missing the Point due to hubris, and probably also from an essentialized, chronic fear of losing control.

The pattern is to seek comfort and confirmation by relying on the active mind to create a small universe that provides a sense of power and security -- typically by setting up an illusory polarity, in this case between "heuristics" and "rationality" (as well as between two writers). This performance may be amusing, and the results of the game may feel nice and tidy, but it takes place within a finite playground -- which is not necessarily a problem that would prompt an outsider like me to comment, except that the participants of such a game cling so vehemently to their forgetfulness that the borders of their playground are not the actual borders of the universe.

Especially when people are so intellectually capable as many are here, and when they so zealously team up to tighten the shared finger-trap of their creative architecture, it gets easier and easier for them to bracket all unknowns as abstractions to be casually dismissed -- even though these abstractions impact real lives. In effect, libertarian-style rationality is particularly appealing to those who generally think they've figured out how to "win" because of their tidy little constructs and structures that allow the superego to view itself expansively, as if concerned about the benefit of all, while actually remaining effectively blind to their true nature, which is fearful, contractive and essentially self-oriented.

So the problem has nothing to do with rationality. Rationality is just a tool, like a triangle is a strong shape. The problem is the defensive crusade to convince as many people as possible that this cozy little abstract world should be preserved at the cost of long-overdue humanistic expansion.

Doctors in today's libertarian world, stuck inside "rational" markets for care delivery (since libertarians so enjoy shooting down the possibility of any new system that might not conform to market-oriented heuristics), have become predominantly statisticians. A rationalist might praise this as pragmatic, knowing that oddball patients will fall through the cracks, but content that a statistically efficient system will save the most people. But this would be a total failure of imagination, as the rationalist is left with scant emotional drive for asking questions that have nothing to do with statistics, such as whether sick people with oddball diseases or syndromes might actually have much to contribute to the world that is original and expansive.

Remember, in classical philosophical terms, the idea of intuition is a much deeper caveat to awareness than presented here. Intuition alludes to the shared recognition among all self-conscious beings that everything in our minds and senses could be illusory. Or semi-illusory. So to say something is a dog -- a separate entity in a fundamentally material world -- is an entirely intuitive statement. Dismiss this as solipsism if you will, but those who ignore this condition practice disruption willfully ignorant of the life that flows beyond their small playgrounds. To admit to the true scope of one's intuition is an example of root-level humility, which is lacking around here. Because it's more comforting to simply scoff at anyone who remains agnostic about the dog.


> You can’t find the best economist by asking Keynes, Hayek, and Marx to all found companies and see which makes the most profit - that’s confusing money-making with the study of money-making.

A related passage from Xunzi:

> The proper classes of things are not of two kinds. Hence, the person with understanding picks the one right object and pursues it single-mindedly. The farmer is expert in regard to the fields, but cannot be made Overseer of Fields. The merchant is expert in regard to the markets, but cannot be made Overseer of Merchants. The craftsman is expert in regard to vessels, but cannot be made Overseer of Vessels. There is a person who is incapable of any of their three skills, but who can be put in charge of any of these offices, namely the one who is expert in regard to the Way, not the one who is expert in regard to things.


Re. "But I recently reviewed the discourse around Ajeya Cotra’s report on AI timelines, and even though everyone involved is a math genius playing around with a super complex model, their arguments tended to sound like “It still just doesn’t feel like you’re accounting for the possibility of a paradigm shift enough” or “I feel like the fact that your model fails at X is more important than that my model fails at Y, because X seems more like the kind of problem we want to extrapolate this to.” ":

Another example: Around 1990, the people in AI trying to develop general-purpose intelligent agents split into 2 groups: symbolic AI vs. reactive behavior robotics. (There was also a big fight at that time between symbolic AI and neural networks, but that was mostly about classifiers.)

But there was little if any debating in the debate. It was obvious from the start that symbolic AI was better at playing chess, and reactive AI was better at not running into walls. Mostly people just argued over which kind of problem was more-important: playing chess, or not bumping into walls. The most-cited paper from that "debate" was probably Rodney Brooks' 1990 article "Elephants Don't Play Chess", which you don't really have to read once you've read the title.


The more calculus we do to derive derivatives further and further from reality and lived experience, the more degrees of freedom we allow ourselves… and we risk making up huge philosophical systems of incredible coherency, clear arguments, and total harmony of concepts which have… zero purpose beyond being thought baubles. A child running away in a tantrum proclaiming they were really right before slamming their bedroom door and flicking the light switch on and off 8 times.

The abstractions to ‘higher’ order thinking can make the higher order thinker feel better and like they’re doing the more important thing that very few people are capable of doing. Which makes them special and part of that small smarter group of people dragging humanity kicking and screaming through history with all the progress due to them.

But this is silly, and it has certainly been a group effort with many ways of participating. Another model might be to have the useless philosopher-king of self-rationalisation at ‘the top’, with some scientists below him trying to connect ideas to reality, then below them some engineers actually making things in reality, and below them all the unwashed masses of idiots who are the final arbiters of utility, in terms of whether they find any value in whatever the worker made, which the engineer designed, which the scientist made models of, and which the philosopher-rationalist umm… vaguely thought about while everyone else was doing it?

I’m not sure if that part matters, but we can point out a handful of paradigm shifts…which were a reflection of changes in engineered reality or biological reality more often than they were brilliant thought designs of historical rationalists.

It can be difficult to go from rationality to performing science, as the answer in science is to try out everything and see what works. It can be difficult to turn an idea or model of how things work into an actual product when translating science to engineering. And it can be harder still to figure out what people need versus what it is possible to build, hence all the very different things that get made at great effort and cost only to be ignored or abandoned.

Each layer of translation and performance across and within these abstracted layers has value. I think a big trap is going ‘up’ this ladder and proclaiming higher is better or that my part in it is better.

Oddly, one would expect the rationalist philosopher to see this the most, but it is often the opposite, with arrogance growing as you go ‘higher’ in these abstractions away from reality. The more socially powerful tool can be humble observation and use of what is made, where one can find a simpler appreciation for what everyone involved does to make it possible to get a smartphone into their hands. Often we see people using their degrees of freedom to big up themselves, since those degrees allow more choices and are less fixated on reality.

And if you might think to say that I’m privileging base reality and human experience as the core measurement here, then I’d point you back to the thought-baubles comment at the start of my post. Going from ideas about ideas, to ideas about reality, to doing things in reality, to making things happen, and finally to using things in your life… is a difficult path, and whoever is good at one or some of those steps is rare; whoever can do all of them is exceedingly rare.

We risk the mistake of the rationalist up high, who is also a human user of things, thinking to himself that he can do every step in between! I can science, I can engineer, I can build… but just because you can think and use things doesn’t imply much about being able to distill the infinite thought space down to the broad space of using things to help you survive.


I feel like the distinction between doing rationality and studying rationality has a tidy analog in sports. When you practice a sport, you tend to do things *very* methodically. There's a "right" way to do everything, and you practice as slowly and in as small chunks as you need to in order to become proficient. When you compete, however, you just do stuff. Of course you do your best to implement all of the techniques you've been practicing, but when you're going against an opponent, you're going to do a lot of things in ways that are technically suboptimal, because you have to prioritize speed, or because you're off balance, or because you need to answer something your opponent has done. If you were to slow down and employ "perfect" technique, you'd lose terribly. Even in golf, which is not fast or violent and doesn't have an opponent interfering, coaches will often tell someone that they need to "play golf, not golf swing" when that person is thinking too much about technique during a round.

I think it's rather interesting how unremarkable it is that sports are like this, that *obviously* you don't compete the way you practice, and that might give us some useful information about the use of rationality.


From the perspective of a philosophy student, the rationalist community uses "rationality" in a really weird way. As far as I can tell, most people here either use "rational" to mean "good" (with all the vagueness of ordinary-language deployments of "good") or they use "rational" to mean "employing one of the methods from our bulleted list of rational methods". Treating rationality as a study of studies has some intuitive appeal, but still, weird. There's hardly a trace of the Western philosophical orthodoxy that rationality is a basic cognitive capacity to step back from, evaluate, and adjust our goals and beliefs so that they are consistent with each other and the world. People who are skilled at reaching consistency, realizing the aim of rationality, we call *wise*. Wisdom can involve intuition or explicit methodologies; any strategy is admissible, as long as it works. If we use "rationality" to refer only to the publicly shareable strategies, then what do we call the game in which these are strategies?


Scott’s characterization of Gardner’s reasoning is spot-on: for most of the text, Gardner doesn’t address any of Pinker’s arguments. The only actual ‘criticism’ comes at the end, with:

“The best chance for our planet is for us to be able to intertwine these Axial strains of thought and feeling. As Pinker would presumably agree, they entail reason—but respect, relations, and religions as well… and I would place the emphasis more on the latter three”.

Basically, Gardner doesn’t even have a problem with any of Rationality’s main propositions, but with its emphasis, something like:

‘Sure, rationality is important, but at the end of the day what decides who does evil or not isn’t how well they solve the Monty Hall problem, but whether they have the other RE’s. They could be as rational as they want, and still be autocrat nazis/commies who don’t respect people or religion and do atrocities’

I might be going on a complete detour here, as Scott’s piece was focused more on the rationality vs. anti-rationality aspects of the debate, which is itself clearly important and interesting. But there’s a missing thread here, if you squint hard while reading: namely, that Pinker wrote a book about rationality, arguing that it is good and the world would be better off with more of it, while Gardner is saying the expected value in terms of world quality would be higher if one were to focus on these other factors of respect, relations, and religion.

Which is the kind of general argument that can be overapplied conveniently. Russia and Ukraine? That’s just Putin not being able to respect Ukraine’s history and its people. Major conflicts in the Middle East? That’s a lot of religion. Racial profiling and police violence? If only we had more diverse relations.

But at its core, the claim is that of two groups of people, one whose focus was “maximize rationality” and another whose focus was “maximize the other three RE’s (in the very peculiar way Gardner describes them)”, the second group would end up better off. That prediction seems sufficiently non-trivial to merit treatment as a valid criticism.

Minor observations:

My use of religion in Gardner’s argument is somewhat misleading; he uses it in a different way, and emphasizes that ideally religions should be:

“personal belief systems ... not used to cudgel, pressure, or—indeed—make war on others”

Which is weird, because A) religion is a belief system shared with others, and B) it usually is used to cudgel, pressure, and make wars; Gardner takes these as clearly bad aspects, whereas they might be its driving features.


I hate to be a Debbie Downer, but I think the whole rationalist enterprise is a utopian pipe dream. My a-rational reasoning goes thus. It seems to me that 99+% of our mental activity is done without formal reasoning (and in many of us it's done without words). Yes we can use the tools of rationality to systematically observe the world through our sensorium, create hypotheses, and then test those hypotheses, but we risk being undermined by our pattern recognition systems, by our cultural prejudices, and by our instinctive reactions. Eventually, we *may* stumble across truths, but ultimately all conclusions drawn through rationality must remain provisional at best. <says I waiting for the screams of outrage>

I've enjoyed immensely the discussions I've had on Astral Codex, but I'd have to say I haven't seen much rational argument on this list (Scott's postings aside). My question is this, if we're all invested in rationality, why aren't we using it more? My provisional answer is that most of us aren't even aware of when we're arguing a-rationally.


I think that most people who attack rationalism are attacking one of two overlapping things.

Firstly, and most commonly, they may be attacking rationality in the sense of "the ways rationalists (i.e. people who think that Scott Alexander is the rightful caliph) think". That includes a lot of stuff that doesn't really line up with what rationalists think of as "rationality"; I think it contains a lot of stuff that should be attacked – notably, knee-jerk hostility to social justice politics that I think often goes well beyond what is justified – but describing those things as “rationality” rather than as “regrettable quirks of the current aspiring rationalist movement” ought to be avoided.

Secondly, and more interestingly, I think they may be attacking using your near view too much and your far view too little.

Suppose I have to answer a hard object-level question. There are two things I can do:

:- I can try and work out the answer for myself (“near view”),

:- I can substitute the question “what is the balance of other people's opinions, weighted by expertise, on X?” for “what is the right answer to X?” (“far view”).

Far view has the disadvantage that the question I answer may have a different answer from the question I'm interested in, but it often has the advantage that I'm much more likely to work out the correct answer to it.

Given the choice between trying to measure the flip of a really-eroded coin where I'll mistake heads for tails 40% of the time by looking at that coin, and trying to measure it by looking at a coin that comes down the same way 80% of the time and where I can tell which face I'm looking at 90% of the time, I'm much better off with the proxy.
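
Working that arithmetic through (my own check on the comment's numbers): the direct reading is right 60% of the time, while the proxy route is right whenever the proxy matches the target and is read correctly, or mismatches it and is misread.

```python
# Direct reading of the eroded coin: 40% chance of mistaking heads for tails.
p_direct = 1 - 0.40                                        # 0.60

# Proxy coin: matches the eroded coin 80% of the time, read correctly 90%.
p_match, p_read = 0.80, 0.90
p_proxy = p_match * p_read + (1 - p_match) * (1 - p_read)  # 0.72 + 0.02 = 0.74

print(p_direct, round(p_proxy, 2))  # 0.6 0.74 -> the proxy really is better
```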

To me one of the striking things about the rationalist community is that a lot of the philosophising emphasises far view over near view (in my opinion wisely), but the actual discourse of rationalists is noticeably more near-view-centric than Blue Tribe discourse - “far view” isn't quite the same as “trust the experts”, but it's pretty close to it, and that's a value that the Blue Tribe generally subscribe to and the Grey Tribe generally scorn.

The most valuable post on rationality I've seen was https://thingofthings.wordpress.com/2015/10/30/the-world-is-mad/, by Ozy, making the point that when you've realised that everyone else is essentially incapable of consistent rational thought*, there are two obvious responses – one is “therefore if I can become capable of, or even better at, consistent rational thought, I can win big”, and the other is “and therefore I should assume that I am also incapable of consistent rational thought, and be accordingly cautious”. I'm firmly in the second camp – I think that as a way of being confidently right about important questions more often rationalism has limited value, but as a way of being confidently wrong about important questions less often it's invaluable.

*Many people are capable of thinking fairly rationally much of the time. No-one is capable of avoiding being glaringly irrational on occasion. Pretty much no-one is capable of telling when those occasions are for themselves, although it's often quite obvious when it happens to other people.


Ok but here's the real question: why is Newcomb's paradox framed as if you're making a choice? Doesn't the problem imply that you exist in a deterministic universe, and therefore that you will just do whatever it's already been determined you will do?


I'm not sure this is relevant, but I had a girlfriend who always used emotional responses to my logical assertions. One day I decided to start off with an emotional assertion, and she countered with logic. After a moment of stunned silence I asked her "When did you start using logic?", and her response was "I'll use whatever it takes to get my own way!"


The assumption that one-boxing in Newcomb's problem wins has always bothered me. It straddles the line between being an assumption you can't question (because then you'd be fighting the hypothetical) and being a deduction presumed to arise from the structure of the problem. But if you look closely at the problem as written, it is not at all clear that one-boxing wins (btw, it's also not clear that two-boxing wins, although that does seem more likely in general). You would need a bunch of extra unstated assumptions to make that deducible, but if you point that out then you're back to fighting the hypothetical.

But you have to fight the hypothetical, because if "one-boxing wins" is just allowed as an assumption, then the thought experiment has no more relevance to decision theory than saying, "I'm going to roll a fair die. You can bet on 'six' or 'not six'. By assumption, betting on six wins. So make sure your decision theory can properly handle this case." It just becomes a ridiculous non-sequitur.

As soon as you make explicit the assumptions that allow you to deduce that one-boxing wins, you notice how limited the scope of Newcomb's paradox actually is. For practical purposes it will almost never happen, and almost all real-life (and even most hypothetical) situations that look like a 'one-boxing is optimum' Newcomb's problem really aren't. (They may be Newcomb-like, like the prisoner's dilemma, but I'm talking specifically about the Newcomb's problem Eliezer outlined in 'Newcomb's Problem and Regret of Rationality'.)

One-boxers typically don't think about the ways a predictor could manage to be highly accurate that don't imply one-boxing as the optimal strategy.

For example, there could be selection effects: perhaps the predictor only offers the dilemma to those who have publicly committed to two-boxing, since one-boxers have a motive to defect.

It could be iterated (as indeed is implied in the thought experiment), in which case the predictor could simply always predict two-boxing. The choosers could deduce this strategy as one possible way a predictor could have higher-than-expected accuracy, or perhaps just observe prior iterations.

The predictor could employ a transparent prediction algorithm in order to make its prediction itself predictable. As above, the prediction would need to always be 'two-boxing'. But in this case it doesn't even need to be iterated.

Even in the case where the predictor is not taking any of these easy routes, the details of how it actually makes its predictions really do matter (unless it's literally infallible).
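
To make "the details matter" concrete, here is a sketch under the standard payoffs (my assumption; the comment doesn't state them), where p is the probability the predictor correctly anticipates your actual choice. One-boxing only wins in expectation above a threshold, and several of the mechanisms above keep a one-boxer's p at zero.

```python
# Standard Newcomb payoffs assumed: $1,000,000 in the opaque box if
# one-boxing was predicted, $1,000 always in the visible box.
def ev_one_box(p):
    return p * 1_000_000

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 1_000

for p in (0.5, 0.6, 0.9, 1.0):
    better = "one-boxing" if ev_one_box(p) > ev_two_box(p) else "two-boxing"
    print(f"p = {p}: {better} wins in expectation")

# Break-even at p = 0.5005. A predictor that always predicts "two boxes"
# gives a one-boxer p = 0, i.e. an empty opaque box every time.
```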


What’s the fuss about? These two - Pinker and Gardner - are elevating each other with faint criticism. On a practical level, their points of disagreement are trifling.

One of them is a Yankee fan and the other likes the Red Sox. No need for jihad here.


> If I get an email from a Nigerian prince asking for money, I’m not going think “I shall do a deep dive and try to rationally calculate the expected value of sending money to this person using my very own fifteen-parameter Guesstimate model”. I’m going to think [...] we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

So, you tell him to send you his sister's IBAN?


I think a good translation of 'something something white males' would be something like an analogy to how 'meritocracy' is used as a version of the 'just-world hypothesis' to justify current outcomes and say they're both natural and correct.

Something like 'while rationality as a concept includes a lot of good ideas about math and thinking, rationality as a social movement is mostly an aesthetic shibboleth used by some rich white males to recognize other like-minded rich white males and form close and exclusive networks with them, networks which unjustly further race-and-gender inequities, and which justify themselves via the aesthetic of 'we deserve this because we're rational meaning we're doing a better job than other people', despite little actual evidence for this.'


> One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

Amazing.


If it turns out there are precisely 7000 contradictions in the bible I'm kissing cheeseburgers goodbye.


"Fine, but I need fifteen people to bond super-quickly in the midst of very high stress while also maintaining good mental health, also five of them are dating each other and yes I know that’s an odd number it’s a long story, and one of them is secretly a traitor which is universal knowledge but not common knowledge, can you give me a tradition to help with this?"

Which science fiction story is this?


Bailey Rationalist: Bayes! Probability! Formal Logic! Game Theory! Expected Utility! Prediction markets! Avoid the biases! Do this and you will be right!

Anti-rationalist: I think there is more to reasoning than these things. For example, within the classic list of "cognitive biases" there are things which you probably should do (e.g., Loss Aversion makes sense if you might lose all your money). And there are more intuitive and narrative ways of reasoning that help people to make sense of the world in ways that are useful.

Motte Rationalist: Oh, of course. That is not what we mean. Rationality is just systematic winning. So if it is helpful, then it is rational. Even if it's something like intuition.

Anti-rationalist: Oh. Ok then

Bailey Rationalist: Bayes! Probability! Formal Logic! Game Theory! Expected Utility! Prediction markets! Avoid the biases! Do this and you will be right!

Anti-rationalist: You seem to be saying the same thing, and you seem to think that [Bayes, probability, formal logic, etc.] are, like... the end-all be-all. As in, you seem to think these things are the best path to systematic winning. I disagree. The set of epistemological approaches you have chosen is NOT synonymous with winning.

Pinker: You are using rationality to argue against rationality! Checkmate atheist!

Anti-rationalist: Not at all. I am saying your set of prescriptive theories [Bayes, logic, game theory, etc.] is not synonymous with "instrumental rationality". And you keep acknowledging that this is the case, but you seem to rely way too much on them. Have any of you read Gary Klein? Gigerenzer? Or how about Nassim Taleb? It seems to be the case that most successful people draw more from those thinkers than from Kahneman/Tversky or Yudkowsky (and it pains me to put Yudkowsky next to those names). It seems like the set of prescriptive theories you are applying every day is great for specific things (like arguing with people online), but not for the real world. As Agnes Callard points out in Aspiration, Expected Utility is great for medium decisions like buying a car, but terrible for small problems (which cereal to get) and large decisions (i.e., decisions that result in you changing your utility function).

Rationalist: No. These theories are just how science works. This is the Correct Epistemology. And Bayes gives you the objectively correct answer. How can you argue against Bayes? Bayes is always right! The math doesn't lie! Shut up and multiply!

Anti-rationalist: Well, that is obviously false. Bayes does not give you the objectively correct answer, despite what Yudkowsky claims. You seem to be optimizing for the activities that people like you (white males) do: science and tech (and smuggling in your intuitions and calling them objective and less wrong).

Rationalist: AH! I get it now. You're just an SJW trying to support "alternative" epistemologies that put personal experience above logic.

Anti-rationalist: Not an SJW. What I am trying to point out is that you are pushing for a certain set of epistemological theories that are great for people like you, but are less useful for the majority of people who are dealing with normal human issues like how to write a resume (as opposed to debating the expected utility of funding a biotech start-up.)

Rationalist: But Game Theory would be useful for writing a resume!

Anti-rationalist: Not for the people who are innumerate or simply don't know Game Theory.

Rationalist: Then we shall teach the world Game Theory and lead the world into a glorious rational future! And everyone will win all the time because they will know Game Theory!
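
The toy simulation promised above - a minimal sketch with invented numbers (the bet size and payoffs are assumptions for illustration) of why refusing a positive-expected-value bet can be the winning move when you might lose everything:

```python
import random

random.seed(0)

# Each flip: win 60% of the stake or lose 50% of it, at 50/50 odds.
# Expected value per flip is +5% of the stake, yet staking everything
# shrinks wealth over time, because wealth compounds multiplicatively
# and sqrt(1.6 * 0.5) ~ 0.894 < 1.
def final_wealth(fraction_staked, flips=1000, wealth=1.0):
    for _ in range(flips):
        stake = wealth * fraction_staked
        wealth += stake * (0.6 if random.random() < 0.5 else -0.5)
    return wealth

print(final_wealth(1.0))   # stake everything each flip: effectively ruined
print(final_wealth(0.1))   # stake 10% each flip: grows over time
```

So an agent that "irrationally" refuses the all-in version of a positive-EV bet is winning, not failing.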


One way to think of it is "rationalism is the idea that thought ought to be optimized either directly by explicit reason or by processes endorsed by explicit reason". Depending on the problem you might use explicit reason, or a heuristic that explicit reason tells you to use, or an intuition explicit reason tells you to trust, or a heuristic recommended by an intuition explicit reason tells you to trust, etc., but ultimately it cashes out in explicit reason.


At least some of the anti-rationalists (me, kinda) see the term as meaningless in itself, and mainly used as an attack on things that aren't rational. My first exposure to the subculture was a long, impassioned argument about social justice that ended with a rousing defense of the JQ, which poisoned the well for a longgggggggg time.

E.g., you get exposed to too many spiritually teenage objectivists, and you correlate "rationalists" and "assholes".


I don't know if I can contribute anything, but here are some quick reflections:

1. 'The rationalist community' is rather distinctly different from the philosophy of rationalism. The first is broader in scope, is willing to consider heuristics and intuition as you mention, and is generally more interesting. I don't know how to best refer to these two groups in discussions, but what follows will be about rationalism as practiced by the rationalist community and not as defined by more time-worn philosophers.

2. In addition to rationality being metacognitive as Scott mentioned, rationality seems to admit the possibility of hard mistakes. It's not that this is philosophically insightful, but it contrasts sharply with how most discussions are carried out. In this sense, it's a kind of cultural norm. It's not that we can never use heuristics or intuition, but that we should at least be open to testing them occasionally, recognizing that we're filtering all our problems through brains designed more for survival than truth-seeking, recognizing the possibility of counter-intuitive outcomes and exploring those possibilities in spaces that are relatively safe from negative social outcomes, steelmanning rather than strawmanning opposing arguments, etc. In short, rationality includes an enlarged Overton Window, subject to normal constraints of time and energy.

3. A Game Theory examination of a game of Chicken holds that one strategy is to take your steering wheel and throw it out the window, in a way that your opponent can recognize that you're no longer in control of your vehicle. By eliminating your capacity to submit, you increase the chances that the other person will do so. Of course, this is brinkmanship and you also increase the likelihood of catastrophe (a toy payoff matrix at the end of this comment makes the commitment move concrete). I feel like there is a sizeable number of people who employ this strategy on the meta level. This is "Argument as War", where to even admit to some truth that favors an outgroup is tantamount to treason to an ingroup. The Rationalist Community seems a little bit less likely to engage in this kind of "argument as war" strategy, again resulting in a larger Overton window, subject to normal constraints. I tend to think of avoiding this kind of intellectual brinkmanship as 'playing the long game', since you potentially end up with a better mental model. But I suppose there's an argument to be made in favor of publicly and politically favoring your ingroup and only indulging in mistake theory in private.

Phrased more bluntly, perhaps the rationalist community places a lower weight on the utility of self-deception and ego preservation.
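
The promised toy payoff matrix - a minimal sketch with made-up numbers (the payoffs are assumptions, chosen only to have the usual Chicken structure):

```python
# Chicken with invented payoffs; each entry is (my payoff, opponent's payoff).
payoffs = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-100, -100),  # the crash
}

def my_best_response(opp_move, my_options):
    # My highest-payoff move, holding the opponent's move fixed.
    return max(my_options, key=lambda me: payoffs[(me, opp_move)][0])

def opp_best_response(my_move):
    # The opponent's highest-payoff move, holding my move fixed.
    return max(["swerve", "straight"], key=lambda opp: payoffs[(my_move, opp)][1])

# Before commitment, each pure strategy is a best response to its opposite:
print(my_best_response("straight", ["swerve", "straight"]))  # -> swerve
print(my_best_response("swerve", ["swerve", "straight"]))    # -> straight

# Throwing out the wheel deletes "swerve" from my option set, so my only
# remaining move is "straight" -- and the opponent's best response is to swerve.
print(opp_best_response("straight"))  # -> swerve
```

The same structure holds at the meta level: visibly destroying your own ability to concede shifts the burden of avoiding the crash onto the other side, at the price of guaranteeing disaster if they've done the same.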


This is a good exploration of what rationality is, along with some pretty good ideas about what skeptics of rationality often mistakenly think it is.

And Pinker's tweet is a pretty good jumping-off point for it, with his humorous quip about those skeptics.

It's kind of unfortunate that this is the opportunity he took to make that quip, though, because it doesn't really apply to Gardner's critique. What Gardner wrote is quite positive about rationality. He just thinks that the most world-improving thing to teach people right now—the most "rational" thing to teach them, perhaps—is something other than how to be more rational. This isn't an anti-rationality position.


When I see the argument from cultural evolution, I'm hugely bothered at the circularity of it. This argument is usually deployed by people arguing for the preservation of some cultural practice, by saying "cultural evolution put it there".

But cultural evolution is ongoing; it's not a done process! Arguing for the preservation of some practice, for its modification, for its removal, or for its replacement, is equally 100% cultural evolution in action. Just because you've read some Joseph Henrich doesn't mean you get to stick your head out and pretend that your arguments for or against whatever cultural practice are somehow *above* cultural evolution, rather than plain old part of it!

Imagine an actual gene developing a little loudspeaker and trying to plead with its environment, "Hey, millions of years of evolution put me here, don't mess with me!". Wouldn't that be completely absurd? In an evolutionary context, you have to *keep proving* your worth, or you're out, would be the environment's answer.


One problem here is that rationalists and these critics are talking about two completely different meanings of rationality.

You are talking about rationality as a method of generating knowledge and making decisions, and your self-identification as a rationalist means that you spend time and effort improving your knowledge-generation and decision-making frameworks.

Critics of "rationality" are talking about the reframing of moral arguments as "rational", usually as a way to maintain the status quo.

Steven Pinker is famous for this. He's the guy who said the words "progressives hate progress" and proceeded to demonstrate using graphs and charts that progressives are idiots and morons for trying to make the world better, because as you can see from slides 32 and 47, the world is *already* better.

A more recent example would be Tim Pool's statement "I despise appeals to emotion". Well that seems perfectly innocent and rational, right? The thing is, he was referring to President Zelenskyy's statement "We desire to see our children alive." Zelenskyy is making a moral argument - that Russia shouldn't kill Ukrainian children (note this was after at least one school had been bombed), and that the West should help prevent that outcome. Pool is reframing this argument in terms of a well-known irrational fallacy (appeal to emotion).

Now, I think everyone would agree that we should approach the problem of the Russian invasion of Ukraine rationally (what would it even mean to say we should approach the problem irrationally?). But at the pragmatic level, Zelenskyy is making a moral argument because moral arguments are more likely to motivate people to act than logical arguments. Rationally, prevention of suffering is good. Rationally, convincing people to help prevent suffering is good. But "rationality" is supposed to prevent people from utilizing rhetoric in order to motivate people to behave rationally?

Note that the problem here isn't rationality itself, but the invocation of the "rationality" frame to take rhetorical power away from an important call to action. If you're going to die, it's certainly rational to attempt to elicit empathy from someone who could save you. I suppose Tim Pool would have been happier if Zelenskyy had provided a 3000 word essay about why he desires to see his children alive, with citations from the field of evolutionary psychology?

To go back to Pinker - it's rational to be upset about, e.g., global poverty, and want to do something to stop it. It's also rational to be happy about the fact that global poverty is declining overall. The debate between progressives and Pinker, however, is that progressives want to *highlight* the bad in order to motivate people to help solve the problem, whereas Pinker wants to *highlight* the good... for what purpose? What rational end is Pinker pursuing? It often seems he's merely being pedantic. He has no policy proposals on this issue - it just bothers him that he feels progressives aren't telling the whole story.

I should stress that I think rationality is still important here - it's true that if you want to solve e.g. world hunger, you need to understand it, and it's true that you should adopt a rational approach to devising and implementing solutions. But it's also true that part of that approach will be competing for attention in a difficult media environment, and hearing Pinker talk about how global poverty is already declining undermines activists' ability to do that.

Another problem here is that when people like Pinker or Pool explicitly invoke the frames of rationality or argumentative technique, they're not actually being any more rational than their opponents. If you have a good argument why people shouldn't donate to Oxfam, make it. If you have a good argument why the West shouldn't help Ukraine, make it. "My opponents are simply irrational" isn't just an argumentative technique - it's an invalid one. Invoking the frame of rational argumentation often asks us to commit the ad hominem fallacy (my opponents aren't rational, therefore they are wrong) and the fallacy fallacy (my opponents used a fallacy, therefore they are wrong).

Note that alleged "anti-rationalists" never say "that argument was rational, therefore it must be wrong". They're not opposed to rationality. They're opposed to the debate tactic of strategically claiming or implying that your opponent is irrational.

Also note that in both of these cases, the "rational" framework is invoked in favor of the status quo: in Pinker's case he thinks we're already doing enough to end poverty (I don't agree), in Pool's he thinks the West should continue not being at war with Russia (I do agree). I don't think this is a coincidence: the frame of "rationality", as a rhetorical device, works a lot better when it's invoked in favor of something that feels familiar or is within the Overton window. And movements for change - activism for civil rights, women's rights, poverty eradication, environmentalism - often rely on advocacy that frames these issues as moral, or uses rhetorical techniques to elicit empathy, which makes invoking a rational framework an effective countermeasure.

I just want to be completely clear again that this "invoking the frame of rationality to win arguments" thing is NOT what I think that Scott, Eliezer, or self-identifying rationalists in general are doing when they do rationalism. Rather, it is what people who have limited or no experience with rationalists experience when someone like Steven Pinker advocates for "rationality", or Tim Pool derides argumentative fallacies, especially in the context of a political debate.

Now to address the controversial part:

Nowhere is this more apparent than in the "something something white men" example of an anti-rationalist argument. If white men are the people who benefit most from the status quo, and appeal to "rationality" is a tried-and-true way to defend the status quo, you can start to see why someone who wants to change the status quo would come to view white men claiming to be "rational" with suspicion.

Consider the typical, layperson's understanding of "rational" as "involving reason" or "prioritizing thought over emotion or impulse". If a man tells a woman that he's being "rational" (and by implication that she is not) this isn't a statement with no context - the context, of course, is the incredibly pervasive stereotype that women are emotional, and either less inclined to use reason or less capable of using reason. If a white person tells a black person that the white person is being "rational" this isn't a statement with no context - the context, of course, is the stereotype that black people are less intelligent, more impulsive, less civilized, and less capable of producing great works of artistic or intellectual achievement. When a white man invokes the frame of rationality in the context of a political debate, in other words, they're invoking stereotypes which were created to reinforce white male power at the expense of people who were not white men.

For the third time, just so there's no mistake, I am not saying that this is what Scott does or what rationalists do in general. But in the wild, it is incredibly common to find white men either implicitly or explicitly invoking the frame of rationality to shut down arguments from women and minorities or arguments that are framed in terms of justice or moral imperatives. Importantly, the people doing this are almost never actually more rational than their opponents - because it's not about promoting reason, but about using the trappings of rationality to undermine calls for change.

And again, Pinker is sort of the poster child for this, which is why I think that any association between Pinker and rationalists will ultimately be terrible for rationalists' public image (but of course it's irrational to worry about what other people think, right?).

One final point - there has been a subjectivist movement in modern epistemology that is associated with identity politics. The idea is that everyone is biased by their identity, so rather than adopting a fake "view from nowhere" it's better to start by acknowledging your relevant biases and the perspective that your identity gives you. Hence "as a white man I find that..." or "as a black woman I find that..." From this perspective, claiming to be "rational" is nothing more than a denial of how one's identity and experiences inform their beliefs. It's self-delusion. A proponent of this view might claim "you're no more rational than I am, but at least I'm willing to openly acknowledge the areas where my demographic circumstances might affect my judgment or make me prone to motivated reasoning."

I think there are certainly people who are a bit too zealous with this concept, but when I read something like Charles Murray attempting to scientifically prove that black culture is inferior to white European culture, I am at least convinced that we need to be wary of bad actors who attempt to smuggle racism into the popular discourse under cover of science or rationality, and having people own up to their biases seems like it could help.


When we are talking about 'rationality' in this context it's largely a sociological phenomenon borne of people who want to see how far computational tools and frameworks can go towards solving epistemological and social problems.

When I explain the Rational-sphere to intelligent philosophical outsiders, if I called it the study of truth-seeking, they'd say, "yeah, bro... I teach epistemology at a big university, so how is this different?" I think the defining characteristic of the rational-sphere is a type of methodological utilitarianism. Not that everyone's utilitarian; I'm not. But there is a general attempt to see if any given problem can be clarified or solved with expected utility, cost-benefit analysis, iterated game theory, Bayesian statistics, or causal inference. The background sociology is a communal heuristic or bet: these tools are underrated. If more people knew and used these tools, that would be better for those people.

Yes, there are other prominent elements: the biases literature, heuristics and intuition, legibility, philosophy of social science, and ML...


Pinker made this "argument" before, more as in "arguing against arguing". It might come from his wife (a professional philosopher): check this 15-minute animated video by the two of them, https://www.youtube.com/watch?v=uk7gKixqVNU - at 1:50 something to this effect comes up. He just repeated the idea in the tweet, without delving into "the deep math of intuition".

May I ask, as I am still a newbie: why do the new rationalists seem to care so little about Pinker? (In my perfect world Pinker would own the NYT and Scott would be the top writer.) Rationalists should see him as a member of the tribe, instead of: meh. I really don't get it. No one seemed to review his rationality book. Analogously, with another author: Scott Aaronson described "Viral" as one of the most important books of this century, and 50% of the comments said: meh, Matt Ridley is involved, must be BS, as he writes BS about climate. Yet when Scott started on climate and got into the data, he came up with pretty much the same conclusion: more people die of cold than heat. And: DON'T PANIC - have kids. I think these are important truths for our time. What the hell is wrong with Lomborg? I consider them all to be grey-tribe. Can't we be a bit more catholic about rationalism? Or is it popular-people's-front-of-Palestine all the way down?


I really like this. One of the best of the recent posts. It describes the nature of the 'rationalist' project really clearly, and answers the question about its usefulness better than previous posts on the topic.

Also, I think it maps really well onto the "Seeing Like a State" discourse about metis and techne. Intuition (a trained neural net - this conflation seems basically correct, it really is precisely that) seems to be about the same thing as metis (except that metis can exist at a group level too?).

Argument "against" intuition - that it doesn't scale, can't be made to meaningfully advance - is roughly the same as what I was thinking about when I worried rationalist community discourse is getting overly enthusiastic about these ideas (especially strong forms of argument from Chesterton's Fence).


You know, I was going to write here about how you're entirely off-base, about how this is not actually what the opponents of rationality think... and then I realized I should actually read this Gardner piece to see if what I was writing actually applied to him. And, reading it, I gotta say... I have no fricking clue what Gardner is trying to say. Your reading makes more sense of it than I could have. So, uh, huh.


"But Gardner claims to be Jewish, and I doubt he follows all 613 commandments"- quite difficult, given that around half of them are literally impossible now.


March and Olsen identify two different "logics" of decision making -- the logic of consequences and the logic of appropriateness.

The logic of consequences is outcome-based, cost/benefit decision making. Rationality is an attempt to refine this sort of decision making. If you're talking about heuristics and intuition versus explicit analysis and so on, it's all taking place within this framework. So, perhaps there are some arguments to be had there about the best way to get the best outcomes, but that's not the real debate.

The real debate is with an entirely separate way of doing things -- the logic of appropriateness. Under the logic of appropriateness, you don't exhibit consequence-driven behavior at all. Instead, you're trying to do what seems right or appropriate to a given scenario by applying rules and principles. The goal here *isn't* to get an outcome. The goal here is to follow the rules and defend the associated sense of identity.

Most people are probably familiar with the relevant distinction in the field of ethics between consequentialists (logic of consequences obviously) and deontologists (logic of appropriateness). One can attempt to refine a given consequentialist ethical system and compare attempts in terms of the consequences they deliver. But you'll never get anywhere with a deontologist by trying to argue with them about consequences. That just fundamentally misses the point.

When you look at religion, tradition, respect, and relationships, these are very often the domain of the logic of appropriateness and not of consequences. One can imagine a consequentialist logic, but this generally misses the point. For example, a consequentialist logic of religion is something like "follow these rules because if you don't, you'll go to hell." You get almost no mileage out of trying to understand religion in this way, though. People who use this framework are constantly frustrated with the inconsistency of religious practice and its apparent hypocrisy.

But imagine instead that religious people are trying to live their lives according to a particular kind of identity and sense of self, where one takes some aspects of the relevant holy word very seriously and ignores others. This would be an insane thing to do as a religious consequentialist - to tempt fate by breaking some of the rules. But if one is simply trying to "be a Christian" then it's very different, and the things it means to be a Christian can even be self-contradictory.

So too with tradition. One might follow tradition out of a belief in the wisdom of the ancients - they knew better than us. But this is quite unlikely to be true in general and suggests, at most, a kind of weak deference. That's not how traditionalists actually think. They're trying to follow tradition as such, because that's what it means to "be an X." Asking if this is better or worse in terms of some set of consequences misses the point.


I don’t think this counts as anti-rationalist, but in my entirely subjective opinion, Pinker’s writing style always seems arid to me. It feels like the same 3x3 Rubik’s cube is being twisted into a pretty predictable array of patterns. If I go in thinking this guy is much more clever than I am, and that I’m about to learn something new, why can’t he occasionally say something I didn’t anticipate 50 pages ago?

I suppose it comes down to the fact that a brilliant thinker isn’t necessarily a great writer.


> One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

That's a bit more conflicty than I usually assume the explanation is. I read "something something white males" as "the people in charge of economic/political/intellectual power tend to belong to a particular subset of the population, and this makes their motivations imperfectly align with the rest of the population". I like my phrasing better because it makes it clear that this argument is more general than "white men ruin everything"; it applies in any situation where the controlling elites of a particular group are sufficiently homogenous/insular/different-from-the-general-population. It also makes it clear that this isn't necessarily a problem of malicious elites.

It _also_ implies a slightly different solution; elites should consciously be aware of their motivational misalignment and correct for it if they want their population to be happy. Again, this is a similar but more general point than your (presumably tongue-in-cheek) suggestion about favoring black women.


I suggest addressing three issues from the post and comments:

- lack of consensus over whether 'rationality' seeks truth or success;

- the name 'rationalist community' is confusing because of philosophical rationalism (e.g. Descartes's methods);

- calling oneself 'rational(ist)' seems like unfair, maybe prejudiced, rhetoric (and the perennial alternative name, 'aspiring rationalist', doesn't help).

From my perspective, what unifies this online community* is neither truth-seeking nor seeking to systematize winning, nor is it concern with AI x-risk, nor is it commitment to certain methods (although obviously a handful of them are consistently popular).

Instead, the unifying factor seems to be approaching problems like a student who will earn partial credit on a math problem even if the answer turns out to be incorrect. In terms from Scott's post, that's making clear what was explicitly calculated, what relies on a heuristic, what can't be justified beyond intuition, and what has been done to reduce a known cognitive bias (that last one is in other posts). And how these work together to suggest an answer.

My suggestion: picking a fairer online community name, e.g. 'Showing Our Work'.**

The name is agnostic on whether there's progress toward an answer (either true or successful). Maybe there is no progress around here. I do think the name captures the social purpose of participation for many, in a broader way than 'overcoming bias' or '(being) less wrong'. What if this is just an unproductive online community primarily for people with fond memories of math homework? Could be worse, right?

*I have no thoughts on how fully this covers the Bay Area community and its members' AI x-risk initiatives.

**Perhaps being SOWs would have a minute independent effect on humility, like tonsuring a monk, beyond dropping 'rationalist'.


I'm not really part of this community, and this has nothing to do with the discussion at hand, but for the record, the offhand comment about Gardner being Jewish and not necessarily following 613 commandments is a good jab but poor comparative religion. Yes, medieval scholars came up with 613 commandments- though their lists didn't always agree- but anybody with a bissel of Jewish knowledge knows that most of the commandments are negative, many are conditional on being in the land of Israel, or having a Temple, or are only done by the priest, or the king, or in wartime, or situational (if X happens then do Y.) In the diaspora, excluding all the situational and negative commandments, it's only about 44 positive laws that apply to an observant Jewish person. There's a larger number of negative laws but some of those we do anyway, like don't eat lizards or bats, don't kidnap or commit manslaughter or lie in court proceedings.

But who knows if Gardner, and one would assume not Pinker, does those anyway. OK, irrelevant correction over with. Carry on.


Even as a young boy I was always bothered by the way Vulcans were written on Star Trek. My head canon was that, as is explicitly stated in the show, Vulcans have more intense emotions than humans, and so their philosophy/meditation/culture was actually focused on suppressing emotions, and not on being logical. This seemed a better fit to me because it always seemed 'obvious' to me that being logical (rational) was indistinguishable from following the best course of action in a given situation. If empathizing with a human crewmate during a crisis was the best way to improve their performance and increase the chances that you both live, then that was the logical thing to do.


Thanks for the post, Scott. I enjoyed reading it and I had a big laugh about “rationality is important, but relationships are also important, so there”.

In your final paragraphs, you establish a distinction between theory and practice. These distinctions are useful simplified models, but it's important to point out that under the hood, theory and practice, fast and slow, reasoning and intuition, etc. are all manifestations of the same underlying black box, namely, cognition. This is self-explanatory and it is redundant of me to point this out, but a lot of people view intuition as some kind of magic. Rationalists and anti-rationalists talk past each other, but under the hood, it all boils down to cognition, regardless of what labels we use.

It's important that we define our ontological framework before we can talk more about cognition. Let's suppose that we subscribe to physicalism. Any understanding of rationality entails an understanding of cognition, but our understanding of cognition is highly limited at this point in time. However, within physicalism, we do know that our brains create models of the external world. Some models are more accurate than others, in that they have a better correspondence to the world, and we can measure the accuracy of different models via experiment. Rationality is any process that helps us improve the accuracy of our models. Rationality is effective cognition.


I am late to this thread and should probably just wait for the inevitable "Highlights From The Comments On What We Argue About When We Argue About Rationality" thread instead.

I think if you're going to define rationality as "the correct way of thinking about things in all circumstances" then it's un-criticisable, but that's not very interesting.

It's more interesting to consider the possible failure modes of how bad things can happen when you set out to be rational (but do it with a flawed human brain) and the circumstances in which things might have gone better if you'd tried a less systematic mode of thinking.

When you are "being rational", you are generally setting yourself some kind of target function, and then optimising your actions to maximise that target function. I think the two big failure modes are (a) you fail to make predictions correctly, or (b) you set your target function too narrowly and wind up sacrificing other good things.

I imagine we've all met (or been) the type of person who says "All I care about is [X] and so I'm going to disregard everything that doesn't help me achieve that goal" and winds up disregarding good manners and/or personal hygiene. That kind of thing seems like the most common failure mode for the "look at how rational I am" types.

I am reminded of Scott's essay on The Tails Coming Apart; trying to rationally maximise some set of values usually means sacrificing any value you're not maximising for. Hanging out in the middle space where you're not trying too hard to maximise for anything often leads to better results.
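
To make failure mode (b) concrete, a minimal sketch with invented scores (the routines and numbers below are assumptions, not anything from the post):

```python
# Candidate weekly routines scored on the one thing a narrow maximiser
# cares about (output) and on the values it ignores (hygiene, manners).
routines = [
    # (name,       output, hygiene, manners)
    ("balanced",        7,       8,       8),
    ("workaholic",      9,       3,       4),
    ("obsessive",      10,       1,       1),
]

# Failure mode (b): optimise the narrow target and the ignored values crater.
print(max(routines, key=lambda r: r[1]))  # -> ('obsessive', 10, 1, 1)

# A blunter rule -- require every value to clear a bar, then take the best
# of what's left -- lands in the sensible middle space.
acceptable = [r for r in routines if min(r[2:]) >= 5]
print(max(acceptable, key=lambda r: r[1]))  # -> ('balanced', 7, 8, 8)
```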


1) A heuristic is an algorithm - it's rational. Intuition is heuristic. This whole thing is circular. 2) How can you even begin to discuss rationality without an understanding of NP-completeness?
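
To cash out the NP-completeness point, a minimal sketch using the travelling salesman problem as the standard example (my choice of illustration; the commenter named no specific problem): exact search blows up factorially, while a cheap heuristic scales and is usually good enough, which is exactly why reaching for heuristics can itself be the rational move.

```python
import itertools, math, random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(9)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: try all (n-1)! tours. Fine for 9 cities; hopeless for 30.
exact = min(itertools.permutations(range(1, len(cities))),
            key=lambda rest: tour_length((0,) + rest))

# Heuristic: nearest neighbour, O(n^2), scales to thousands of cities.
unvisited, tour = set(range(1, len(cities))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
    tour.append(nxt)
    unvisited.remove(nxt)

print(tour_length((0,) + exact), tour_length(tour))
# The heuristic tour is typically near-optimal at a tiny fraction of the cost.
```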


The point isn't to be 'rational', it's to be right.


Pinker's book fails to convince "anti-rationalists" that rationality is useful, important and necessary because it argues for rationality in a way that someone who already endorses rationality would want rationality to be presented to him. This is akin to trying to convert christians by pointing out logical inconsistencies in the Bible, or trying to convert atheists by invoking [arguments that Christians already take for granted]. In essence, Pinker's book is attempting to convert nonbelievers into believers by invoking hidden assumptions that only people who already believe share. And make no mistake, Pinker's veneration of rationality is at the very least pseudo-religious, if not fully religious.

It's also clear he doesn't realize this, both in the way the book is written and in the tweet you shared - he talks and acts as if everyone shares these assumptions about rationality, such as, for example, that the utility of large-scale formalized rationality (science) necessarily means everyone should try optimizing for rationality in their daily lives. It's not that anti-rationalists don't think rationality is useful or good, it's just that they don't believe "rationality is the highest virtue" necessarily follows from "science is a vehicle for good in our society", which itself is not a given either.

This is what Gardner meant (I think, I could be wrong) about respect being more important than rationality: he's talking about rationality as a virtue, not rationality as the practice of "improving slow thought", as Chris Phoenix put it in one of the comments here.


Seems to me that a fair amount of what's distinctive about 'rationalists' is that they try to have true beliefs even about stuff where there isn't much consequence to being wrong, or where being accurate is rude/taboo. Obviously this is a long way from the Yudkowsky definition in terms of winning. More like applied autism. (I don't mean that negatively, I'm autistic.)


Scott, I think you've missed the essence of the disagreement (essentially because you're *not* an idiot). What Gardner is complaining about is PERFORMANCE and SIGNALING. He cares less about solving problems than about political issues like assembling coalitions, indicating loyalty, and assigning blame.

Rationality does not do any of these; hell, most of the time it tells you that these are dumb concepts. If your priority in life is this sort of political drama, anything (i.e., Rationality) that indicates the drama is dumb, or that the teams that have lined up make no sense, or that the founding argument is ludicrous, is THE ENEMY.

We saw this play out (slowly) with Covid then at lightning speed with Ukraine. People who claimed to be rational and skeptical immediately dropped all filters when it came to believing whatever meshed with their convictions. We saw a constant stream on social media of stories that sounded good then were debunked a day later; but none of that had any effect, people still immediately latched onto any stories and claims that supported their tribe. And more than that, immediately attacked, in the most vicious ways possible, anyone saying anything (not opinions, simple facts) that went against the tribal story.

That's what Gardner is valorizing and Pinker is condemning: the prioritization of Tribe over Truth. Nothing more, nothing less.


Fact check. The story about Srinivasa Ramanujan was not a story about a previously unsolved math problem. In fact it was a simple puzzle from the Strand magazine. P. C. Mahalanobis, who recounted the story, tells of being able to find the answer himself in a few minutes using trial and error. Mahalanobis posed the problem to Ramanujan, who was cooking a meal at the time. After a few moments of thought, Ramanujan dictated a continued fraction, the terms of which included Mahalanobis's answer. The point of the story was that Ramanujan was such an instinctive mathematician he could immediately see the generalised solution.
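
For the curious: the puzzle, as usually recounted, asks for a house number x on a street numbered 1 to n (with 50 < n < 500) such that the house numbers below x sum to the same total as those above it. A few lines of brute force - the modern equivalent of solving it by trial and error - find the answer; what Ramanujan produced instantly was the rule generating all solutions:

```python
def strand_solutions(n_max):
    # Find (x, n) with 1+2+...+(x-1) == (x+1)+...+n, for streets up to n_max.
    sols = []
    for n in range(2, n_max + 1):
        total = n * (n + 1) // 2
        for x in range(1, n + 1):
            left = x * (x - 1) // 2        # sum of the numbers below x
            right = total - left - x       # sum of the numbers above x
            if left == right:
                sols.append((x, n))
    return sols

print(strand_solutions(500))
# -> [(6, 8), (35, 49), (204, 288)]; only (204, 288) fits 50 < n < 500
```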

Anyway, regarding rationality, the point about these sorts of discussions that always troubles me is that even clever people like Pinker seem to apply some sort of moral realism which suggests that the right thing to do can be determined purely by rational thought. As if it were obvious that some sort of utilitarian calculus can always tell us the right thing to do. In fact, as Hume pointed out, "Reason is, and ought only to be, the slave of the passions". What he means is that the value judgement (which for Hume is grounded in passions) is always prior to the reasoned calculation, and cannot itself be derived using reason. A lot of people seem to misunderstand this quote as an attack on rationality. In fact Hume was the most rational of men. His point was simply that the decision to act is necessarily preceded by a value judgement about desired outcomes. P(A|B) = [P(A)*P(B|A)]/P(B) comes afterward, as we attempt to forecast the outcomes of possible interventions.
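
Hume's point can even be put in Bayesian terms. A minimal sketch with invented numbers (the priors, likelihoods, and utilities below are assumptions for illustration): two agents with identical beliefs, updating by the same formula, still act differently because their utilities - the formalized "passions" - differ.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes: P(H|E) = P(E|H) * P(H) / P(E), with P(E) via total probability.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Same evidence, same belief about whether an intervention will succeed:
p = posterior(prior=0.3, p_e_given_h=0.9, p_e_given_not_h=0.2)  # ~0.66

# Different passions: (utility of success, utility of failure) if we act.
for name, (u_win, u_lose) in [("cautious", (1, -10)), ("bold", (10, -1))]:
    eu_act = p * u_win + (1 - p) * u_lose   # expected utility of acting
    print(name, "acts" if eu_act > 0 else "waits", round(eu_act, 2))
# cautious waits (EU ~ -2.76); bold acts (EU ~ 6.24)
```

The calculation of P(A|B) is shared; the decision is not. Reason forecasts, the passions rank.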


What you're talking about here is what John Vervaeke calls "relevance realization". https://www.meaningcrisis.co/ep-28-awakening-from-the-meaning-crisis-convergence-to-relevance-realization/

Relevance realization is how one determines when to bring the machinery of rationality online and what to use it on. Just saying heuristics or intuition is glossing over a very deep subject he spends 25 hours talking about.


Suppose we accept Yudkowsky's systematized winning definition of rationality. Even if I am pro-rationality, the teachings of the rationalists may be counterproductive at my current stage of the game I am playing. Of course, the rationalists may argue that my behavior is still rational in that case, but if they can't help me, what do I care what they have to say?

There are some interesting subcases, such as:

- Pretending to dislike rationality confers some benefit, like people thinking you are cooler

- I've already exhausted all the rationalist learnings on say, ballet, and now they are just a distraction

- I'm good enough at rationality and now I can better achieve my goals by doing something like joining the local church

- The community around rationality is not very useful to you. I.e., they all seem to want to send their money to effective charities while your VC friends want to take you out to dinner and fund your startup.

- Your utility function is holding you back and many other disciplines are better for the modulation of utility functions. Even scrolling someone else's TikTok is better for this than the study of rationality.

- You've concluded the popular areas of rationality aren't the lowest hanging fruit for improving your life, and now your rationalist interlocutors are just annoying. So you become anti-rationalist to lessen the spam you have to deal with.

- You find it hard to separate rationality from the rationalist community and their norms are simply annoying to you.

- Maybe the rationalist community doesn't seem that good at winning to you. Maybe you think they'd be more effective if they just banded together to shill a new currency and all got rich.


Rationality is adapting. Evolution is the survival of the adaptable, after all.


Someone will probably already have said this in the comment section somewhere but: rationality is a formalization of reasoning, in the same way as mathematics is a formalization of quantification.

Scott is considered to be rationalist-adjacent at the very least, and, how to put it, most of his content is more or less proof-checking, just for arguments rather than theorems. Maybe that's an even better way to put it: rationality is the art of validating reasoning proofs.

This is irrelevant to Fermi estimates while being mugged - but oh so relevant to any lasting policy decisions.

And anti-rationalists would then simply happen to be the sophists and the demagogues - those whose arguments can't withstand sufficient scrutiny when taken apart bit by bit and checked for validity, or at least quantified in terms of their likelihood of being true. That is, they believe that the purpose of any given argument isn't really to find out courses of action or truths about the world, but simply to convince others to go in a specific direction. It might be that they feel all validation is motivated, and that if a 'rationalist' finds an issue with their argument, all the rationalist has succeeded at is making a better argument; they might intuitively hold some sort of argument-external dualism about truth.


I just think of it as the pursuit of open-source thinking.


Going back and reading Gardner's actual essay, I don't see him arguing against rationality at all. Instead, he's saying that pure rationality is not enough to make the world a better place on its own, and listing some things he thinks are also important for that goal.


Gardner is a "well-known wrong person"?

Gardner's theory of multiple intelligences is intuitively true as well as mechanically true-seeming. The lack of empirical evidence should be interpreted literally as "to be continued", not as proof that it's wrong. His theory is the opposite of g factor theory, so invalidating the former would seem to validate the latter, and that isn't a slam dunk.

I want to elaborate on this later, but g factor and IQ are self-evident definitions: Of course, some factor exists that predicts a cluster of things that we consider to be good. We then retroactively assign the label "intelligence" to those predictive factors.


Great distinction. Some of what you've written about here reminds me of John Vervaeke's concept of Relevance Realization, which I think you'd find interesting! He talks about how there's a paradox encountered when trying to theorize about relevance: in order to have a theory of anything you need to first have screened off what is irrelevant, but if you try to do that with relevance itself, you've already assumed an answer to the very question you're trying to answer. (Relevance as such comes up re classifying dogs & cats, for instance.)

Fascinating paper on the topic: http://contrastiveconvergence.net/~timothylillicrap/files/articles/relevance%20realization%20as%20an%20emerging%20framework%20in%20cogsci.pdf


I disagree with the idea that someone who stupidly overthinks in a crisis probably deserves to have been shot.

There are definitely ways I can argue that point, but I think the intuitive take will do.
