
> Ted will ask you to give one of his talks.

As a counterpoint, the top TED talk by views is by waitbutwhy, a blogger whose only Amazon e-book is called "we finally figured out how to put a blog on an e-reader".

Talk: https://youtu.be/arj7oStGLkU

Blog: https://waitbutwhy.com


Once again happy not to be a utilitarian. Good review!


The repugnant conclusion always reminds me of an old joke from Lore Sjoberg about Gamers reacting to a cookbook:

"I found an awesome loophole! On page 242 it says "Add oregano to taste!" It doesn't say how much oregano, or what sort of taste! You can add as much oregano as you want! I'm going to make my friends eat infinite oregano and they'll have to do it because the recipe says so!"

Aug 23, 2022·edited Aug 23, 2022

> ...happiness 0.001 might not be that bad. People seem to avoid suicide out of stubbornness or moral objections, so “the lowest threshold at which living is still slightly better than dying” doesn’t necessarily mean the level of depression we associate with most real-world suicides. It could still be a sort of okay life. Derek Parfit describes it as “listening to Muzak and eating potatoes”.

Except now, you have to deal with the fact that many-to-most existing people lead lives *worse than death*. Mercy killings of the ill-off become morally compulsory; they may not actively choose it, some may even resist, but only because they're cowards who don't know what's good for them.

Put the zero point too low, and consequentialism demands you tile the universe with shitty lives. Put it too high, and consequentialism demands you cleanse the world of the poor. There is no zero point satisfying to our intuitions on this matter, which is a shame, because it's an *extremely critical* philosophical point for the theory - possibly the *most* critical, for reasons MacAskill makes clear.


The particular people who happen to be left after the apocalypse are considerably more important than the availability of easily exploitable coal deposits.

People in many countries around the world are struggling to achieve industrialisation today despite a relative abundance of coal (available either to mine themselves or on the global market), plus immediate access to almost all scientific and technological information ever produced including literal blueprints for industrial equipment. That these people would suddenly be able to create industry after the apocalypse with no internet and no foreign trade, even with all the coal in the world readily available to them, is a loopy idea.

Medieval England was wealthier per capita than over a dozen countries are today in real terms, all without the benefit of integrated global markets, the internet and industrialization already having been achieved somewhere else.

I of course do not expect MacAskill to have written this in his book even if he recognized it to be true.


Gentle mention: you missed Sean Carroll, via his Mindscape podcast, as a recent interviewer: https://www.preposterousuniverse.com/podcast/2022/08/15/207-william-macaskill-on-maximizing-good-in-the-present-and-future/


The repugnant conclusion seems unintuitive to me, specifically because it fails to consider the shape of the population-happiness tradeoff curve.

If you imagine this curve being concave down, then normal moral intuitions seem to apply: a large population that isn’t quite at carrying capacity is better than a much smaller, slightly happier population.

It’s really the concave-up case that is unintuitive: where your options are a small happy population or a huge miserable one. But there’s no clear reason to my mind to imagine this is the case. People’s utility of consumption seems to plateau relatively sharply, suggesting that a smaller society really wouldn’t unlock tons of happiness, and that a giga-society where people still had net-positive lives might not actually hold many more people than the current 7 billion.

I don’t want to deny that it’s unintuitive that 20 billion people at happiness 10 really do outperform 1 billion at happiness 90, but I posit that it’s mostly unintuitive because it’d so rarely be just those two options.


2 things: First, the "number of atoms" limit annoyed me when I saw it, since we can obviously get value from moving atoms around (sometimes even back to the same place!), so the possibilities of value-production are *much* higher than the constraints outlined.

Secondly, stealing my own comment from a related reddit thread on MacAskill: "The thing I took away from [his profile in the New Yorker] is that contrary to "near-termist" views, longtermism has no effective feedback mechanism for when it's gone off the rails.

As covered in the review of The Anti-Politics Machine, even neartermist interventions can go off the rails. Even simple, effective interventions like bednets are resulting in environmental pollution or being used as fishing nets! But at least we can pick up on these mistakes after a couple of years, and course correct or reprioritise.

With longtermist views, there is no feedback mechanism for unforeseen externalities, mistaken assumptions, etc. All you get at best is deontological assessments like "hmmm, they seem to be spending money on nice offices instead of doing the work", as covered in the article, or maybe "holy crap they're speeding up where we want them to slow down!" The need for epistemic humility in light of exceedingly poor feedback mechanisms calls for deprioritising longtermist concerns relative to the general feel of what is currently communicated from the community."


“suppose the current GDP growth rate is 2%/year. At that rate, the world ten thousand years from now will be only 10^86 times richer. But if you increase the growth rate to 3%, then it will be a whole 10^128 times richer! Okay, never mind, this is a stupid argument. There are only 10^67 atoms in our lightcone; even if we converted all of them into consumer goods, we couldn’t become 10^86 times richer.”

This is a common economic fallacy. Growth is not necessarily correlated with resource production. For example, if you were able to upload every living human’s mind onto a quantum computer, you could feasibly recreate reality at the highest possible fidelity a human could experience while simultaneously giving every living human their own unique planet--all while using less than the mass of the Earth.

As another example, consider the smartphone. A smartphone is several hundred times more valuable than a shovel, and yet a shovel probably has more total mass. This is because the utility of the smartphone, as well as the complicated processes needed to manufacture it, combine to create a price far higher than the simple shovel.

So yes, we could become 10^86 times richer using only 10^67 atoms. You simply have to assume that we become 10^19 times better at putting atoms into useful shapes. Frankly, the latter possibility seems far more likely than that humanity ever fully exploits even a fraction of atoms in the observable universe.
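For what it's worth, the compounding arithmetic in the quoted passage does check out; a quick sketch (purely illustrative):

```python
from math import log10

# Orders of magnitude of wealth growth after 10,000 years of compounding,
# matching the figures quoted in the review.
years = 10_000
for rate in (0.02, 0.03):
    magnitude = years * log10(1 + rate)
    print(f"{rate:.0%}/year -> ~10^{magnitude:.0f} times richer")
# prints ~10^86 at 2% and ~10^128 at 3%
```

And 10^86 richer divided by 10^67 atoms is exactly where the commenter's 10^19-fold "better at putting atoms into useful shapes" figure comes from.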


I always used to make arguments against the repugnant conclusion by saying step C (equalising happiness) was smuggling in communism, or the abolition of Art and Science, etc.

I still think it shows some weird unconscious modern axioms that the step "now equalise everything between people" is seen as uncontroversial and most proofs spend little time on it.

However, I think I'm going to follow OP's suggestion and just tell this nonsense to bugger off.


"There are only 10^67 atoms in our lightcone"

Are there really? That doesn't seem right. There are about 10^57 atoms in the sun


So 10^67 atoms is what we'd get if there were about ten billion stars of equal average size in our light cone. This seems, at least, inconsistent with the supposition that we might colonize the Virgo Supercluster (population: about a trillion stars.)
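The back-of-envelope division here is easy to check (illustrative sketch):

```python
atoms_in_sun = 1e57        # commenter's figure
atoms_in_lightcone = 1e67  # figure quoted in the review
equivalent_stars = atoms_in_lightcone / atoms_in_sun
print(f"{equivalent_stars:.0e}")  # 1e+10: about ten billion sun-sized stars
```

Since the Virgo Supercluster alone holds roughly 1e12 stars, the 10^67 figure does look low by a couple of orders of magnitude if colonizing it is on the table.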


Conditional on the child's existence, it's better for them to be healthy than neutral, but you can't condition on that if you're trying to decide whether to create them.

If our options are "sick child", "neutral child", and "do nothing", it's reasonable to say that creating the neutral child and doing nothing are morally equal for the purposes of this comparison; but if we also have the option "healthy child", then in that comparison we might treat doing nothing as equal to creating the healthy child. That might sound inconsistent, but the actual rule here is that doing nothing is equal to the best positive-or-neutral child creation option (whatever that might be), and better than any negative one.

For an example of other choices that work kind of like this - imagine you have two options: play Civilization and lose, or go to a moderately interesting museum. It's hard to say that one of these options is better than the other, so you might as well treat them as equal. But now suppose that you also have the option of playing Civ and winning. That's presumably more fun than losing, but it's still not clearly better than the museum, so now "play Civ and win" and "museum" are equal, while "play Civ and lose" is eliminated as an inferior choice.


> MacAskill introduces long-termism with the Broken Bottle hypothetical: you are hiking in the forest and you drop a bottle. It breaks into sharp glass shards. You expect a barefoot child to run down the trail and injure herself. Should you pick up the shards? What if the trail is rarely used, and it would be a whole year before the expected injury? What if it is very rarely used, and it would be a millennium?

This is a really bad hypothetical! I've done a lot of barefoot running. The sharp edges of glass erode very quickly, and glass quickly becomes pretty much harmless to barefoot runners unless it has been recently broken (less than a week in most outdoor conditions). Even if it's still sharp, it's not a very serious threat (I've cut my foot fairly early in a run and had no trouble running many more miles with no lasting harm done). When you run barefoot you watch where you step and would simply not step on the glass. And trail running is extremely advanced for barefooters - rocks and branches are far more dangerous to a barefoot runner than glass, so any child who can comfortably run on a trail has experience and very tough feet, and would not be threatened by mere glass shards. This is a scenario imagined by someone who has clearly never run even a mile unshod.


When I think of happiness 0.01, I don't think of someone on the edge of suicide. I shudder at the thought of living the sorts of lives the vast majority of people have lived historically, yet almost all of them have wanted and tried to prolong their lives. Given how evolution shaped us, it makes sense that we are wired to care about our survival and hope for things to be better, even under great duress. So a suicidal person would have a happiness level well under 0, probably for an extended period of time.

If you think of a person with 0.01 happiness as someone whose life is pretty decent by our standards, the repugnant conclusion doesn't seem so repugnant. If you take a page from the negative utilitarians' book (without subscribing fully to them), you can weight the negatives of pain higher than the positives of pleasure, and say that a neutral life needs many times more pleasure than pain, because pain is more bad than pleasure is good.

Another way to put it is that a life of 0.01 happiness is a life you must actually decide you'd want to live, in addition to your own life, if you had the choice to. If your intuition tells you that you wouldn't want to live it, then its value is not truly >0, and you must shift the scale. Then, once your intuition tells you that this is a life you'd marginally prefer to get to experience yourself, then the repugnant conclusion no longer seems repugnant.


> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

Any view that takes the average into account falls into the Aliens on Alpha Centauri problem, where if there are a quadrillion aliens living near Alpha Centauri, universal average utility is mostly determined by them, so whether it's good or bad to create new people depends mostly on how happy or miserable they are, even if we never interact with them. If those aliens are miserable, a 0.001 human life is raising the average, so we still basically get the Repugnant Conclusion; if they're living lives of bliss, then even the best human life brings down the average and we shouldn't create it.
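A toy calculation shows how thoroughly a quadrillion aliens would pin the universal average (all numbers below are made up for illustration):

```python
# Hypothetical: 10^15 miserable aliens vs. 8 billion happy humans.
aliens, alien_utility = 10**15, -5.0
humans, human_utility = 8 * 10**9, 50.0
avg = (aliens * alien_utility + humans * human_utility) / (aliens + humans)
print(round(avg, 3))  # ~ -5.0: even eight billion very happy humans barely move it
```

So on an average view, whether any given human life "raises the average" is decided almost entirely by beings we will never interact with.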


Do people who accept the Repugnant Conclusion, also believe in a concrete moral obligation for individuals to strive to have as many children as possible?

Some religions do, but I'd be surprised to find a modern atheist philosopher among them. But if you accept the premise that preventing the existence of a future person is as bad as killing an existing person..


The suppositions of misery - whether impoverished nations or sick children- to me always seem to leave aside an important possibility of improvement.

The nation could discover a rare earth mineral. A medical breakthrough could change the course of the lives of the children. A social habit could change.

In fact, while the last half millennium has been Something Else, and Past Performance Is No Guarantee of Future Returns, it does seem that future improvements are, if not most likely, at least a highly possible outcome that needs consideration.

(Been a while since a post has contained such a density of scissor topics.)


"they decided to burn “long-termism” into the collective consciousness, and they sure succeeded."

If the goal is "one-tenth the penetration of anti-racism" or some such, that at best remains unclear. It's worth dwelling on your identity as an EA + pre-orderer here and realizing that very few media campaigns have ever been targeted so carefully at "people like you." Someone on Facebook asked if anyone could remember a book getting more coverage and I think this response would hold up under investigation:

"Many biographies/autobiographies of powerful people; stuff by Malcolm Gladwell, Ta-Nehisi Coates, Freakonomics, The Secret… worth remembering that this is a rare coincidence where you sit impossibly central in the book's target demo. Like if you were a career ANC member, A Long Walk to Freedom would have been everywhere for you at one point"


Slavery is very much still with us. It is actually legal in several African countries, and de facto legal in several others, as well as in various middle eastern locations. That is to say nothing of the domestic bondage live-in servants are subjected to across much of south-east Asia, and covertly in various places across the U.S. and Europe, as well as the sex traffic. The world is a stubborn and complicated thing, and doesn't work as cleanly as thought experiments and 40,000 foot overviews would suggest.


One possibility to consider is radical value changes.

Past people were very different from us today, and future people will probably be different from present humans. They will look weird.

To prevent radical value changes in the future requires global coordination that we presently don't have.


The Eli Lifland post linked assumes 10% AI x-risk this century.


Informative article. Thank you. I'm gonna steal your paragraph "if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know."


> MacAskill must take Lifland’s side here. Even though long-termism and near-termism are often allied, he must think that there are some important questions where they disagree, questions simple enough that the average person might encounter them in their ordinary life.

I think there's a really simple argument for pushing longtermism that doesn't involve this at all - the default behavior of humanity is so very short-term that pushing in the direction of considering long-term issues is critical.

For example, AI risk. As I've argued before, many AI-risk skeptics have the view that we're decades away from AGI, so we don't need to worry, whereas many AI-safety researchers have the view that we might have as little as a few decades until AGI. Is 30 years "long-term"? Well, in the current view of countries, companies, and most people, it's unimaginably far away for planning. If MacAskill suggesting that we should care about the long-term future gets people to discuss AI-risk, and I think we'd all agree it has, then we're all better off for it.

Ditto seeing how little action climate change receives, for all the attention it gets. And the same for pandemic prevention. It's even worse for nuclear war prevention, or food supply security, which don't even get attention. And to be clear, all of these seem like they are obviously under-resourced with a discount rate of 2%, rather than MacAskill's suggested 0%. I'd argue this is true for the neglected issues even if we were discounting at 5%, where the 30-year future is only worth about a quarter as much as the present - though a 5% rate does weaken the case for economic responses to climate change, like a $500/ton CO2 tax, which I think is probably justified at a more reasonable discount rate.
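The discounting claim is easy to verify with the standard present-value formula (a sketch, not anything from the book):

```python
def present_value(future_value: float, annual_rate: float, years: int) -> float:
    """Standard exponential discounting of a future payoff."""
    return future_value / (1 + annual_rate) ** years

# At a 5% discount rate, welfare 30 years out is worth about a quarter as much.
print(round(present_value(1.0, 0.05, 30), 2))  # 0.23
```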


Dwarkesh Patel has a series of pretty good posts related to unintuitive predictions of growth.


Everyone talks about the Repugnant Conclusion, but nobody talks about the Glorious Conclusion: instead of adding slightly-less-happy people and then equalizing, you can add slightly-more-happy people and then equalize. The second option is obviously better than the first. The obvious end point of this is infinite people who are infinitely happy. So that's the true moral end point of total utilitarianism.

Why does no one talk about this? Because no one believes that you can actually in the real world create people with arbitrarily high happiness. Whereas we actually know how to create people with low levels of happiness.

But then the Repugnant Conclusion depends on having at least some realistic assumptions about what's possible and what's not. Why not go all the way and add all the missing realism?

Creating unhappy people costs money. Money that could have been spent on making existing people happier. This is a tradeoff and it probably has an optimal point that is neither of the two extremes of having only one ultra-happy person or having a quadrillion suicidal people.


A couple of observations I have about the EA movement in general...

It seems to me that those people made rich by a nation or region's hegemon status feel strongly drawn to develop theories of "how the world should be" - how to make things better, or give a better world to our children.

I think it all looks good on the surface. And of course wealth gives us the free time to introspect upon these things. But underneath, I think there's a lot of colonialism in there. It's like the group psyche of the well-off middle-classes seeks to both expunge its own sense of guilt for how hegemon status was achieved, and to reinforce its level of cultural control through developing "improvements" that benefit other races, whilst still preserving hegemony.


Nice list of publications where WWOTF was featured! Let's not forget all the videos.

Kurzgesagt: https://youtu.be/W93XyXHI8Nw

Primer: https://youtu.be/r6sa_fWQB_4

Ali Abdaal: https://youtu.be/Zi5gD9Mh29A

Rational Animations: https://youtu.be/_uV3wP5z51U


It's interesting that towards the end of his career, Derek Parfit embraced Kantianism and tried to prove in his final book that it leads to the same conclusions as utilitarianism. It seems to me that the paradoxes in "Reasons and Persons" should point us in the opposite direction.

Kantians and utilitarians disagree on first-order issues but they start from similar metaethical premises. They think that most moral questions have an objectively correct answer, and that the answer is normally one that identifies some type of duty: either a duty to maximize aggregate well-being, or a duty to respect individual rights.

If you're an evolutionary naturalist you shouldn't believe those things. You should believe that our moral intuitions were shaped by a Darwinian process that maximized selective fitness. This implies that they weren't designed to produce truth-tracking beliefs about what's right or wrong, and it strongly suggests (I don't think it's a logical implication) that there *aren't* any objective truths about right and wrong.

Under those circumstances it's predictable that our intuitions will break down in radically hypothetical situations, like decisions about how many people should exist. Now that human beings have the power to make those decisions, we've got to reach some sort of conclusion. But it would be helpful to start by giving up on ethical objectivism.


This seems a good place to briefly vent about this slightly maddening topic and an atomistic tendency of thought that is in my opinion not helpful in moral reasoning.

For example, these thought experiments about 'neutral children' with 'neutral lives' and no costs or impacts are not getting to the root of any dilemma. Instead, they are stripping away everything that makes population dilemmas true dilemmas.

In actual cases, you have to look at the whole picture not just the principles. Is it better to have a million extra people? Maybe? Is it better to have them if it means razing x acres of rainforest to make room for them? Maybe not? It will rarely be simple. And it won't be simple even if there are 10^whatever of us, either. Will it be better then to expand into the last remaining spiral arm galaxy or will it be better to leave it as a cosmic nature park, or unplundered resource for our even longer term future? Who knows?

I also think a holistic approach exposes a lot of the unduly human-experience-centred thinking that is rife in this whole scene. I think many people care about wild species and even wild landscapes – not just their experience of them, but the existence of them period. Should we therefore endeavour to multiply every species as far as we can to prevent the possibility of their wipeout? No, because all things are trade-offs.

The world is too complicated for singly held principles.


The argument that we should aim to reduce unhappiness rather than maximise happiness has always been more persuasive to me. Happiness is something we can hardly define in real life, but people will certainly squeal when they are unhappy! Plus in negative utilitarianism you get to argue about whether blowing up the world is a good idea or not; which is a much more entertaining discussion than whether we should pack it full of people like sardines.


This stuff is silly and just highlights how the EA people don't understand the fundamental nature of *morality.* Morality doesn't scale - and that's by design. Morals are a set of rules for a particular group of people in a particular time and place. They aren't abstract mathematical rules that apply to everyone, everywhere, at all times and in all places.


Call me a contrarian if you want, but I don't think that I have a 1% chance of affecting the future. I have about a 0.000025% chance of affecting Los Angeles, and that's me being optimistic. Maybe someone like Xi Jinping, who can command the labor of billions, could pull it off; but even then, a whole 1% seems a bit too high, unless he wanted to just destroy the future with nukes. Wholesale destruction aside, the best that even the most powerful dictator can do is gently steer the future, and I doubt that his contribution could rise to a whole percentage point.


I think an extremely important reason to prioritise animal welfare is AI risk. A learning AI would likely base at least some of its learning on our moral intuitions. And we would be pretty close to animals for a super intelligent AI. How we treat animals might affect how AIs treat us!


I guess I’m the first Scottish person to read this, so let me formally object to MacAskill being described as an ‘English writer’, on behalf of our nation


>There are only 10^67 atoms in our lightcone

Meh, I wouldn't give up quite that fast. Sometimes I think about fun schemes to try if the electroweak vacuum turns out to be metastable (which last I heard, it probably is). And there's a chance more stuff might crop up once we crack quantum gravity.

Also, only a 1% chance of affecting the vast future, really? I suspect that's underselling it. Right now, everything from human extinction to a paradise populated by considerably more than a nonillion people looks possible to me, and which one we get probably depends very strongly on actions taken within this century.


>"But the future (hopefully) has more people than the present. MacAskill frames this as: if humanity stays at the same population, but exists for another 500 million years, the future will contain about 50,000,000,000,000,000 (50 quadrillion) people. For some reason he stops there, but we don’t have to: if humanity colonizes the whole Virgo Supercluster and lasts a billion years, there could be as many as 100,000,000,000,000,000,000,000,000,000,000 (100 nonillion) people."

The main threat we face may be the reverse:



Regarding section IV. and Counterfactual Mugging:

You assume that there is no competition for resources (not possible) and that people's happiness is not an interaction (which I think is wrong). Happiness is a relative term, and even that is a 'resource'. If there is one person with happiness 80 and all of a sudden another appears with happiness 100, that 80 may go down to 60 just because the 100 appears. Or it may go up to 90 if they hook up. You are much happier being middle class in Africa surrounded by poorer people than being poor in the US surrounded by richer people.

What I want to say is that simple utility functions don't work except in academic papers or when paying students to switch coffee mugs with pens.


I would love to read somewhere a more detailed analysis of the "drowning child" thought experiment. Is it actually valid to extrapolate from one's moral obligations in unforeseen emergency scenarios, to policies for dealing with recurring, predictable, structural problems? If so, can we show that rigorously? If not, why not?


As I see it, at this point all long-termism debate is about resolving the philosophical issues caused by assuming utilitarianism. It's probably a worthwhile idea to explore this, but I don't understand why this is important in practice at all. Isn't the one main idea behind EA to use utilitarianism as much as possible, but avoid the repugnancies by responding to Pascal's muggings with "no thank you, I'm good"? Practical long-termism looks morally consistent. I think it's barely different from EA-before-long-termism. x-risks are very important because we care about future people, but the future people are conditional not only on us surviving but also on growing as a civilization. The latter is pretty much EA\{x-risks}, so we're just left with finding the optimal resource assignment between survival and growth. I imagine survival has significantly diminishing returns past a certain amount of resources, and even astronomical future-people numbers won't make the expected outcome better.


The future potential people are really not my problem, nor anything I can solve. We definitely want to avoid nuclear war (which remains the biggest threat), but that's in part because it affects us now. Back in 3000 BC they had their own worries and couldn't be expected to also worry about the much richer people of the future. I get that the future might not be richer if technology slows, but there's little the average guy can do about that.


> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

But you cannot rate the worth of different people's lives on a numerical scale, so the whole thing is nonsense from start to finish.


I feel the Repugnant Conclusion is fine as a conclusion if it's seen as a dynamic system, not a static one. If there's a trillion people with the *potential* to build something magical in the future, that's probably better than 5 billion 100-utilon people. It's equivalent (perhaps) to the 17th/18th-century world but much, much bigger (which would help increase progress), compared to a much more stagnant world made up of only the richer parts of the world in 2040.


I didn't preorder the book, mostly because I suspect I've already internalised everything it says, but also because I don't think the philosophical debate over how much we value the future is as interesting or relevant as the practical details.

Regardless of your moral system, if there are concrete examples of things we can do now to avert disaster or cause long-term benefit, I think people will be in favour of doing them - maybe it's a utilitarian obligation, maybe it's just because it seems like the kind of thing a wise and virtuous person would do. The value of future generations maybe factors in when considering trade-offs compared to focusing on present issues but it's a little ridiculous when all the longtermists end up being mostly concerned with things that are likely to happen soon and would be really bad for everyone currently alive.

"We should do more to address climate change", "we should carefully regulate Artificial Intelligence", and "we should invest in pandemic prevention" are all important ideas worthy of being debated in the present on their own merits (obviously not every idea that's suggested will actually help, or be worth the cost), and I think framing them as longtermist issue that require high-level utilitarianism to care about is actively harmful to the longtermist cause.

The best analogy I have is that the longtermists are in a cabin on a ship trying to convince the rest of the crew that the most important thing is to be concerned about people on the potentially vast number of future voyages, then concluding that the best thing we can do is not run into the rocks on the current voyage. The long-term argument feels a little redundant if we think there's a good chance of running aground very soon.


A lot of this “moral mugging” (great term btw) logic reminds me of a trick seasoned traders sometimes play against out-of-college hires. They offer them the following game, asking how much they’ll pay to play:

You flip a coin. If it’s heads you win $2. If it’s tails the game ends.

If you won “round 1” we flip again in “round 2.” If it’s heads, you win $4. If it’s tails, the game ends and you collect $2. In round 3, it’s $8 or you collect $4. Continue until you flip tails.

The expected value of this game is infinite: 1/2 * 2 + 1/4 * 4 + 1/8 * 8 …

Junior traders thus agree to offer the senior ones large sums to play and… always lose. Because there isn’t infinite money (certainly the senior trader doesn’t have it) and if you max out the payment at basically any number the game’s “true” expected value is incredibly low.

The connection here is that strict “moral models” are deeply brittle, relying on narrow and unrealistic assumptions about the world while often ignoring the significance of uncertainty. Following them as actual guides to behavior, as opposed to idle thought experiments, always strikes me as ill-advised and, frankly, often creepy, as such models can be used to justify just about anything…
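The trick is easy to check numerically. This sketch (my own numbers, not the traders'; the cap value is illustrative) computes the exact expected value of the game once you admit the senior trader's bankroll is finite:

```python
import random

def play_once(cap, rng):
    """Play one game of the doubling coin-flip; payout is capped at `cap`."""
    payout, stake = 0, 2
    while rng.random() < 0.5:        # heads: bank the stake, double it, flip again
        payout = min(stake, cap)
        stake *= 2
    return payout

def capped_ev(cap):
    """Exact expected value of the game when payouts are capped at `cap`."""
    ev, k = 0.0, 1
    while 2 ** k <= cap:
        ev += 2 ** k / 2 ** (k + 1)  # each uncapped level contributes exactly 1/2
        k += 1
    return ev + cap * 2 ** (-k)      # every longer streak just pays the cap

# The uncapped sum 1/2 + 1/2 + ... diverges, but a finite bankroll makes the
# "infinite" game cheap: a ~$1M cap is worth only about $10.50.
print(capped_ev(2 ** 20))  # 10.5
```

The capped value grows only logarithmically in the bankroll, which is why the junior traders "always lose" at almost any price they offer.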

Expand full comment

If someone wants me to accept some kind of variation on the repugnant conclusion, all they have to do is go out and find me one person with happiness 50 and fifty people with happiness 1 so I can check them out for myself.

This is, of course, impossible. People blithely throw numbers around as if they mean something, but it's not possible to meaningfully define a scale, let alone measure people against it. And even if you manage to dream up a numerical scale it doesn't mean you can start applying mathematical operations like sums or averages to them; it's as meaningless as taking the dot product of salami and Cleveland.

The bizarre thing is that everybody fully admits that obviously you _can't_ go around actually assigning numbers to these things, but then they immediately forget this fact and go back to making arguments that rely on them.

You can't even meaningfully define the zero point of your scale -- the point at which life is _just_ worth living. And if you can't meaningfully define that, then the whole thing blows apart, because imagine you made a small error and accidentally created a hundred trillion people with happiness of -0.01 instead of creating a hundred trillion people of happiness +0.01.

tldr: ethics based on mathematically meaningless combinations of made-up numbers is stupid and everyone should stop doing it.

Expand full comment

My problem with the Repugnant Conclusion is that its conclusions depend on the worlds it describes being actually possible. There might be certain universal rules that govern all complex systems, including social systems. Although we don't currently know what these could be, I believe they are likely to exist and that the world described in the RC would be thereby forbidden. If this is the case the RC argument is premised on an impossibility, equivalent to starting a mathematical proof with a squared circle, and hence its conclusions have no merit in our reality.

Expand full comment

Per the review, the book seems to take as a given that poorer == less happy, but the country comparison data I've seen suggests that's not true, or at best a wild oversimplification. Does the book flesh out this argument?

In the absence of this, the repugnant conclusion's logic seems difficult to map to reality.

I continue to like utilitarianism, philosophically, but no definition of "quals" maps to the rich diversity of preference and experience in reality.

Expand full comment

Scott — I'm not sure male pattern baldness should count as a "medical condition". Is having eyes of different colours a condition? Is red hair a condition? Baldness is just a physical trait. Many people find baldness attractive (attractive enough that shaving your hair even if you're not bald is a thing). Any badness relating to being bald is socially-constructed and contingent, and I don't think it should be talked about in at all the same category as lung malformations.

Expand full comment

Avoiding counterfactual mugging has much the flavor of Luria's Siberian peasants, and their refusal to engage in hypothetical reasoning.

Expand full comment

The best argument I ever read against the Repugnant Conclusion is the idea of a Parliament of Unborn Souls. It goes like this:

If we imagine the echoes of all possible future humans voting among themselves on who gets to be born, versus lowering their odds of being born in exchange for a better life if they *do* get born — well, the Repugnant Conclusion would be laughed out of the room. The sum total of all possible future people overwhelmingly exceeds the sum total of all people who could possibly be born in the physical world (taking into account all possible sperm/egg combinations). Voting for the Repugnant Conclusion wouldn't meaningfully increase anybody's chances of being born, while it would drastically lower the expected value of life if they do get lucky.

I guess this is an answer to a version of the R.C. that phrases itself in terms of "potential people have moral value and should get to exist", rather than "bringing people into existence has moral value". But the latter as distinct from the former seems, for lack of a better term, bonkers. (I guess this is just Exhibit ∞ in me being a preferentialist and continuing to feel deep down that pure-utilitarians must be weird moon mutants.)

Expand full comment

Just to nitpick: Stalin did not say that "1 million deaths: just a statistic" thing. An unnamed Frenchman said it (allegedly, about 100,000 war deaths) and was quoted by the German leftist/satirical writer Kurt Tucholsky in 1925. Statistics were important to Stalin: when statisticians showed him the population numbers for Ukraine et al. after the famine, he had those numbers classified. And the statisticians executed.

Expand full comment

Sorry if this is well trod territory but I'm no philosopher: Doesn't that Parfit thought experiment about the survival of humanity imply a lot about what his views should be on contraception? If the non-existence of an unborn person is morally equivalent to (or, you know, worth any significant percentage of) murdering a living person, then does he consider abortion murder?

Expand full comment

another example of intransitive preferences: currently people have to work unpleasant jobs and any new automation is a great change. But if you keep adding automations, eventually humans don't have any non-artificial struggle at all and at that point it seems kinda pointless to me.

Expand full comment

Before reading the rest of this, I want to register this bit:

> Is it morally good to add five billion more people with slightly less constant excruciating suffering (happiness -90) to hell? No, this is obviously bad,

My intuition straightforwardly disagreed with this on first read! It is a good thing to add five billion more people with slightly less constant excruciating suffering to hell, conditional on hell being the universe you start with. It is not a good thing to add them to non-hell, for instance such as by adding them to the world we currently live in.

Expand full comment

Long-termism (or even mid-termism) has one huge drawback: the advice giver (the philosopher, activist, or more importantly politician) will not be there when the results of the advice can be judged.

Like all future trade-offs (suffer now for a brighter future), it is inherently scam-like ('A bird in the hand is worth two in the bush' is not meaningless). It's not necessarily a scam, but it needs to be minutely examined, even for short-term advice: is the adviser accountable in some way for the results? And more importantly, is the adviser in a special position where he would profit from the proposal sooner or more than the average guy who is asked to suffer near term in exchange for longer-term benefit? If so, and the adviser does not suffer at least as much in the short term as an average advisee, it is a scam.

I did not always think like that, but these last decades have been a great lesson; the whole western world is soooooo fond of this particular scam, it's everywhere. I guess it taps into a deeply embedded Catholic guilt plus futurism.

Expand full comment

I don't think the Charcoal thing is a good argument. We can only get about 1 watt of energy for industry for each square meter devoted to forest land. When you have a population constrained by the amount of agricultural land and you also need wood for tools, houses, and heating in addition to industry then being limited to the energy you can get from charcoal for industry is going to essentially prohibit an industrial revolution.

Expand full comment

What does “discovering moral truths” mean? Is the author a moral realist?

Expand full comment

Ugh. This was not a good "book review".

I've come to the conclusion that all of the book reviews so far are pretty bad because they are all too long. Past 1,200 (maybe 800) words, the payoff has to keep getting better the longer the piece runs.

This might be reflective of the entire blogging endeavor; there are no newspaper editors telling writers to shorten and tighten it up because space is limited. As a result, the quality is just not high and we'd be better off finding already published reviews.

As I vaguely recall, the poet John Ciardi remarked: that which is written without hard work is usually read without pleasure.

I suggest that distillation is hard work that makes writing better. Could the book reviewers start doing some hard work? Redo all of these and give us your best 1000 words.

D+, revise and resubmit.

Expand full comment

Why do we assume that the morally neutral level of life is objectively defined? I think standards change and depend upon all sorts of things, including the average quality of life across the population. So suppose we consider the blissful planet, with 5 billion people whose happiness is distributed normally with a mean of 100 and variance of 1. Then by their standards, people with happiness of 80 would be far below what they consider normal (20 standard deviations below the average they see around them!), and they would not even consider adding those people to the population. So the trick here is letting us, from our present time, decide whether to add those 80-happiness people. But it is like asking a slave on an ancient subsistence farm to decide whether to add some poor people to the modern population. By the slave's standards, having access to cheap fast food and a limited work week would make them really happy, and they would obviously be happy for more people like this to exist, but in modern reality those added people might feel really unhappy, have loads of stress, and keep themselves from suicide only by moral conviction. Similarly, it is the people of heaven who should decide which people get added to heaven, not us here with our abysmally low standards (compared to theirs).

Expand full comment

I use a discount rate in considering the future. For financial considerations, a discount rate is composed of (1) a risk free return plus (2) a margin for uncertainty of return.

For purposes of considering the future of humanity, I agree that only (2) makes sense. But (2) can be quite small as a % and still make the present value of affecting the distant future quite small. Not because future people are empirically less important than current people, but because of the uncertainty, or contingency, of how we can affect the far off future from investments today.

This line of reasoning leads me to want to focus my own "altruism" on issues affecting the present.
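As an illustration of the comment's point (my numbers, not the commenter's): even a tiny annual margin for uncertainty compounds into an enormous discount over long horizons.

```python
def present_value(future_value, annual_rate, years):
    """Discount a benefit `years` in the future back to today
    at a constant annual discount rate."""
    return future_value / (1 + annual_rate) ** years

# A 0.2%/year uncertainty margin barely dents the next century,
# but makes benefits 10,000 years out essentially weightless.
for years in (100, 1_000, 10_000):
    print(years, present_value(1.0, 0.002, years))
```

At 0.2% per year, $1 of benefit a century out is still worth about $0.82 today, but a millennium out it is worth about $0.14, and ten millennia out effectively nothing, which is why even a very small rate (2) pulls altruism toward the present.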

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I want to preach the benefits of stable state analysis heuristic, which was called Kantian categorical imperative in its previous life:

What would society look like if an action preferred by your ethical theory were a universal societal rule?

The other version put forward by Kant, "treat a person always as an end, never merely as a means to an end", is also useful, though I am less certain of Kant's claim that it is essentially derived from the first principle.

I find it a much more productive way to think about ethics. Now instead of just thinking "imagine a world with 5 billion people in Hell; what if we can magically add 5 billion more people", you have to consider the actions needed to get from world A to world B.

The various repugnant conclusions become much more implausible. The basic version suggests that everyone should have as many kids as possible, because more utils experienced is always better. I don't think that society would be workable, if for no other reason than that there are limits to carrying capacity and the society would eventually collapse. It would also make a society where many other moral imperatives become difficult to follow ("do not knowingly create situations where famines are likely", for instance).

And finally, such calculus also fails by the second criterion, as it views everyone alive at any given moment more as incubators for the next generations of utils (their own experienced utils become overwhelmed by all the potential utils of an arbitrarily large number of future generations).

Naturally the imperatives cannot be exhaustively calculated, but that is just a sign that ethics is an imperfect tool for human life, not that human life is subservient to a method. Hopefully, the rules can be iteratively refined ("get the British government to buy all slaves free if it is possible"). And I think "imperative calculus" would find it good / necessary to help a drowning child / suffering third-world person as long as the method of helping doesn't become dystopian. (Dystopian utilitarianism would allow for "if you can't swim, throw a fat person who can under a tramcar until someone saves the drowning child". I think one of the salient imperatives is: "as many people as possible should learn to swim, help people, and call others to help".)

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

The glass shards example seems to me not to rely on utilitarian or deontological reasoning. It hinges on the observer's emotive reaction, and the conclusion is that one should never do anything that may have consequences we would feel social regret about. The reason to clean up the glass is that we are thinking ethically, and not cleaning it will leave us with doubt and a sense of guilt, whether a child comes along or not. That fits deontology (conforms to our sense of duty), utility (increases our happiness with certainty), intuitionism (it just seems wrong to move on without doing it), and emotivism (it makes us feel better ethically). Utilitarian reasoning would need to rest not just on a chances-of-benefit calculation, but on a cost-benefit calculation (what is the long-term cost of slowing one's journey to clean up the glass, delaying and ultimately precluding some "better" use of the time?).

The argument about technology seems to assume that wealth and happiness are linked in some quantitative fashion.

The argument concerning wiping out almost all people vs. wiping out all people (and precluding the birth of further future people) seems based on treating potential people as people, not future people as people. If we owe a debt to other people by virtue of being social beings, we should consider part of that debt due to people who are actual, except not yet born. But people who will never be born are not future people. To treat them as equally due a debt is a step even beyond absolutist pro-lifers, who consider the person a human being from the moment of physical conception--now these non-people are due full ethical standing from the point of intellectual conception.

The argument about immigration and culture change seems to me to make no sense. There is no reason to think that changing culture leads to less happiness, or is a negative good on other grounds, even if those who resist are called names. The fact that it causes temporary social discomfort doesn't mean it is not superseded by long-term net social good (which is how our national narrative treats the immigrant wave of the late-19th and early 20th century -- whether it's "accurate" or not, it is certainly a plausible interpretation).

Expand full comment

Is the repugnant conclusion just the paradox of the heap? Is there a version with a less vague predicate than happiness?

Expand full comment

The Sophists proved via reductio ad absurdum that philosophy is useless, mere rhetorical word tricks, to obfuscate the truth, which is why they were vilified by Plato and his ilk. Don’t feel bad about disagreeing with philosophers. Whatever was of value was extracted in the form of mathematics or science a long time ago.

As for abolitionism, Bartolomé de las Casas (1484-1566) was way ahead of any Quaker or Quakerism itself.

Expand full comment

> I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice.

As soon as I read this, I had to jump down here to remind everyone about Timeless Decision Theory and Newcomb's problem: https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality

Expand full comment

I believe in The Copenhagen Interpretation of Ethics which says that when you interact with a problem in any way, you can be blamed for it.

That's why I feel responsible to prevent utilitarians inflicting harm on children in my presence, but I'm indifferent about whatever happens 100 years from now.

Après moi, le déluge.

Expand full comment

You link to an article on counterfactual mugging, but what you describe here is not counterfactual mugging at all. Counterfactual mugging is when someone flips a coin, and on heads rewards you if you would have paid a penalty on tails.

Expand full comment

*>tfw forget a book review is written by Scott, not part of Book Review Contest

*I can't decide if it would be incredible or insufferable if Scott hired a publicist. So-And-So, potential author of hypothetical books, currently beta-testing in blog format.

*Octopus factory farming: Sad! Doesn't even taste good compared to (brainless idiot pest species) squid. And that's without factoring in the potential sentience, which really makes my stomach churn on the few occasions I do eat it begrudgingly...

*The Broken Bottle Hypothetical is weird...I feel happy and near-scrupulosity-level-compelled to clean up my own messes. But I harbour a deep resentment for cleaning up the messes of others. It just seems to go against every model of behavioral incentives I have...at some point, "leading by example" becomes "sucker doing others' dirtywork". (Besides that - who *wouldn't* pick up their own litter when out in the wilds? I've never understood that mindset...one doesn't have to be a hippie to have a little basic respect for nature. Also, Real Campers Use Metal, among other things to avoid this exact scenario.)

Like I get the direction the thought experiment is intended to go...but many "broken bottle" behaviours have intrinsic benefits in the here-and-now. Cooking with a gas stove or driving with a gas car are pretty high-utility for the user, even if deleterious on the future. What's the NPV of not picking up broken glass? (Yes, probably making too much hay out of nitpicking a specific example.)

Expand full comment

I'm glad to see Scott share this, even though many in the EA community are uncomfortable criticizing EA in public (I myself am a victim of this: I omitted to rate WWOTF on Goodreads for fear of harming the book's reach).

Simply put, WWOTF is philosophically weak. This would be understandable if the book was aimed at influencing the general public, but for the reasons Scott mentions in this post, WWOTF doesn't offer any actionable takeaways different than default EA positions... and certainly won't be appealing to the general public.

The problem with all this is that WWOTF's public relations campaign is enormously costly. I don't mean all the money spent on promoting the book, but rather, WWOTF is eating all the positive reputational capital EA accumulated over the last decade.

This was it. This was EA's coming-out party. There will not be another positive PR campaign like this.

The problem with this is that the older conception of EA is something most public intellectuals/news readers think very highly of.

Unfortunately, the version of EA that MacAskill puts forward is perceived as noxious by most people (see this review for context: https://jabberwocking.com/ea/ - there are tons like it).

It seems like WWOTF's release and promotion doesn't accomplish anything helpful while causing meaningful reputational harm.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

>If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity,

The philosophers have gotten ahead of you on that one. Surprised you haven't already read it, actually.


It's a proof that any consistent system of utilitarianism must either accept the Repugnant Conclusion ("a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of 'larger'"), the Sadistic Conclusion ("it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of 'larger'"), the Anti-Egalitarian Conclusion ("for any population of some number of people and equal utility among all of those people, there is a population with lower average utility distributed unevenly that is better"), or the Oppression Olympics ("all improvement of people's lives is of zero moral value unless it is improvement of the worst life in existence").

This proof probably has something to do with why those 29 philosophers said the Repugnant Conclusion shouldn't be grounds to disqualify a moral accounting - it is known that no coherent system of utilitarian ethics avoids all unintuitive results, and the RC is one of the more palatable candidates (this is where the "it's not actually as bad as it looks, because by definition low positive welfare is still a life actually worth living, and also in reality people of 0.001 welfare eat more than 1/1000 as much as people of 1 welfare so the result of applying RC-logic in the real world isn't infinitesimal individual welfare" arguments come in).

(Also, the most obvious eye-pecking of "making kids that are below average is wrong" is "if everyone follows this, the human race goes extinct, as for any non-empty population of real people there will be someone below average who shouldn't have been born". You also get the Sadistic Conclusion, because you assigned a non-infinitesimal negative value to creating people with positive welfare.)

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

[disclaimer: I'm dumb and I don't really know anything]

Thanks for this review, a nice summary on some of the core points of EA and long termism.

One question troubles me as for the "the most important thing ever is your choice of career. You should aim to do the maximum good, preferably earning to give / become an influential leader to change policy / becoming a top AI specialist to solve alignment / etc."

These guidelines are explicitly aimed at talented people. I remember 80kh being very open about this in the past; it seems that somewhere along the line they've altered their front page material on it. But obviously these points mostly concern talented people. Most people will not become scientists, high level engineers, leaders or influential activists.

Where does this leave normal people? What should most people do with their time? "Well duh, that which they can best do to advance the greatest good ever." Ok, but what is that for, say, a normie who can learn a profession, but whose profession is relatively boring and doesn't have anything to do with any of the aforementioned noble goals? What is the greatest utility for a person, who is ill-equipped to cognitively even grasp long-termism properly? Or for a person who does get the point, but who has no business becoming [an influential effective altruist ideal]? And so on.

Lacking an answer (granted, I haven't spent very long looking for one), for the time being the advice to look for the most insanely profitably successfully extremely bestest way to increase the number of people alive to [a very high number] seems to me lopsided in favor of very talented people, while simply ignoring most people everywhere. In making EA go mainstream, this might matter - maybe?

Expand full comment

Have we considered that there is a middle ground between "future people matter as much as current people" and "future people don't matter at all"? If you want numbers you can use a function that discounts the value the further in the future it is, just like we do for money or simulations, to account for uncertainty.

I imagine people would argue over what the right discount function should be, but this seems better than the alternative. It also lets us factor in the extent to which we are in a better position to find solutions for our near term problems than for far-future problems.

Expand full comment

I am not sure if I understood the Repugnant Conclusion thing correctly. Is the setting that we are given two alternative universes: 1 with a small population of very happy individuals, and 1 with a very large population of not so happy individuals? And is the issue that most people would rather ACTUALLY LIVE in the first universe, because then they would be happier themselves?

I can also imagine something about scope neglect, I guess. A large population may be very valuable, and each of those people is unique and special, with their own friends and families, hopes, dreams, etc. But intuitively it sure feels like the difference between 1,000,000 and 10,000,000 people isn't so big; after all, either is more people than I could ever imagine interacting with.

Expand full comment

I notice that as soon as we start treating future people as already existing, calculations become messy. Be it anthropic reasoning, which assumes that we are randomly selected from all humans who have ever lived or will ever live, or moral reasoning, which passes the buck of utility to future generations.

I can clearly point to the error in such anthropic reasoning. I'm less certain what's wrong with total utilitarianism. There should be some discounting based on the probability of future humans existing, but it's not just that. I guess it just doesn't fit my moral intuition?

Imagine a situation where I know that all my descendants for the next n generations will have terrible lives. Let's say there is some problem which can't be fixed for many years. But I also know that at some moment humanity will fix this problem, and thus starting from generation n+1, my descendants will have happy lives. Am I thus morally obliged to create as many descendants as possible? Are my descendants of generation k facing an even harder situation: if they decide not to breed, do they retroactively make me and their relatives from the previous k-1 generations terrible people? Eventually, whatever disutility accumulated over the n generations of suffering would be outweighed by the utility of generation n+1 and beyond. But what's the point? Why not just let people without this problem reproduce and have happiness in all the generations to come?

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Am I the only one who thinks B is clearly better than A with regards to nuclear war? More or less the same technological development with 10% of the population, so there's great potential for growth and fewer zero-sum games?

Expand full comment

B->C seems like the more sensible place to get off the repugnant conclusion train than A->B, since that’s the step that actually involves making (some) people worse off.

In your immigration analogy, that corresponds to letting immigrants in but not changing society to accommodate them, which seems much better than not letting immigrants in at all.

Expand full comment

I have a couple of thoughts and I'm not sure which is more likely to start a fight.

1. A sufficiently creative philosopher can construct an ironclad argument for pretty much any conclusion, and which of them you choose is down to your personal aesthetic preferences.

2. The reason abolition of slavery came so late was that for most of human history, being a slave wasn't that bad, relative to being pretty much any other person. Industrialization turned slavery into a practice too reprehensible to survive. Even Aristotle would have looked at the Antebellum South and said hey, that's kinda fucked.

Expand full comment

The issue I always have with ultralarge-potential-future utilitarian arguments is that the Carter Catastrophe argument can be made the same way from the same premises, and that that argument says that the probability of this ultralarge future is proportionately ultrasmall.

Imagine two black boxes (and this will sound very familiar to anyone who has read *Manifold: Time*). Put one red marble in both Box A and Box B. Then, put nine black marbles in Box A and nine hundred ninety-nine black marbles in Box B. Then, shuffle the boxes around so that you don't know which is which, pick a box, and start drawing out marbles at random. And then suppose that the third marble you get is the red marble, after two black ones.

If you were asked, with that information and nothing else, whether the box in front of you was Box A or Box B, you'd probably say 'Box A'. Sure, it's possible to pull the red marble out from 999 black ones after just three tries. It *could* happen. But it's a lot less likely than pulling it out from a box with just 9 black marbles.

The biggest projected future mentioned in this book is the one where humanity colonizes the entire Virgo Cluster, and has a total population of 100 nonillion over the course of its entire history. By comparison, roughly 100 billion human beings have ever lived. If the Virgo Cluster future is in fact our actual future, then only 1 thousand billion billionth of all the humans across history have been born yet. But, the odds of me being in the first thousand billion billionth of humanity are somewhere on the order of a thousand billion billion to one against. The larger the proposed future, the earlier in its history we'd have to be, and the less likely we would declare that a priori.

If every human who ever lived or ever will live said "I am not in the first 0.01% of humans to be born", 99.99% of them would be right. If we're going by Bayesian reasoning, that's an awfully strong prior to overcome.
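The marble-box update is just Bayes' rule: with the boxes shuffled, the red marble is equally likely to sit at any position, so seeing it on draw three has likelihood 1/10 under Box A and 1/1000 under Box B. A quick sketch (box sizes from the comment; the even prior is my assumption):

```python
from fractions import Fraction

def posterior_box_a(marbles_a=10, marbles_b=1000, prior_a=Fraction(1, 2)):
    """P(Box A | the red marble appeared at some fixed draw position).
    In a shuffled box of N marbles, P(red at position k) = 1/N for any k <= N."""
    like_a = Fraction(1, marbles_a)
    like_b = Fraction(1, marbles_b)
    joint_a = prior_a * like_a
    return joint_a / (joint_a + (1 - prior_a) * like_b)

print(posterior_box_a())  # 100/101, about 99% in favour of the small box
```

The posterior odds equal the ratio of box sizes (1000:10 = 100:1 for Box A), which is the Doomsday-argument structure: the larger the proposed total population, the less likely it is that we'd find ourselves this early in the draw.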

Expand full comment

As minor as quibbles can be, but:

"each new person is happy to exist but doesn’t make anyone else worse off."

Is there a reason this is a "but" instead of an "and"? As if people being happy usually make others worse off?

Expand full comment

I've linked to Huemer's In Defence of Repugnance in the comments to another post, but it's so on-topic it makes sense to do so here:


As I noted there, Huemer is not a utilitarian but instead a follower of Thomas Reid's philosophy of "common sense".

There really doesn't seem to be any reason to believe it's "neutral to slightly bad to create new people whose lives are positive but below average", which would cause the birth of a child to become bad if some utility monster became extremely happy.

Expand full comment

A post wherein Scott outs himself as one of those people who choose only box B in Newcomb's paradox.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

I haven't seriously struggled with repugnant conclusion style arguments before (Mostly I've decided to ignore them to avoid the aforementioned mugging effect), so what I'm about to write is probably old hat. Still, I'd like to hear people's thoughts.

What if you have the following options:

A) 5 billion people, today, at 100% happiness, then the universe ends

B) 10 billion people, today, at 95% happiness, then the universe ends

C) 5 billion people, today, at 97% happiness followed by another 5 billion people at 97% happiness 50 years later, then the universe ends

I think most people would agree that option C is better than option B. If we're thinking in bizarre, long-termist views anyway, there is likely some sustainable equilibrium level of population such that you can generate 100% happiness for an arbitrary number of person-years. You just might have to have fewer people and wait more years. So let's... do that, instead of mugging ourselves into a Malthusian hellscape.

If you object that the lifetime of the universe is finite, and so the number of person-years in the above scenario is not arbitrarily high, I would respond with something along the lines of "Yeah, sure, but if humanity survives until the heat death of the universe, I'm pretty sure the people alive at that time won't be bummed out that we didn't maximize humanity's total utility. They won't be cursing their ancestors for not having more children. It's not like they'd decide that maximizing total utility was the meaning of life and we fucked it up all along."

Expand full comment

- There are only 10^67 atoms in our lightcone; even if we converted all of them into consumer goods, we couldn’t become 10^86 times richer.

Warning: rambling.

Most of the value in the modern economy does not come from extracting resources, but rather from turning those resources into more valuable things. The raw materials for an iPhone are worth ~$1, whereas the product has 1000x that value. There is probably a limit to how much value we can get out of a single atom, but I think we can still get a better multiplier than 1000x!

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

Sorry for the nit-picking, but the below doesn't follow from the link:

"Octopi seem unusually smart and thoughtful for animals, some people have just barely started factory farming them in horrible painful ways, and probably there aren’t enough entrenched interests here to resist an effort to stop this."

Link just says "there are no released standards for how the octopuses are going to be kept and raised, nor for how they will be slaughtered."

Now maybe the Spanish octopus farmers will do horrible, Snidely Whiplash moustache-twirling, evil octopus farming. Or maybe they will be constrained under EU animal welfare standards. It's no skin off my nose either way, because I've never eaten octopus and have no intention of ever doing so. But this is what is annoying: trying to force us to accept the conclusion that *of course* it will be 'horrible painful ways' because eating meat (do octopi count as fish?) is evil and wicked and immoral, and factory farming is evil and wicked and immoral, and fish farming is factory farming hence is evil and wicked and immoral.

I don't know how smart octopi are, they seem to be smart in some way, and probably smarter than a cow (a low bar). But here's the thing: I am not yet convinced eating octopi is morally evil. And I know dang well that it's not just the octopus farming this campaign would like to stop, it's fishing for wild octopus and eating them at all.

Let's wait and see if the wicked, bull-fighting, blood-thirsty Spaniards *are* going to torture sweet, cute, innocent, smart, octopi to death before we start calling for the war crimes tribunal, hmmm?

EDIT: And if the "scientists and conservationists" are so outraged about the intelligent octopi, then surely Ms. Tonkins should quit her job at Bristol Aquarium, rather than being complicit in the enslavement of these intelligent and sentient beings? Did any of the octopi consent to being captured and imprisoned in tanks for humans to gawk at? Liberate all those incarcerated octopi into the wild and take the beam out of your own eye first!

Also, how moral are octopi themselves if experts fear "if there was more than one octopus in a tank - experts say they could start to eat each other". That suggests the greatest threat to an octopus is another octopus, not a human.

Expand full comment

If you support the notion of impartiality and accept the concept of intelligence explosions, doesn't this take the oomph out of human-centric long-termism?

Aren't there almost certainly other life forms in the universe that will experience intelligence explosions, making whatever happens in our story irrelevant?

Who cares if we can't interact with the regions of space where they are located, as long as they are experiencing lots of positive utils.

Expand full comment

> I realize this is “anti-intellectual” and “defeating the entire point of philosophy”. If you want to complain, you can find me in World A, along with my 4,999,999,999 blissfully happy friends.

The philosopher Massimo Pigliucci on the Rationally Speaking Podcast did something like this once when he was confronted with the vegan troll bit about bestiality. You're against bestiality right? Because it's bad to sexually assault animals? Well, if you think that's bad, then you must definitely be against eating them.

He retorted that he just didn't feel it necessary to be morally consistent. 🤯

Expand full comment

> Is this just Pascalian reasoning, where you name a prize so big that it overwhelms any potential discussion of how likely it is that you can really get the prize? MacAskill carefully avoids doing this explicitly, so much so that he (unconvincingly) denies being a utilitarian at all. Is he doing it implicitly? I think he would make an argument something like Gregory Lewis’ Most Small Probabilities Aren’t Pascalian. This isn’t about an 0.000001% chance of affecting 50 quadrillion people. It’s more like a 1% chance of affecting them. It’s not automatically Pascalian reasoning every time you’re dealing with a high-stakes situation!

Whenever I hear things like "What We Owe The Future" and "decillions of future humans", I think "ah, the future is a utility monster that we mere 7 billion humans should sacrifice everything to".

The utility monster is a critique of utilitarianism.

Suppose everyone gets about one unit of pleasure from resource unit X. But there exists a person who gets ten billion units of pleasure from unit X. As a utilitarian you should give everything to that person because it would optimize global pleasure.

In this case, the future is the utility monster because there are so many potential humans to pleasure with existence. Spending any resources on ourselves instead of the future is squandering them. We are the 1%. But actually we are the 0.000001%

Expand full comment

What concerns me about this concept, at least as it has been presented by my peers who are into long-termism, is the accuracy of their predictions. Your actions now have some moral consequence down the line. My question is, how accurate therefore are your predictions, spanning long into the future, that your very rational utilitarian decisions will actually lead to positive outcomes and not negative? We are pretty darn bad at even near-term predictions (see Michael Huemer on the experts and predictions problem); so making an explicit statement to live your life in some particular way, because you are confident in your predictions about how your life will impact humanity and the universe eons into the future, just seems silly. In fact, it seems worse than silly; it seems like a load of hubris that is just as likely to be harmful down the line as good, but we will all be dead and no one can call you on it when the consequences occur. Conversely, we are all alive now and have to hear from its practitioners how very moral and virtuous long-termism is.

Expand full comment

This is your regular reminder that nuclear weapons are not an existential risk and never have been, nuclear winter is mostly made up, and we have the technology to build missile defense systems that would make the results of a nuclear war much less bad (although still bad enough that people will want to avoid having one).



Expand full comment

Just a few paragraphs in, and I'm thinking to myself "Thank you for reading and reviewing this book, so now I need not waste my time on it." That, in itself, raises this review several positions in the ranking of reviews so far!

Expand full comment

I still think that naively adding hedons - or utils - or whatever you call them nowadays is not the right approach.

Thought experiment: let’s say that you are pretty happy, and worth 80 "happiness". Now I participate in an experiment where I’m put to sleep, get cloned n times, and me and my clones are put in identical rooms where we can each enjoy a book of our choosing after waking up. Under classic utilitarianism, the experiment has created 80*n "happiness". Which sounds wrong to me: as long as me and my clones are identical, no happiness has really been created; identical clones have no moral value. Generalizing this, addition of happiness should discount for similarity with other existing individuals.

Expand full comment

I still don't find the repugnant conclusion repugnant, or even surprising. Either a certain level of existence is better than nonexistence, or it isn't. If it's better, let's get more existence!

I think a lot of people have two thresholds in mind: there's the level of existence at which point it's worth creating a new life, and there's a separate, lower one at which point it's worth ending an existing life. But then it's just treating existing lives differently from potential ones.

The biggest objection, to me, is one I never see people raise, and that's the obligation to have more kids. I only have one, and might have a second, but I easily could have 4 by now, and could probably support much more than that at a reasonably high standard of living, so if I really buy the repugnant conclusion, I should be doing that. But I don't, so update your priors accordingly.

Expand full comment

Thanks so much for highlighting my interview of him Scott!

Expand full comment

With regard to the Repugnant Conclusion, I think that one way out is that the weighting of factors determining the utility is somewhat arbitrary, so one can move the zero line to what one considers an acceptable standard of living.

Suppose I assign -1000 for the lack of access to any of: clean water, adequate food or housing, education, recreation, nature, potential for fulfillment etc. Now adding people with about zero net utils does not seem too bad. In fact, not adding them just to preserve a few utils for the preexisting population would feel wrong -- like hypothetical billionaires (or Asimov's Solarians) preferring to keep giant estates which could otherwise be suburban districts providing decent living for millions.

What life is considered worth living is very dependent on the society. I gather that ancient Mesopotamians probably did not consider either freedom of speech or antibiotics essential, given that they had (to my knowledge) neither concept. For most people living in the middle ages, Famine, War and Pestilence were immutable facts of life along with Death. From a modern western point of view, at least two of the horsemen are clearly unacceptable and we work hard to fight the third one. EY's Super Happy People would consider a life containing any involuntary suffering to be morally abhorrent. Perhaps after we fix death, only supervillains would even contemplate creating a sentient being doomed to die.

Of course, this also seems to contradict "Can we solve this by saying that it’s not morally good to create new happy people unless their lives are above a certain quality threshold? No."


Also, I get a strong vibe of "Arguments? You can prove anything with arguments." ( https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/ ) here from Scott with regard to philosophical muggings.


Finally, in long term thinking, extinction is hardly the worst case. The worst case would be that, due to value misalignment, some future being would turn the inner part of the light cone going from Sol, 2022 CE into sufferonium -- turning the reachable universe into sentient beings which have negative utility according to our values.

Expand full comment

“As far as anyone can tell, the first abolitionist was Benjamin Lay (1682 - 1759), a hunchbacked Quaker dwarf who lived in a cave. He convinced some of his fellow Quakers...”

Now this is just not true. Slavery was largely abolished in mediaeval Europe. And often by Catholics. And the invaders of Britain, the Normans, ended it there. However the Normans are looked at with hostility, as is Catholicism in Anglo historiography

Expand full comment

There are three things that grate on me in this review (or, maybe, in the book as well, I am yet to read the book). All three have to do with exponentials.

1. The hockey stick chart with world economic growth does not prove that we live in an exceptional time. Indeed, if you take a chart of a simple exponential function y=exp(A*x) between 0 and T, then for any T you can find a value of A such that the chart looks just like that. And yet there is nothing special about that or another value of T.

2. I do not see why economic growth is limited by the number of atoms in the universe. It looks to me similar to thinking in 1800 that economic growth is limited by the number of horses. We are already well past the time when most of economic value was generated by tons of steel and megawatts of electricity. Most (90%) of book value in S&P500 is already intangible, i.e. not coming from any physical objects but from abstract things such as ideas and knowledge. I do not see why the quantity of ideas or their value relative to other ideas would be limited by the number of atoms in the universe. If anything, I could see an argument for why there is a growth limit at the number of subsets of such atoms, which is much larger (it is 2^[number of atoms]) and, at our paltry rates of economic growth, is large enough to last us until the heat death of the universe.

3. All these pictures with figures of future people are relevant only in the absence of discounting, aka the value of time. I do not know if the book ignores this issue, but you do not mention it at all in the review. Any calculation comparing payoffs at different times has to make these payoffs somehow commensurate. That's a pretty basic feature of any financial analysis and I am not sure why it would be absent in utility analysis. When we are comparing a benefit of $10 in 10 years time to a current cost of $1, it makes no sense to simply take the difference $10-$1. We should divide the benefit by at least the inflation discount factor exp(-[inflation rate]*10). If we have an option to invest $1 today in some stocks, we should additionally multiply by exp(-[real equity growth rate]*10). When our ability to predict future results of our actions decays with time horizon, we should add another exponential factor. This kind of discounting removes a lot of paradoxes and also kills a lot of long-termist conclusions.

This argument gets a bit fuzzier if we deal with utilities and not with actual money, but if the annual increase of uncertainty is higher than the annual population growth rate then the utility of all future generations is actually finite even for an infinite number of exponentially growing generations. So not all small probabilities are Pascalian, but ones deriving from events far in the future definitely are! I do not know if this is discussed in the book, but any long-termism discussion seems pretty pointless without it.
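A minimal sketch of that last point, with made-up illustrative rates: when uncertainty grows faster than utility, the sum of discounted future utility converges to a finite number even over an infinite horizon.

```python
import math

g = 0.01   # assumed annual growth rate of total utility (illustrative)
r = 0.03   # assumed annual growth rate of uncertainty (illustrative)

def discounted_utility(t):
    # utility grows as exp(g*t), confidence decays as exp(-r*t)
    return math.exp((g - r) * t)

# Partial sums over longer and longer horizons barely change:
for horizon in (100, 1000, 10000):
    print(horizon, round(sum(discounted_utility(t) for t in range(horizon)), 2))

# The infinite sum is a geometric series with ratio exp(g - r) < 1:
closed_form = 1 / (1 - math.exp(g - r))
print(round(closed_form, 2))   # ~50.5: the whole future is worth ~50 present-years
```

With these (arbitrary) rates, all future generations combined are worth about fifty years of the present, which is why discounting kills most long-termist infinities.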

Expand full comment

Your comment about slavery going away seems to be false, in that there are credible estimates that there are more slaves today than ever:


Expand full comment

For a good introduction to population ethics (surveying the major options), see: https://www.utilitarianism.net/population-ethics

One thing worth flagging is that MacAskill's book neglects the possibility of parity (or "value blur", as we call it in the section on Critical Range theories, above), which can help block some of the more extreme philosophical arguments (though, as we note, there's no way to capture every common intuition here).

Expand full comment

I'm pretty sure most ACX readers would agree that humans cannot psychologically comprehend the differences between very large numbers, and that this causes a lot of unnecessary suffering. Therefore, I find it very confusing and epistemically tenuous that the repugnant conclusion, which involves human intuitions about exceptionally large numbers that we know are completely unreliable, is used to reject principles like "more flourishing is good" and "less suffering is bad".

Expand full comment

Now not nitpicking: Erik Hoel has his fine take on the book out. https://erikhoel.substack.com/p/we-owe-the-future-but-why He offers some help - i.e. arguments - against the 'mugging' ;) - not just flat out refusing the "repugnant conclusion" (as Scott seems to do) - In the comment section at Hoel's I liked Mark Baker's comment a lot: "The fundamental error in utilitarianism, and in EA it seems from your description of it, is that it conflates suffering with evil. Suffering is not evil. Suffering is an inherent feature of life. Suffering is information. Without suffering we would all die very quickly, probably by forgetting to eat.

Causing suffering is evil, because it is cruelty.

Ignoring preventable suffering is evil because it is indifference.

But setting yourself up to run the world and dictate how everyone else should live because you believe that you have the calculus to mathematically minimize suffering is also evil because it is tyranny.

Holocausts are evil because they are cruel. Stubbed toes are not evil because they are information. (Put your shoes on!)" - end of quote -

If you read Scott's post first, good for you: Hoel writes less about the book and how the "repugnant conclusion" is reached. But he had a long, strong post "versus utilitarianism" just last week, so his review is more kind of a follow-up.

I really do like a lot about EA, and strongly dislike "IA". But I agree with Hoel: "All to say: while in-practice the EA movement gets up to a lot of good and generally promotes good causes, its leaders should stop flirting with sophomoric literalisms like “If human civilization were destroyed but replaced by AIs there would be more of them so the human genocide would be a bad thing only if the wrong values got locked-in.” - end of quote

Expand full comment

Nice review. Definitely some interesting thoughts.

If you recall, I thought that your population article was mistaken because it wasn't accurately weighing potential people. [1] You replied (which I appreciate) to say that you reject the Repugnant Conclusion. You said "I am equally happy with any sized human civilization large enough to be interesting and do cool stuff. Or, if I'm not, I will never admit my scaling function, lest you trap me in some kind of paradox." I wrote an article responding to the article, and critiqued possible scaling functions [2].

"If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average. This sort of implies that very poor people shouldn’t have kids, but I’m happy to shrug this off by saying it’s a very minor sin and the joy that the child brings the parents more than compensates for the harm against abstract utility. This series of commitments feels basically right to me and I think it prevents muggings."

Some implications of this view:

1. If no people existed, the average would be 0. In which case, you would have the Repugnant Conclusion again.

2. If we set the average value given existing people, it's better to create 1 ever-so-slightly above average person than tons of ever-so-slightly below average people, even if they fully believe their lives are good and worth living.

3. Since the critical value is a function rather than fixed, it will change with the present population. This means that someone who was evaluated as good to produce could later be bad without any aspect of their life changing. While creating a human in 1600 could be regarded as morally good then, it's likely that tons of those lives were below average for 2022 standards. This seems to create odd conclusions similar to asking the child their age after they were cut by the broken bottle.

4. The goodness or badness of having a child is heavily dependent on the existence of "persons" on other planets. If these persons have incredibly good lives, it might be immoral to have any humans. If these persons have incredibly bad lives, it might result in something like the repugnant conclusion because they are below 0 and drag the average down to almost zero if they are numerous enough. If you consider animals "persons", then you could argue they suffer so much and are so numerous that the average is below zero.

5. It would be better (but not good) to introduce millions of tormented people into the world rather than a sufficiently larger number of slightly below average people.

6. Imagine we had population A with 101% average utility and a very large population B with 200% average utility which changes the average to 110%. One population is created 1 second before the other. If A comes first then B, it's good to have A. If B comes first, then A, it's bad to have A. The mere 1 second delay creates a very different decision, but practically the exact same world. This seems odd from a perspective where only the consequences matter.

[1] https://astralcodexten.substack.com/p/slightly-against-underpopulation/comment/8159506

[2] https://parrhesia.substack.com/p/in-favor-of-underpopulation-worries

Expand full comment

>> Suppose that some catastrophe “merely” kills 99% of humans. Could the rest survive and rebuild civilization? MacAskill thinks yes, partly because of the indomitable human spirit

Oh no. [survivorship bias airplane.jpg]

Expand full comment

Another scenario:

Suppose god offers you the option to flip a coin. If it comes up heads, the future contains N times as many people as the counterfactual future where you don't flip the coin. (Average happiness remains the same.) If it comes up tails, humanity goes extinct this year. A total utilitarian expectation maximizer would have to flip the coin for any value of N over 2. But I think it is very bad to flip the coin for almost any value of N.

Professional gamblers like me act so as to maximize the expectation of the logarithm of their bankroll, because this is how you avoid going broke and maximize the long-term growth rate of your bankroll. The Kelly criterion is derived from logarithmic utility.

Would it make any sense to use a logarithmic utility function in population ethics? This could:

1. Avoid the extinction coinflip mugging

2. Avoid the repugnant conclusion, because there aren't enough atoms in our lightcone to make enough people that the logarithm of the population, multiplied by a low average happiness, beats the utility of 5 billion happy people.

On the downside it implies you should kill half the population if it will make the remaining people modestly happier.
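The coinflip is easy to check numerically (a sketch with illustrative numbers; extinction is modeled as a single survivor so that the logarithm stays finite):

```python
import math

P = 8e9   # assumed current population

def flip_is_good(N, utility):
    """Is the expected utility of flipping better than not flipping?"""
    expected_flip = 0.5 * utility(N * P) + 0.5 * utility(1)  # tails: one survivor
    return expected_flip > utility(P)

linear = lambda pop: pop
log = lambda pop: math.log(pop)

print(flip_is_good(3, linear))   # True: linear utility takes the bet for any N > 2
print(flip_is_good(3, log))      # False: log utility refuses
print(flip_is_good(2 * P, log))  # True: log utility only bets once N exceeds ~P itself
```

With logarithmic utility the break-even point is roughly N > P, i.e. god would have to multiply the future by more than the entire current population before the flip looks good.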

Expand full comment

Scott, I would imagine that you - like me - are deeply dissatisfied with simply walking away from the moralist who has made an argument for why you should get your eyes pecked out. It seems to me like you’re essentially saying “you fool! Your abstract rules of logic don’t bind my actions!” - and with this statement the entire rationalist endeavor to build a society that privileges logical argument goes out the window.

Is that a fair summary, or is there a deeper justification in the article I missed?

I’ll take a stab at providing one: the common conception of morality encompasses many different systems, and these sorts of arguments confuse them.

System 1: moral intuitions. These can be understood as a cognate of disgust; they are essentially emotional responses that tell us “you can’t do this, it’s beyond the pale”.

System 2: modeling and reasoning about system 1 (moral intuitions). This is the domain of psychology, and involves experiments to figure out exactly what triggers moral intuitions.

System 3: systemic morality. The attempt to construct rules for action that avoid triggering moral intuitions, and that perhaps maximally trigger some sort of inverse emotion (moral righteousness? Mathematical elegance?). This is the realm of philosophers, with arguments about deontology and utilitarianism. “Mathematics of morality”

The fundamental problem of systemic morality is that our moral intuitions are too complex to model with a logical system. This is pitted against our strong desire to create such a system for many reasons - for its elegance, its righteousness, and for the foundation of society that it could be if it existed.

To bring this idea into focus, imagine another philosophical mugging - but this time plausible. You’ve just left an ice cream shop with your children when a philosopher jumps out of a bush and tells you “I have an argument that will make you hand over your ice cream to me.” You of course object - you’ve just paid for it, and it looks so good - but he says a few words and you hand it over.

What did he say? He walked you through the statistics on contaminants in cream, sugar, and the berries that were likely used to make your ice cream. Then he went into the statistics on worker hygiene and workplace cleanliness, as well as the violation the ice cream shop received two years ago. When he started talking about the health problems caused by sugar and saturated fats you suddenly found you weren’t excited about the ice cream anymore and you handed it over.

Does this mean people shouldn’t eat ice cream? Yeah, it kinda does. But it doesn’t pose any serious philosophical problems for us because we’re not foolish enough to try to systemize our disgust triggers into systems of behavior that we should follow. We can simply recognize the countervailing forces within ourselves, say “I am manifold”, and move on.

I’m not advocating that we should stop trying to systematize our moral intuitions and make them legible within society. Rather I think we should stop expecting these moral systems to work at the extreme margins. They’re deliberately-oversimplified models of something that is extremely complex. We can note where they break down (I.e. diverge from the ground truth of our intuitions) and avoid using them in those situations.

Expand full comment

I am a bit skeptical about the well-definedness of the GDP across the gulf of millennia. How do you inflation-adjust between economies so different? I assume that you pick some principal trade goods existing in both economies (e.g. grain) as a baseline. Grain (or the like) was a big part of the economy in 1 CE and is today (Ukraine notwithstanding) not a big deal in the grand scheme of things in the western world: yearly grain production on the order of 2e9 metric tons, times 211 US$ per ton equals some 5e11 US$, about 6/1000 of the world GDP of 84e12 US$.

In ancient times, the median day-wage worker may have earned enough grain to keep them alive for a day or two. Today, by spending 10% of the median US income, you could take a bath in 80kg of fresh grain every other day if you were so inclined.

In fact we should be able to push our GDP advantage over the Roman Empire much further by just spending a few percents of our GDP to subsidize grain or flood the market with cheap low-quality iron nobody wants. Probably a good thing that we do not have intertemporal trade.

Thus, I am not particularly concerned about the GDP being limited by the number of atoms in our light cone (which only grows quadratically). A flagship phone from 2022 worth 800 US$ does not contain more atoms (rare earth elements and the like) than a flagship phone from 2017 worth perhaps 150 US$. The fact that a phone built 100 years from now (if that trend continued) might be worth more than our present global GDP (if we established value equivalence using a series of phone generations) does not bother me, nor the fact that a phone built in 3022 CE might surpass our GDP by 10^whatever. Arbitrary quantities grow at arbitrary speed, film at 11.
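A quick check of the grain arithmetic above (same figures as in the comment):

```python
grain_tons = 2e9      # rough annual world grain production, metric tons
usd_per_ton = 211     # grain price used above
world_gdp = 84e12     # world GDP figure used above

grain_value = grain_tons * usd_per_ton
print(f"{grain_value:.2e}")                 # ~4.22e11 USD, i.e. "some 5e11"
print(round(grain_value / world_gdp, 4))    # ~0.005, i.e. roughly 5-6 per thousand
```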

Expand full comment

> When we build nuclear waste repositories, we try to build ones that won’t crack in ten thousand years and give our distant descendants weird cancers.

I realize this is seriously discussed by experts, but I'm wondering how it makes sense. It seems like if nuclear waste lasts ten thousand years then it must have a very long half life, so it can't be very radioactive at all?

There's gotta be a flaw in this argument, but I don't know enough about radioactivity to say what it is.
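For what it's worth, the inverse relation being reached for here is real: for a fixed number of atoms, activity equals ln(2)/t_half times N, so a long half-life does mean low activity per atom. A rough sketch (standard half-life values; the usual resolution of the puzzle is that waste is a mixture, so short-lived isotopes dominate the early activity while long-lived ones set the ten-thousand-year containment horizon):

```python
import math

SECONDS_PER_YEAR = 3.156e7
AVOGADRO = 6.022e23

def activity_per_mole(half_life_years):
    """Decays per second from one mole of a pure isotope: A = lambda * N."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return decay_constant * AVOGADRO

# Standard half-lives: Cs-137 ~30 y, Pu-239 ~24,000 y
for name, t_half in [("Cs-137", 30), ("Pu-239", 24_000)]:
    print(name, f"{activity_per_mole(t_half):.2e}")

# Per atom, Cs-137 is 24000/30 = 800x more active than Pu-239,
# exactly the inverse of the half-life ratio.
```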

Expand full comment

If I understand correctly, the (overly simplistic version) of the Repugnant Conclusion works like this:

Define utility function U = N * H, where N is number of people and H is happiness. Calculate U for a world A with 1 trillion people with happiness 1 (A = 10^12 people*happiness), and a world B with 1 billion people with happiness 100 (B = 10^11 people*happiness). This leads to the conclusion that an overcrowded, unhappy world is better than a less crowded happy one (A > B), the “Repugnant Conclusion.” Thus, we must either throw out the axioms of utilitarianism or accept the slum world.

This seems like a terrible argument to me, especially this part: "MacAskill concludes that there’s no solution besides agreeing to create as many people as possible even though they will all have happiness 0.001." Why is the utility function linear? This "proof" relies on linearity in N and H, which are NOT axiomatic.

You could easily come to a much less repugnant conclusion by defining something nonlinear. For example, let’s say we want utility to still be linear in happiness but penalize overcrowding. Define U = H * N * exp(-N^2/C), where C is some constant. Now the utility function has a nice peak at some number of people. In fact, we can change U to match our intuition of what a better world would look like.
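A minimal sketch of that proposal (C = 10^20 is an arbitrary crowding constant, chosen purely for illustration so the peak lands near 7 billion people):

```python
import math

def linear_utility(n, h):
    # the "repugnant" version: U = N * H
    return n * h

def crowding_utility(n, h, C=1e20):
    # U = H * N * exp(-N^2 / C): still linear in happiness, penalizes overcrowding
    return h * n * math.exp(-n**2 / C)

A = (1e12, 1)    # world A: 1 trillion people at happiness 1
B = (1e9, 100)   # world B: 1 billion people at happiness 100

print(linear_utility(*A) > linear_utility(*B))      # True: the Repugnant Conclusion
print(crowding_utility(*A) > crowding_utility(*B))  # False: the penalty reverses it
```

This utility function peaks at N = sqrt(C/2), about 7.07 billion people for C = 10^20; a different choice of C moves the peak wherever intuition says it belongs.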

Expand full comment

Cool article, thanks scott-

Expand full comment

"fighting climate change ... building robust international institutions that avoid war and enable good governance."

MacAskill takes it for granted that these are good things to do, but he might be wrong. Climate change could make us worse off in the long run — or better off. Present global temperatures are high relative to the past few thousand years, low relative to the past few hundred million. Robust international institutions might avoid war. They might also prevent beneficial competition among national institutions and so lock us into global stasis.

To make the point more generally, MacAskill seems, judged by the review, to ignore the very serious knowledge problems with deciding what policies will have good effects in the distant future.

Expand full comment

The Old Testament placed limits on slavery, and the Church increasingly limited it for 1500 years - basically until the money wasn't just good, but suddenly amazingly good and more than half of everyone threw their principles in the ocean, overruling the others. The Quakers deserve a lot of credit, but not all of it.

Expand full comment

Another Phil101 class junior high level question:

If the supposed, much larger future population is capable of stability at least comparable to that of today - which it should be, in order for us to consider aiming to bring it about - wouldn't it be possible or even likely that the exact same longtermism would apply to those people, forcing them to discount their own preferences in order to maximize the utility of a much larger civilization in their far future? If their numbers add up to a rounding error in comparison with the much larger^2 population, it might follow that those people should sacrifice their utility in order to bring about the far future.

And as for the much larger^2 population's longtermist views...

Expand full comment

I think a major point long-termism misses is risk. We discount the future (as in using a discount rate to say how much less we value future money or utility) because we ultimately don’t know what’s going to happen between now and then. A meteor could hit the earth, and then all our fervent long-term investments turned out to be pointless. Or all the other scenarios you could imagine. So the future is worth less than the present, and we should prioritize accordingly. As a rule of thumb, infinite happy people infinitely far in the future don’t matter. That’s not to say we shouldn’t invest in the future, just that we weigh that against a more immediate and certain present.

Practically, this also aligns with Scott’s point that most of the time improving the future is pretty similar to improving the present. Maybe some time soon we can stop torturing ourselves with future infinities and just get back to making things better.

Expand full comment

> Can we solve this by saying you can only create new people if they’re at least as happy as existing people - ie if they raise the average? No. In another mugging, MacAskill proves that if you accept this, then you must accept that it is sometimes better to create suffering people (ie people being tortured whose lives are actively worse than not existing at all) than happy people.

But that's the same as saying that "it's worth suffering in the fight for others' right to die" is problematic in the "zero is almost suicidal" case - the quality threshold just shifts the meaning of zero. If the conclusion is repugnant, then on some scale it's worth creating suffering to avoid it.

Expand full comment

The coal issue seems like a silly distraction. Imagine we evolved on a planet exactly like Earth except there was no coal anywhere. Do you think humanity would stagnate forever at pre-Industrial Revolution technology? A billion years after the emergence of Homo sapiens we're still messing around with muskets and whale oil lamps because we lacked an energy-dense rock to dig out of the ground? Things would surely go slower without coal, but if you're taking a "longtermist" view it seems silly to worry about civilization taking a little longer to rebuild.

Expand full comment

For more on the long history of abolition, The Dawn of Everything [reviewed in the book review contest!] talks about prehistoric California tribes who lived immediately next to each other, some of whom appeared to own slaves and some of whom refused. Oppressing your fellow humans? Refusing to oppress your fellow humans? It's been going on for as long as there have been humans.

And abolition is not a clean line: slave labor still happens in the US, we just call it "prison labor" and look the other way.

As the US has the highest carceral population BY FAR [and, uhh, spoiler: we're not any "safer"...] along with a shocking rise in pre-trial holds since 2000, that seems like the most important "near term" cultural fix on the scale of abolishing slavery. Abolish the carceral state! And if that seems crazy to you, recall that the DOJ's own studies show that prison is not a crime deterrent and imprisoning people likely makes them re-offend more frequently: https://www.ojp.gov/ncjrs/virtual-library/abstracts/imprisonment-and-reoffending-crime-and-justice-review-research

As long as people in the US don't care that marginalized [poor] folks are being oppressed by these systems, we're probably never going to get folks to care about hypothetical Future People.

The discussion around this may obviously be different in, say, Norway.

Expand full comment

The main reason why I am not a utilitarian is that once you start to mix morality and math, you usually end up going off the rails. I think the main problem is the assumption that you can measure things like "utility" and "happiness" precisely, and get reasonable results by multiplying large rewards by small probabilities, or summing over vast numbers of hypothetical people. The error bars get too large, too quickly, for that sort of calculation to be viable.

That being said, if you are going to do math, do it properly. In reinforcement learning, if you attempt to sum over all future rewards from now until the end of time, you get an infinite number. The solution is to apply a time discount gamma, where 0.0 < gamma < 1.0, to any future rewards.

R(s_t) = r_t + gamma * E[ R(s_{t+1}) ]

Or in English, the total reward at time "t" is equal to the immediate benefit at time "t", plus the expected total reward at time "t+1", times gamma. Thus, any benefits that might occur at time t+10 will be discounted by gamma^10. This says that we should care about the future, but hypothetical future rewards are worth exponentially less than present rewards, depending on how far in the future you are looking. So long as future benefits don't grow exponentially faster than the decay rate of gamma, the math stays finite.

Note also that we are talking about future rewards "in expectation", which means dealing with uncertainty. Since the future is hard to predict, any future rewards are further discounted by the probability with which they might happen.

The argument over "short-term" vs "long-term" thinking is just an argument over what value to give gamma.
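As a sketch of the arithmetic above (gamma and the reward stream are made-up numbers, just to show the sum stays finite):

```python
def discounted_return(rewards, gamma=0.99):
    """r_0 + gamma*r_1 + gamma^2*r_2 + ... : future rewards decay exponentially."""
    total = 0.0
    for t, r in enumerate(rewards):
        total += (gamma ** t) * r
    return total

# A constant reward of 1 forever converges to 1 / (1 - gamma) = 100, not
# infinity; 10,000 steps already lands within floating-point distance of it.
approx = discounted_return([1.0] * 10_000, gamma=0.99)
assert abs(approx - 100.0) < 1e-6
```

The choice of gamma is exactly the short-term vs long-term dial: gamma near 1.0 weights the far future almost as heavily as the present, gamma near 0.0 ignores everything past tomorrow.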

Expand full comment

Can't we just agree that any analysis that relies on collapsing the complex entirety of human experience into a single number is not even wrong?

Expand full comment


Agreed. As evidenced by the later neglecting of exploring real conflict between longtermism and general utilitarian ethics.

>So it would appear we have moral obligations to people who have not yet been born, and to people in the far future who might be millennia away.

There is so much more work needed before this armchair jump to this statement than the thought experiment provides.

>Stalin said that one death was a tragedy but a million was a statistic, but he was joking.

Was he? I don't think that is clear from his behavior. I also don't think it is clear he was "wrong" about it. Ethics, some might argue (I would argue), is context/perspective dependent. What is the right action for Bob walking down the street is not necessarily the right "action" for the US government.


The whole coal conversation is silly. Industrialization is not remotely that path dependent. It might take quite a bit longer without coal, but there's no way that stops anything. Seems like a very bad misreading of history. Industrialization was incredibly rapid: the world saw more change in 50 years than it had in millennia. If that is instead 500 years because of no coal, what difference does it make? In fact the transition might be smoother and less fraught.

>If only dictators have AI, maybe they can use it to create perfect surveillance states that will never be overthrown.

What is so bad about dictators? Especially ones with AI? When talking about issues this large scale, the exact distribution of political power is the least of our problems.

>Octopus farming

I agree this sounds bad.

>Bertrand Russell was a witch.

Indeed, he is amazing.


And here would be the first of my two main complaints/responses. This "suppose" is doing a lot of the work here. In reality we discount ethical obligations with spatiotemporal distance from ourselves pretty heavily. One big reason for this is epistemological: it just generally isn't as possible to know and understand the outcomes of your actions when you get much beyond your own senses.

You see this with how difficult effective development aid is, and how bad people are at predicting when fusion will happen, and how their behavior impacts the climate, or the political system. All sorts of areas. Because of this epistemic poverty, we discount "potential people", quite heavily, and that makes perfect sense because we mostly aren't in a good position to know what is good for them especially as you get farther from today.

The longtermist tries to construct some ethical dilemma where they say "surely the child running down the path 10 days from now matters no more than the one running down it 10 years from now". And then once you grant that they jump to the seagulls. But the answer is to just impale yourself on that horn of the dilemma, embrace it.

No, the child 10 years from now is not as important. Someone else might clean up the glass, a flood might bury it, the trail might become disused. Et cetera, et cetera.

We don't have the same epistemic (and hence moral/ethical) standing towards the child 10 years from now, the situations ARE NOT the same.

The funny thing is overall I expect I am generally somewhat of a longtermist myself. I think one of the main focuses of humanity, should be trying to get itself as extinction proof as possible as soon as possible. Which means perhaps ratcheting down on the optimum economic growth/human flourishing slightly, and up on the interstellar colonization and self-sufficiency slider slightly.

But I certainly don't think we should do that on behalf of nebulous future people, but instead based on the inherent value of our thought/culture/civilization and accumulated knowledge. I don't remotely share the intuition that if I know someone is going to have a great life I owe it to them to make it possible.

>did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself?

Anyone who really believes this is far far down an ethical dark alley and needs to find their way back to sanity.

Expand full comment

I never understand why so many people care specifically about the survival of humanity. Isn't it enough that many different species survive? Anyway, our distant descendants won't be humans.

Expand full comment

I think you can head off the Repugnant Conclusion fairly easily by deciding that a larger population is not, in itself, a positive.

Expand full comment

All these thought experiments seem to contain the hidden assumption that the Copenhagen interpretation of quantum mechanics is the correct one. That we live in a universe with a single future. If instead the Many Worlds interpretation of quantum mechanics is true, you don't really have to worry about silly things like humanity going extinct - that would be practically impossible.

You also wouldn't have to stress over whether we should try to have a world with 50 billion happy people or a world with 500 billion slightly less happy people. Many worlds already guarantees countless future worlds with the whole range of population size and disposition. There will be far more distinct individuals across the branches of the wave function than could ever fit in the Virgo Super Cluster of a singular universe, and that's guaranteed no matter what we do today since there is always another branch of the wave function where we do something different.

If you believe the many worlds interpretation of quantum mechanics is true AND that quantum immortality follows from it... Well, that opens up all kinds of fun possibilities!

Expand full comment

I agree that, while I find long-termism very compelling when reasoning about it in the abstract, I must admit that my much stronger personal motivation for trying to get humanity safely through the most important century is my concern for my loved ones and myself, followed by my concern for my imagined future children, followed by my concern for all the strangers alive today or who will be alive at the time of possible extinction in 5-50 years. People who don't yet exist failing to ever exist matters, it just gets like 5% of my total, despite the numbers being huge. I dunno. I think maybe I have decreasing valuation of numbers of people. Like, it matters more to me that somebody is alive vs nobody, than lots of theoretical people vs a few theoretical people. Questions about theoretical moral value are complex, and I don't feel that this has answered them to my satisfaction. I'm not about to let that stop me from trying my hardest to keep humanity from going extinct though!

Expand full comment

>the joy that the child brings the parents more than compensates for the harm against abstract utility

On average, children decrease parental happiness, so this isn't particularly exculpatory.

Expand full comment
Aug 23, 2022·edited Aug 23, 2022

-> I realize this is “anti-intellectual” and “defeating the entire point of philosophy”

I think this kind of book is borderline pseudoscience. Philosophy discovers ideas, science discovers truth. And while MacAskill wants to compel you to believe something is true, in fact he is only doing philosophy.

The real idea of science is not "using our big brains to reason out the truth" or "being rational", it is, as Feynman once said, that the test of all knowledge is experiment.

We do not believe the odd things special relativity tells us about the time simply because there is a chain of logic and we believe anything logic tells us. We believe it because that chain of logic leads to testable, falsifiable conclusions that have been verified by experiment.

Mathematics alone is not science because there is nothing to test. Only when you try to apply it (in a field like physics) do you get testable conclusions. Logic does not derive truth; it simply tells us what conclusions are consistent with a given set of axioms. For example, hyperbolic geometry yields different conclusions than Euclidean geometry. Neither is "right" or "wrong" or "true" or "false": it doesn't even make sense to talk about something being "true" until you can test it against reality.

When MacAskill derives his Repugnant Conclusions and decides that they are True, what is the experiment by which we test that truth? I don't think there is one.

One can argue that we should still believe the conclusion because we believe the axioms, but what is the experiment that tested or derived our axioms? Our intuition? But if our intuition is axiomatic, a conclusion that disagrees with our intuition cannot be correct. The "proof" of such a conclusion may have demonstrated that our intuition is not logically consistent, but that does not help us decide what is true or which of the two intuitions (axiom or conclusion) we should discard.

To the extent that MacAskill's arguments are like mathematics, they are interesting and worth thinking about. But to the extent that they are not like science, we should treat the conclusions derived in the same way we would treat the conclusions of hyperbolic geometry. Not true or false, just interesting.

And I think MacAskill knows this is the case. After all, after a quick google it does not look like he's fathering as many children as he possibly can.

Expand full comment

To tackle the core example here: we don't owe anything to the future child. We owe things only to those that exist. And future children, like future starvation (Malthus) or future fusion (still waiting), aren't real until the moment they are born/discovered. Apologies if I missed it (although LRG's comment touches on it), but the doctrine of Presentism seems to be missing from all these discussions.

We are all engaging in a type of moral induction. But induction is a deeply flawed method of knowing the truth. Yes, it might be likely that humanity survives next year. But it might not. We can certainly make bets that certain actions taken now (which do affect presently existing moral agents) are worthwhile - not because we "owe" anything to future generations, but because we are betting on our continuance and are willing to spend some present value for possible future gain. All of that calculus is present. And to borrow from David Deutsch, our inductive reasoning about the future is, at heart, prophecy, not prediction.

Sure, there may be 500B humans one day. Or AI wipes us all out next Tuesday. The end. What do we owe to those 500B? Nothing. Clearly. Because they don't exist, and may never. So the real debate is about our inductive confidence. Should I be concerned about the child stepping on glass in 10,000 years? Our inductive reasoning falls utterly apart at that level. So no. Should I be concerned about something that's reasonably foreseeable in the near term? Yes. But it's frankly a bet that it will be beneficial to those who exist at that near future time. Not an obligation, but a moral insurance plan. And there's only so much insurance one should rationally carry for events that may never occur.

Expand full comment

>This isn’t about an 0.000001% chance of affecting 50 quadrillion people. It’s more like a 1% chance of affecting them.

Bullshit. In order to successfully affect 50 quadrillion people, it's not enough to do something that has some kind of effect on the distant future -- it would have to be some act that uniformly improves the lives of every single person on future Earth in a way that can be accurately predicted 500 million years before it happens. That's not just improbable -- that's insane.

Expand full comment

Fun review. Much of the logic alluded to seems mushy to me.

Example. Regarding having a child with or without a medical condition, these are two decisions conflated into one. "But we already agreed that having the child with the mild medical condition is morally neutral. So it seems that having the healthy child must be morally good, better than not having a child at all." Does not follow.

Another way to look at it is that once it is decided to have a child, and that this decision in and of itself may be morally neutral, then the next decision fork is whether it is known the child will have a morally unacceptable health disorder, a morally neutral health disorder or no health disorders whose morality remains to be determined. It is a fallacy that because decision B lying between two other decisions A and C along a spectrum of characteristic H is morally neutral, morality being characteristic M, any decisions on either side of the H spectrum must therefore correspond to the M spectrum. It is possible that bearing children with health conditions less "severe" for the sake of argument than male pattern baldness might be equally as morally neutral. The M spectrum may only go from bad to neutral in this case. There is no law that options must have a positive outcome option.

Then there is the mugger and the kittens. The decision-maker is loosely represented as the observer. Better outcomes for whom? With the mugger, it is better from the decision-making target's perspective to retain the wallet. From the mugger's perspective, it is better for the target to relinquish it. Regarding drowning kittens, that is undesirable from the kittens' perspective but logically, it is a neutral outcome for the drowner. Do not confuse this observation with sociopathy, please; it is an argument about the logic!

There is so much confusion of categories in these poorly defined arguments that I find them unpersuasive in general.

Expand full comment

I have spent too many hours thinking about questions like the repugnant conclusion, and whether it's better to maximize average or total happiness. I'm still hopelessly confused. It's easy to dismiss all this as pointless philosophizing, but I think if we ever get to a point where we can create large numbers of artificial sentient beings, these questions will have huge moral implications.

I suspect that one reason for present day confusions around the question is a lack of a mechanistic understanding of how sentience and qualia work, and so our frameworks for thinking about these questions could be off.

For example one assumption that seems to be baked in to these questions is that there is a discrete number of distinct humans/brains/entities that do the experiencing. You could imagine a world where the rate of information transfer between these entities is so much higher that they aren't really distinct from one another anymore. In that world differences in happiness between these entities might be kind of like differences in happiness of different brain regions.

I really hope we'll develop better frameworks for thinking about these questions, and I think that by creating and studying artificial sentient systems that can report on their experiences we should be able to do so.

Expand full comment

(First comment - nothing like a math error to motivate a forum post.)

Trying to follow the critique of the Repugnant Conclusion here:

> "World C (10 billion people with happiness 95). You will not be surprised to hear we can repeat the process to go to 20 billion people with happiness 90, 40 billion with 85, and so on, all the way until we reach (let’s say) a trillion people with happiness 0.01. Remember, on our scale, 0 was completely neutral, neither enjoying nor hating life, not caring whether they live or die. So we have gone from a world of 10 billion extremely happy people to a trillion near-suicidal people, and every step seems logically valid and morally correct."

10 billion people with happiness 95 = 950 billion utilons.

1 trillion people with happiness 0.01 = 10 billion utilons.

Shouldn't you need at least ~100 trillion people (not 1 trillion) with happiness 0.01 before the moral calculus would favor choosing the greater number of less-happy people?
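The commenter's arithmetic checks out; here is a quick sanity check (all populations and happiness values are from the quoted passage):

```python
# Total "utilons" = people * happiness, per the quoted passage.
world_a = 10e9 * 95      # 10 billion people at happiness 95
world_z = 1e12 * 0.01    # 1 trillion people at happiness 0.01

assert abs(world_a - 950e9) < 1.0   # 950 billion utilons
assert abs(world_z - 10e9) < 1.0    # only 10 billion utilons

# Population needed at happiness 0.01 to match world A:
breakeven = world_a / 0.01
assert abs(breakeven - 95e12) < 1e3  # ~95 trillion, i.e. roughly 100 trillion
```

So on a straight total-utility view, the trillion-person world is indeed far worse than world A; the mugging only forces you there once the population reaches roughly 100 trillion.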

Expand full comment

> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average.

I'll supply the obligatory mugging: https://forum.effectivealtruism.org/posts/DCZhan8phEMRHuewk/person-affecting-intuitions-can-often-be-money-pumped

Expand full comment

The Repugnant Conclusion: "Shut Up, Be Fruitful, and Multiply"

Expand full comment

"Nor was there much abolitionist thinking in the New World before 1700. As far as anyone can tell, the first abolitionist was Benjamin Lay (1682 - 1759), a hunchbacked Quaker dwarf who lived in a cave. He convinced some of his fellow Quakers, the Quakers convinced some other Americans and British, and the British convinced the world."

We should celebrate all of the work the Quakers did to eradicate most of the slavery in the world. But they were not the first abolitionists. The abolitionist movement of the High Middle Ages in Northwestern Europe successfully ended the Viking slave/thrall trade and laid the foundation for the Quakers to build on. There is less evidence for this time period, but we do have enough to get some idea of the movement.

The first evidence we have comes from the Council of Koblenz, in 922, in what is now Germany, who unanimously agreed that selling a Christian into slavery was equivalent to homicide. It doesn't look like this had any legal consequences.

In England, about 10% of the population was enslaved in 1086, when the census now known as the Domesday Book was conducted. Anselm of Canterbury (famous for the ontological argument) convened the Council of London in 1102, which outlawed "that nefarious business by which they were accustomed hitherto to sell men like brute animals". Slavery in England seems to have died out over the next several decades. Slavery was still prominent in Ireland, and Dublin had been the main hub for the Viking slave trade. This Irish slave trade was one of the reasons listed for the Anglo-Norman invasion of Ireland in 1169. Abolition was declared at the Council of Armagh in 1171.

In Norway, we don't know the exact date when slavery was outlawed. The legal code issued by Magnus IV in 1274 discusses former slaves, but not current slaves, which indicates that slavery had been outlawed within the previous generation. Slavery in Sweden was ended by Magnus IV in 1335.

Louis X declared that "France signifies freedom" in 1315, and that any slave who set foot on French soil was automatically freed. Philip V abolished serfdom as well in 1318.

You frequently hear the argument that serfdom was ended in Europe by the Black Death. The decreased population allowed peasants more bargaining power to demand their freedom. This really doesn't match up with the history in France (not an insignificant country): serfdom ended decades before the Black Death of 1347. The abolition occurred during a Great Famine, when overpopulation was a greater concern. I think that this is a point in favor of the more general argument that the abolition of slavery was contingent and the result of moral persuasion, not the inevitable result of economic forces.

In the Mediterranean, Christian states had laws against selling Christians into slavery, dating at least as far back as the Pactum Lotharii of 840 between Venice and the Carolingian Empire. Slaves captured from Muslim states or pagans from Eastern Europe were commonly used by Christians. Similarly, Muslim states in the Mediterranean banned enslaving Muslims, but frequently enslaved Christians and pagans. Since Muslims and Christians were continually raiding each other or engaged in larger wars, there was no shortage of slaves in the Mediterranean.

During the Early Modern Era, the religion-based criteria for slavery evolved into the more familiar race-based criteria for slavery. The countries of northwest Europe participated in the slave trade (a lot) and used slavery extensively in their colonies.

The Quaker-led abolitionist movement of the 1700s was able to build on the earlier abolitionist movement. In Somerset v Stewart in 1772, the slave Somerset who had been bought in the colonies and brought to England sued for his freedom. The judge found that there was no precedent for slavery in English common law, or in any act of parliament. This was the first major victory of the modern abolitionist movement, and it relied on the tradition created by the medieval abolitionists.

Expand full comment

"For example, it would be much easier to reinvent agriculture the second time around, because we would have seeds of our current highly-optimized crops"

This may not be true. High yield wheat requires the application of synthetic hormones during its growth cycle in order to deactivate the genes that make the stem grow, otherwise the stem gets too long and bendy to support the head, and the plant falls over and rots.

Modern agriculture is highly technical and requires significant industrial infrastructure to support it.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I may be misunderstanding you, but I doubt that your proposed view on population ethics does what you want. (Sorry if this was already discussed.) You say:

> Just don’t create new people! I agree it’s slightly awkward to have to say creating new happy people isn’t morally praiseworthy, but it’s only a minor deviation from my intuitions, and accepting any of these muggings is much worse.

> If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average. [...] This series of commitments feels basically right to me and I think it prevents muggings.

I'm not sure whether you mean to suggest that creating new (happy) people adds zero value, or that it does add some positive value provided the new people increase average happiness.

In either case, the resulting view does prevent the kind of mugging you get based on the Repugnant Conclusions. But many other muggings remain. For instance:

*If adding new people is at best neutral:*

This commits you to the view that any population that faces even a tiny risk of creating net unhappy people (or worse, future generations with slightly below-average welfare!) in the future should pursue voluntary extinction because there's an expected harm that cannot be offset by anything of positive value. Imagine an amazing utopia in which, for some reason, the only options available to its inhabitants are voluntary extinction or a very long future utopia for its descendants that is still great (imagine lives *much* better than any life today) but slightly less awesome than the counterfactual average. Your proposed view implies that it'd be better if this utopian world were cut short, which seems absurd.

If the amount of expected future suffering (or reduction in average wellbeing) is small enough, you may be able to get around this by appealing to contingent claims like "but procreating might make people happy, and maybe this increase in happiness in practice would outweigh the expected reduction in average wellbeing that is at stake". But this response neither works to defeat the previous thought experiment, nor does it (arguably) work in the actual world. In the thought experiment, we can simply stipulate that refusing voluntary extinction does not increase the happiness of the current generation. Put differently, this response can't buy you more than being able to say "future generations by themselves can only make the world worse, but bringing them into existence can sometimes be justified by the current generation's self-interest". My guess is that this is not really the intuition you started out with, and in any case it becomes increasingly tenuous once we consider more extreme scenarios. Imagine a hypothetical Adam and Eve with a crystal ball who experience extreme levels of bliss. They know that if they procreate, a utopian future lasting 3^^^3 years will ensue, in which at each time 3^^^3 people will experience the same levels of bliss minus one barely perceptible pinprick. Suppose Adam and Eve were interested in what they ought to do, morally speaking (rather than just in what's best for them), and they turn to you for advice. Would you really tell them "easy! the key question for you to answer is whether the reduction in future average wellbeing represented by these pinpricks is outweighed by a potential happiness boost you'd get from having sex."?

OK, now maybe you somehow generally object to reasoning involving contrived thought experiments. But arguably we get the implication favoring voluntary extinction in the real world as well. After all, there is a non-tiny risk of creating astronomical amounts of future suffering, e.g. if the transition to a world with transformative AI goes wrong and results in lots of suffering machine workers or whatever. (For those who are skeptical about weird AI futures, consider that even a 'business as usual' world involves some people who experience more suffering than happiness. We don't even need to get into things happening to nonhuman animals ...) This is a significant amount of expected disvalue that, from an impartial perspective, is not plausibly outweighed by the interests of current people to have children. You can of course still maintain that "I don’t want x-risks to kill me and everyone I know", but this statement then has morphed from "look, honestly my main motivation to prevent this horrible thing from happening is that I don't want my near and dear to die" to "I don't want my near and dear to die even at a huge cost to the world". This seems hard to square with generally cheering for EA, and seems viable only as an admission of prioritizing one's self-interest over moral considerations (and, to be clear, may be relatable and even exculpable as such) but hardly as an articulation of a moral view.

Does it help if you modify the view to saying it's only bad to add people whose wellbeing is below some fixed & noncontingent "critical threshold" (rather than average wellbeing)? Or only if their wellbeing would be below zero? No. Some of the above arguments still apply, and in others you still get nearly as counterintuitive results by replacing future populations that would slightly reduce the wellbeing average with (a risk of) future populations with slightly below-zero lives. Any view that entails that future generations can make the world only worse, but never better, will have extremely counterintuitive implications of the kind discussed above.

*If adding new people _is_ positive when they increase average wellbeing:* OK, now you're in the game of at least sometimes being able to say "yes, the future is worth saving, and not just for selfish reasons". But now your view is arguably just a worse version of the critical level view, which, as you describe, Will does discuss in the book. For instance, your view implies that it's sometimes better to create an enormous amount of people experiencing extremely severe torture than to create an even larger number of amazingly happy people who unfortunately just so happen to be slightly less happy than the previous even more unfathomably high average. Faced with such implications, I think you should just throw out the role that average wellbeing plays in your view, and adopt a critical level view instead. You still have to deal with the same sort of case, with the previous wellbeing average replaced by the critical level (now we are just dealing with the 'Sadistic Conclusion' discussed in the book), and I would still consider it a fatal objection to your theory but it is at least slightly less absurd than the average variant.



What’s the underlying argument re: why a hypothetical reasonable man should logically care about anyone aside from (i) himself and, possibly, (ii) his own descendants?

That seems to be taken for granted, but not postulating it could lead one to prefer a catastrophe that wipes out 99% of humanity to a car wreck that wipes out him and his children.

Aug 24, 2022·edited Aug 24, 2022

“I realize this is ‘anti-intellectual’ and ‘defeating the entire point of philosophy’.” This is why Robert Nozick said that the perfect argument is one where, if you admit the truth of the premises and agree that they lead to the conclusion, but still deny the conclusion, then you die. Otherwise people are free to thumb their noses at reason and walk away. Nozick lamented the lack of perfect arguments.

Aug 24, 2022·edited Aug 24, 2022

I always feel like these kinds of moral philosophy arguments that arrive at weird conclusions are some kind of category error. The Repugnant Conclusion to me feels like pointing out that if you divide by zero you can make math do anything. The correct response is to point out that "dividing by zero" means nothing in a real world context (I can't divide my three apples among zero friends and end up with infinite apples), and therefore the funky math results from it are meaningless.

In the same way, trying to redistribute happiness across a population isn't actually a thing you can do. I can't give you some of my happiness and take some of your sadness. Since you can't actually do the things the thought experiment proposes in the real world, it has no applicability to the real world.


It's always interesting how our moral intuitions differ. The first time I heard about the repugnant conclusion, I did not understand why people found it repugnant. It matched my intuition perfectly, even if step B was missing. I've read enough arguments from people who find it repugnant to understand their viewpoint, but both intuition and logical argument make me think it's not repugnant.


Recently, in comments on EA, you said "Although I am not a perfect doctrinaire utilitarian, I'm pretty close and I feel like I have reached a point where I'm no longer interested in discussion about how even the most basic intuitions of utilitarianism are completely wrong"


"sorry if I'm addressing a straw man, but if you asked me "would you rather cure poverty for one million people, or for one million and one people", I am going to say the one million and one people, and I feel like this is true even as numbers get very very high. Although satisficing consequentialism is a useful hack for avoiding some infinity paradoxes, it doesn't really fit how I actually think about ethics"

Here, you say,

"Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice."

All I can say is, it's nice to have you on team Satisficing Consequentialism! At least, however much you're over here. I feel like I should thank William MacAskill.


A point about the repugnant conclusion: the transition from step B to step C, where we take a heterogeneous population with an average happiness level of 90%, and convert it to a homogeneous population with an average happiness level of 95% - this step is the crux of my problem with it as a thought experiment. This step strikes me as intuitively impossible. Any such step will be governed by some kind of relevant analogy to the second law of thermodynamics - whatever actual process comprises the step from B to C cannot end with a higher average happiness than it started with, unless you add happiness to the system somehow.

But if you have a happiness making machine, then the repugnant conclusion is moot. It becomes merely a question of how best to distribute the happiness machine's output, which makes it just a garden variety utilitarian quandary.



"cosigned by twenty-nine philosophers"

I was taught that this was an appeal you were supposed to reject in this very space. Of all the fucking places to make a slick appeal to authority, philosophy? Fuck. That.


> If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity.

This reminds me of the distinction between parametric and non-parametric approaches towards statistical analysis.

Within parametric statistics you specify a function at the start, and then go on and do lots of interesting statistics conditional on your functional choices. The analysis can become incredibly complex, and people will derive bounds for their errors, uncertainty, and lots of other theoretical properties of their model -- but it all comes back to that initial choice to pick some mathematical function to describe their world.

And that's what I think these philosophers are doing -- although perhaps more implicitly. A lot of their word games are isomorphic to some linear utility with infinite discounting. Okay sure. But why is it a linear function? Why not piece-wise? Why doesn't it decay faster? If they set up some linear utility structure, then extrapolate towards situations we've never seen, you can get arguments for "it's better for 20 billion slightly miserable people, than 5 billion happy people"

The non-parametric approach towards statistics sacrifices the power of defining a function and examining how it works across all cases, often through extrapolation, and instead lets the data speak for itself. It's the more structurally empirical (or Humean) way of approaching this sort of thing. It's what you're doing when you say "just walk out." We've never seen what it looks like to actually, empirically, make a choice to discount infinitely and follow it through, nor have we compared it to the counter-factual (to the extent that this is even a realistic thing to measure). What we have seen is on earth, where you could sort of squint and try to compare Switzerland to certain impoverished regions in the global south, and can use that as a proxy for fewer happy people, vs. more unhappy people. Reasonable people can make that comparison and come to different conclusions, but I find the messy empirical proxy is a lot more informative than this made-up infinite extrapolation.

One thing we can also observe is in our actual reality, there aren't 1.2 billion people with happiness=0.01 in Africa, and 8.6 million people with happiness=100 in Switzerland. Both have pretty dramatic distributions. In these contrived examples, we ignore the fact that there will inevitably be uncertainty and distributions of outcomes. The implicit functions that are proposed don't meaningfully contend with the messiness (uncertainty) of our actual reality, when extrapolating to the far future.

In this sense taking a non-parametric approach is sacrificing the cleanliness and purity of thought that these philosophers take, and instead taking a probabilistic, empirically founded approach, that is much worse at extrapolation or making crisp statements, but far better at integrating itself with the empirical reality we observe.


One important thing I never see mentioned in discussions of utilitarianism and its assorted dilemmas: time. The utility function evaluated at a specific point in time, which most people seem to be talking about, is actually irrelevant. What we actually should care about is its integral over some period of time we care about.

If I'm egoistical, I'll just care for my total personal well-being, over my whole lifespan. Let's say that without life extension, this is just a finite timespan and rather easy to reason about.

But if I'm altruistic and value humanity, I'll care for the total well-being of all people over... all of the future time. Which is an infinite timespan! And so it does not actually matter if there are 5 or 50 billion happy people right now, because the integral of the utility function will be infinite anyway. Except for:

1) If humans become extinct, our integral becomes finite. This leads to the conclusion that x-risk is an extremely important problem to tackle.

2) If most people's lives are net negative, the integral will be negative infinity. Which is literally, hell. So, we should take actions to avoid hellish scenarios, which is basically ensuring people have a noticeably positive quality of life on average.

3) The heat death of the universe will come some time, and so these infinities are not actually infinite. Which suggests we should maximise our entropy usage by being energy-efficient, and that probably means building Dyson spheres around all stars quickly, etc. But! This is much less of a priority than 1 and 2, we should not rush this at all.
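A crude numerical sketch of this "integrate wellbeing over time" framing (my own illustration; the `total_value` function and all numbers are made up for the example, not claims about the real world):

```python
# Discretized version of "integrate total wellbeing over time": value is
# aggregate wellbeing per year, summed until extinction (or heat death).

def total_value(population, avg_wellbeing, years_until_end):
    """Total value = population * average wellbeing * length of the horizon."""
    return population * avg_wellbeing * years_until_end

# Same population and wellbeing, but extinction after 200 years versus
# survival for a billion years: the horizon length dwarfs everything else,
# which is why x-risk dominates in this framing.
short_run = total_value(5e9, 0.9, 200)
long_run = total_value(5e9, 0.9, 1e9)
print(long_run / short_run)  # 5,000,000.0

# And if average wellbeing is negative, a longer horizon only makes the
# integral more negative -- the "hell" scenario in point 2.
print(total_value(5e9, -0.1, 1e9) < 0)  # True
```
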

I don't see any dilemmas with this approach. And it seems to match common sense perfectly. Can anyone find flaws in reasoning here?


I think the Repugnant Conclusion is wrong, I'll explain why. If you think there is a flaw in the reasoning below please get in touch.

The moral question 'what is good/bad' requires a subject/subjects capable of experiencing goodness/badness. The existence of the subject must precede the question about what is good/bad for it.

So if you are asked 'which is better, a world with Population A or a World with Population B' you should ask 'for who?' The question doesn't make sense otherwise. In a world without experiencing beings, there is no meaning to the concepts of 'good/bad'.

Eg, a world with 1000 happy Population As, is better - for Population A. A world with 1000 happy Population Bs is better - for population B.

This makes the question of who does/doesn't exist morally important. If Population A exist, and Population B doesn't, we should choose the Population A world.

We shouldn't care about beings that don't exist. Because - they don't exist. They are literally nothing. We should only care morally about things that exist. This means we are morally free to choose not to bring beings into the world if we so wish. Eg, don't have the baby/create the orgasmic AI, no problem.

Important note - I'm not saying that only things that exist now are important and that future beings have no moral importance. No.

If we care about existence (which we do), that means we also care about future existence. Future beings that actually will exist are of moral importance. For example, if you do decide to have a baby and you can choose between Baby A, happy baby, and Baby B, sad baby, you should choose Baby A, happy baby.

This is because, from the perspective of a budding creator, neither Baby A or Baby B exist (yet). So we don't need to prioritise the interests of either A or B. But the creator knows a baby will exist. And happy babies are good. So they are under obligation to choose Baby A. This is better, from the perspective of the baby who will exist.

I think this idea of thinking about 'future beings who will exist' in potentia in order to prioritise between them is the problematic bit. But I think it is less problematic than the reasoning used in the Repugnant Conclusion which says 'we should care morally about all beings equally regardless of whether they exist or not.'

I believe my reasoning here is similar/adjacent to something called the 'Person Affecting View' in population ethics, though I'm not 100% sure.

I think one reason why this flaw in the Repugnant Conclusion often goes unnoticed is because we forget that ethical reasoning usually just takes for granted the existence of conscious subjects. So we get confused by an ethical argument that doesn't take that for granted but neglects to mention it.

If this logic is right EA should take note, because currently most EAs seem to accept the Repugnant Conclusion and that's bad PR, because most people really don't like it (as is evident from most of the comments here). And also it's wrong, maybe


The suit, the muddy pond, and the drowning kid: you save the kid you see in front of you because you know you can. "Saving" an unknown kid on another continent is a much iffier thing. You sell your suit and give the money to some people who *assure* you it'll save a kid. Mmhmm. Only people stupid enough to call themselves effective altruists would give that person money.

Aug 24, 2022·edited Aug 24, 2022

Reading “What We Owe The Future” long-termist book, the population ethics section.

There is a series of premises that are assumed.

- That preferences are or should be transitive.

- That a person’s life’s wellbeing is something you could measure on a unidimensional scale.

- That if you wanted to measure the value of a society, the most sensible way to do it would be to look at each individual’s wellbeing measurement and add them up or average them.

To the transitivity point, clearly in real life even simple preferences might not be transitive. This is because people could have at least 2 qualities/considerations/dimensions that they rate decisions on, and apply different qualities when making the decision between A and B, vs. B and C. Thus, you can’t conclude that if A>B, and B>C, that A must > C. Here is an example:

Decision | Quality 1 | Quality 2
---------|-----------|----------
   A     |    50     |    100
   B     |   N/A     |     50
   C     |   100     |     25

If your decision rule is:

1. Choose based on quality 1, if applicable

2. Then, choose based on quality 2.

Then, your preferences would be intransitive and perfectly logical at the same time (that is, you would prefer A>B, B>C and C>A).
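The two-step rule can be spelled out in a few lines of code (a sketch of the example above, using the same numbers; the `prefer` helper is my own naming):

```python
# (quality 1, quality 2) for each option; None = quality 1 not applicable
options = {"A": (50, 100), "B": (None, 50), "C": (100, 25)}

def prefer(x, y):
    """Two-step rule: compare on quality 1 when both options have it,
    otherwise fall back to quality 2."""
    q1x, q1y = options[x][0], options[y][0]
    if q1x is not None and q1y is not None:
        return x if q1x > q1y else y
    q2x, q2y = options[x][1], options[y][1]
    return x if q2x > q2y else y

# A beats B and B beats C (both on quality 2, since B lacks quality 1),
# yet C beats A (on quality 1): a perfectly rule-following cycle.
print(prefer("A", "B"), prefer("B", "C"), prefer("A", "C"))  # A B C
```
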

I think things in real life are certainly more complex than this simple example, in that I think people do care about 2+ qualities at the very least when it comes to making most (substantive) decisions.

I think the fact that people have to do one thing in a given moment provides the illusion that we must be computing an overall score of the Benefit or Utility of our actions, and then picking the one with the highest utility, but other alternatives are equally or more plausible:

Alternative 1: We have complex decision rules which take into account different properties in different contexts.

Alternative 2: We have a kind of probabilistic wave function-like set of preferences, and in the moment we have to make a decision, we have to collapse the wave function so to speak, by picking one thing. Then, after the fact, if someone asks you why you did what you did, you come up with a post hoc rationalization about why that was the overall best decision (if you do feel that that was the case).

Either way, I think people’s preferences are much more complex than just measuring the Value or Utility or Benefit of things and picking the best one (even the acknowledgement that this measurement of Utility is error-prone and uncertain is not enough. I think the reality has more complexity than this).


What about this measurement of wellbeing, at the individual level?

They talk about these formulations:

1. Preference Satisfaction

2. Hedonism

3. Objective List

I think my problem with all of these is that they are at the individual level. When I think about “rating” a person’s life, and what could possibly be relevant, I would think about things like:

- the narrative trajectory, cohesion, etc. of their life. “Is it a good story?”

- the network of relations that they have with people closer and more distant to them, and with their human built environment and natural environment, etc.

- their character, both from a moral standpoint and just a value-neutral personality standpoint.

among other things

If you choose to measure something you will get a measurement of it. So, of course you can ask someone a single question, “Please rate the quality of your life so far (0-10):” etc. But just because you can ask the question and get an answer doesn’t mean that this concept reflects what we think is important about the topic.

To use psychometric terminology, just because we can ask the question doesn’t mean it’s valid or reliable. Perhaps not valid: at face validity level, concurrent validity, predictive validity etc. Perhaps not reliable: if you were to have an external rater, or the same person rate multiple times, etc., how reliable would it be?

(So what do I think would make more sense than this? To ask people more questions and rate more qualities, e.g. the quality of their work environment, their home environment, relationships with their families, relationships with friends, hobbies, beliefs about society, beliefs about the future etc.)


What about the concept of rating societies against one another? And using the sum or average of wellbeing of the individuals making it up to evaluate them?

Of course, I think a single number would be way too simple to evaluate the quality of an individual life (as above). But even if you could do this, I don’t think taking the sum or average would be a good measure of the society. Why is that?

I think when you are measuring the quality of a society, it would make sense to look at characteristics that are at the same level of analysis as a society.

By analogy, if you wanted to evaluate the quality of a chair, you wouldn’t first measure the quality of the atoms that make up the chair, and then average the quality of the atoms in order to get the quality of the chair. A chair has many emergent properties, and those emergent properties are what make it a chair, and also make it a “good” chair. Qualities such as: it being arranged in a way that a human might be able to sit in it; the comfort of sitting in it; the typicality of its appearance; the stylishness of its appearance; its ability to be reused, durability etc.; the quality of the materials; the environmental impact of its construction; the treatment of the labour used in making the chair, etc. etc. None of these qualities or any of the other qualities you might care about are contained in the individual atoms that make up the chair, even though the chair is nothing except atoms.

In the same way, a society has many emergent properties (obviously, it has emergent qualities much more complicated than those of a chair). Those emergent properties are what make it a society rather than a Matrix-like collection of atomic individuals who have no effect on one another. The emergent qualities of societies are exactly what we would care about if we were trying to measure the qualities of a society--the qualities that come out of the complexities of the actual realities of people’s lives in interaction with one another, with their human and natural environments, etc.


So when I put that all together, what do I think about their population ethics arguments:

- I don’t think measuring wellbeing on an individual level on a unidimensional scale makes sense (both because of the “individual level” and “unidimensional” parts).

- Even if I conceded that you could do something like that (measuring individual wellbeing on a unidimensional scale), I don’t think evaluating society on the basis of the sum or average of this measurement would make sense (because of emergent properties being exactly the ones we care about).

- Even if I conceded that you could make some sort of Society Scores reflecting the overall qualities of a Society on the multiple dimensions that matter (which I suppose could be possible, but you wouldn’t do it by averaging or summing individual wellbeing), I don’t think preferences in general (even very small preferences, but especially large preferences like those of an entire society) are necessarily transitive. That is, I think a person can be perfectly logical in preferring A to B, B to C and C to A, as long as they have a slightly more complex decision rule than “rate based on a single quality then pick the one with the most of that quality”. I think it is quite realistic that people would have more complex decision rules when it comes to their preferences, when I think about how complex people are.


"When slavery was abolished, the shelf price of sugar increased by about 50 percent, costing the British public £21 million over seven years - about 5% of British expenditure at the time." This sounds off to me: for a 50% increase in the price of a product to cause a 5% increase in total expenditure, the initial expenditure on that good would have to be around 10% of total expenditure. And did anyone really spend 10% of their total consumption on sugar? That is one hell of a sweet tooth....


>(in case you think this is irrelevant to the real world, I sometimes think about this during debates about immigration. Economists make a strong argument that if you let more people into the country, it will make them better off at no cost to you. But once the people are in the country, you have to change the national culture away from your culture/preferences towards their culture/preferences, or else you are an evil racist.)

We have a real-world, large scale example of this in post-apartheid South Africa. After its transition from white minority rule to democracy, the country arguably got worse for that white minority as resources were redistributed. But it certainly got a lot better for everyone else. And yeah, you were probably an evil racist if you opposed that.

In The Four Loves, C.S. Lewis writes on patriotism,

"With this love for the place there goes a love for the way of life; for beer and tea and open fires, trains with compartments in them and an unarmed police force and all the rest of it; for the local dialect and (a shade less) for our native language. As Chesterton says, a man's reasons for not wanting his country to be ruled by foreigners are very like his reasons for not wanting his house to be burned down; because he "could not even begin" to enumerate all the things he would miss."

So I've never understood nationalism, but the above quote is perhaps the best steelman of it that I've seen. It does make me wonder if my experience growing up in vividly multicultural post-Apartheid South Africa might be part of the reason for that; I've never had a monocultural community with which to identify.

Aug 24, 2022·edited Aug 24, 2022

My take on the repugnant conclusion (also certain trolley problem variants) is that for purposes of utility calculations, human lives are not commensurable.

So world B is indeed strictly better than world A, because we're comparing the new, happy humans to a nonexistence thereof. However, world B and world C cannot (necessarily) be meaningfully compared.

In mathematical terms, there isn't a well-defined order relation on the set of these possible worlds; at most there exists a partial order relation. So you can have some worlds that are obviously better than some other worlds (B > A), but with no guarantee that a given two worlds are commensurable (B ? C).


Closely-related spoilers for the ending of UNSONG:

God is a utilitarian who accepts the Repugnant Conclusion, so he creates all possible universes where (the sum of all good in the universe) > (the sum of all evil in the universe). Unfortunately for most people in the setting of UNSONG, they live in one of those universes where good and evil are just about balanced. Also, in UNSONG, multiverse theory and the Repugnant Conclusion of utilitarianism are the solution to the Problem of Evil.


Possible solutions to the Repugnant Conclusion:

If we are maximizing utility, then going from World A to World C is not as good as going from World A to a version of World C where all people have as much individual utility as those in World A. This helps, but it still ranks the regular World C above World A.

To make sure World C does not outrank World A, we would have to say that utility isn't additive. Perhaps true utility is the utility of the person with the lowest utility (ie, like in The Ones Who Walk Away From Omelas, or real-world concerns about income inequality). Or, perhaps utility is additive in small cases (ie, whether to add another child), but not in large cases (in the same way that, under special relativity, velocity is approximately additive at low speeds, but not at all additive when close to the speed of light).
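The relativity analogy can be made concrete: velocity addition, v = (u + w)/(1 + uw/c²), is nearly additive for small values but can never exceed c. Applied to utility with a hypothetical cap (my own sketch, not a worked-out proposal):

```python
CAP = 100.0  # hypothetical upper bound on aggregate utility

def add_utility(u, v, c=CAP):
    """Relativistic-style addition: approximately u + v when both inputs
    are small relative to the cap, but the result never exceeds c."""
    return (u + v) / (1 + u * v / c**2)

print(add_utility(0.001, 0.001))  # ~0.002: effectively additive when small
print(add_utility(99.0, 99.0))    # ~99.995: saturates near the cap, not 198
```

Under such an aggregator, small cases (one more child) behave just like ordinary additive utility, while piling up ever more lives runs into diminishing aggregate returns near the cap.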


I always wonder if someone has thought that perhaps the "time value of money" concept (https://en.wikipedia.org/wiki/Time_value_of_money) could also apply to future lives? At some point, an almost infinite number of lives (but not quite infinite) could still be worth near zero if it's almost infinitely far into the future.
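For what it's worth, the analogy is exact under exponential discounting (a sketch with a made-up 2% annual rate; the function name is my own):

```python
def present_value(future_lives, years, annual_rate=0.02):
    """Time-value-of-money formula applied to lives: PV = FV / (1 + r)^t."""
    return future_lives / (1 + annual_rate) ** years

# Even 10^30 lives, ten thousand years out, discount to effectively zero:
print(present_value(1e30, 10_000))
```

This is exactly the kind of pure time discounting that longtermists like MacAskill reject, which is why the disagreement here is about the discount rate itself, not the arithmetic.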

I must not be "sophisticated" enough to deeply care about the EA bandwagon, but if I were to plant a tree, I don't worry (like not worry AT ALL!) whether that tree will fall on someone 50 years from now when the wood starts to get rotten.


Didn't expect this post to be mentioned by the Economist. Does this kind of thing happen often here?

"One crucial moment it charts is the shift in the movement’s centre of gravity from Mr Yudkowsky to Scott Alexander, who currently blogs under the name Astral Codex Ten."

"Also try:

For a whimsical but critical review of Mr MacAskill’s “What We Owe the Future”, see a recent blog post by Mr Alexander at Astral Codex Ten."



I have two really big difficulties with the population / RC analysis here. 1) Why are we assuming that more people makes life worse for everyone? Maybe in some contrived scenario, but not in the world we live in. In our world, as population has increased rapidly over the last 200 years, life has gotten rapidly better for just about everyone. Why do we not assume that more children means more people that are also happier? In that case the moral logic is easy - have more children. My wife and I practice what we preach, as we have 8 children. We are more fulfilled for it, and the total utility of the world is increased.

2) The idea that very poor people would be better off never having existed is appalling and doesn’t match my experience. Do you really believe this? I have spent years among very poor people in slums in a poorer country, and years managing allocation of charitable contributions to needy people in my community in the US. While finances may be tight, and assistance can help, these are humans very much as capable of both joy and despair as anyone living in luxury. I don’t know how you can suggest that they or the world as a whole would somehow be better off without their existence.


Donating money to a longtermist X-risk Charity sounds like the best way to make sure that the donation never actually helps any actual person.


Does Arrow's Impossibility Theorem come up in the context of the Repugnant Conclusion?

The Repugnant Conclusion argument sneaks in a dictator (the philosopher) that insists on completeness (we can always compare two worlds) and transitivity (if world C is better than B, which is better than A, then C is better than A) while disregarding the preferences of even a vast majority of the population.

Arrow showed that it's not possible to have completeness and transitivity in social choice without sacrificing a set of reasonable criteria including non-dictatorship and Pareto efficiency (if World A is better than B for everybody, then A is better than B).

I'm not familiar with population ethics – is there an axiomatic approach?


The part about the billions of people at 0.1 looks to me like a philosophical sleight-of-hand swindle. The issue is the aggregation function — the function that takes a lot of individual happinesses and computes a single value for comparison — and the assumption that happiness and suffering are linear and symmetrical.

But this assumption immediately leads to the conclusion that one person suffering hell is well worth a little more happiness for a lot of people, were we to find an evil god offering the deal. Which we usually do not consider valid.

It is hard to find a satisfactory aggregation function, but my guess is that for most people, it is closer to an infimum than a total or an average. And with that corrected aggregation function, the whole reasoning of part IV collapses.
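The disagreement between aggregation functions is easy to exhibit (my own illustrative numbers, chosen so the totals mirror the review's World A versus World C):

```python
world_A = [90.0] * 5       # a few very happy people
world_C = [0.1] * 5000     # vastly more people, each barely above zero

def total(w):
    return sum(w)

def average(w):
    return sum(w) / len(w)

def infimum(w):
    return min(w)  # value of the world = its worst-off member

print(total(world_A) < total(world_C))      # True: the sum prefers World C
print(average(world_A) > average(world_C))  # True: the average prefers World A
print(infimum(world_A) > infimum(world_C))  # True: the infimum prefers World A
```

With an infimum-style aggregator the Repugnant Conclusion never gets going, though it has well-known problems of its own (it ignores everyone above the minimum).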


I have this provocative take: the Nazis may have saved us from the worst of climate change. How? Without World War II, no Manhattan Project; nuclear power would have been developed later, and coal and oil would have taken a greater part.

Does it mean a rational person would have helped Hitler get into power? No! Because it is just a hypothetical. It could have gone the opposite way: maybe without the Manhattan project nuclear power would be less associated with bombs in our collective subconscious and we would have fewer naturolaters demanding we ban it.

My point is that counterfactual history is useless for ethical reasoning, fun as it is in fiction. History is chaotic. Little changes in facts can and will lead to completely different events. Sometimes, just a few centimeters can make a difference, but nobody can say if it would have led to Mars or to “Kennedy Camps”. Also, gametogenesis and fecundation are extremely sensitive to initial conditions.

In short any action today can lead to a better future soon but then to a much worse future later, or the opposite.

This is, I think, the flaw of longtermism and this ethical discussion: when weighing our actions now, their consequences have to be weighed by our ability to predict them, and that converges to 0 very very fast.


> so your child doesn’t develop a medical, most people would correct the deficiency



I may be being incredibly naive here, but isn't it likely that after a (near-)apocalypse that happens in the future, the survivors would be able to make use of physical and/or intellectual remains to bypass coal-powered technology entirely and rebuild using only renewable energy technology? The hard part is discovering and designing alternative ways of harnessing energy, and once that's done there's no need to relive the past in some kind of steampunk era, is there?

Aug 27, 2022·edited Aug 27, 2022

Counterfactual mugging, and this whole circus, depend in part on assuming metrics for things that are immeasurable, such as your degree of happiness, and even if we do the thought experiment of supposing a measure for an individual exists, are incommensurable, such as your degree of happiness versus my degree of happiness--and worse again, applying simple real-number arithmetic to such measurements: not only metrics, but the most bone-headed ones; and in part on removing all context from every situation: manufacturing universals.

Drowning kittens is better (for you, for animals in your neighborhood, etc.) than petting them, _if they are rabid._ Etc. Universals universally do not exist. "Better or worse for whom? When? Materially?"

The fact that in his concrete examples (as reported), MacAskill elides important details and qualifications, or outright gets details wrong (the Industrial Revolution happened *before* steam became relevant, for instance: see Davis Kedrosky), gives me a prior that he is not worth listening to on the philosophy.

Sloppy is sloppy. Universally.

Expand full comment

When Longtermism comes up, very very often the criticism I hear is that the future people you are supposedly trying to help might not exist at all, and I think this point gets conflated with the idea of intrinsic moral discounting of the future. (I mean that it gets conflated by the public, not by Will and other EAs who understand the distinction already.)

Like I will say, "I think my great grandchildren have as much moral worth as I do, even though they don't currently exist", and the critic will respond "but they might not exist at all, even in the future, you need to discount their moral worth by the probability that they will". But what I meant in the first place is "presuming they exist, I think my great grandchildren...", i.e. there is no *additional* discounting coming from the fact that they live in another century and not Now.

Maybe this distinction just seems too simple to be worth marking for people already "in the know", but I think it leads to a lot of confusion about the basic premises of Longtermism when EAs communicate to the public.

Expand full comment

The book was reviewed in The Wall Street Journal today (27 Aug. 2022)

"‘What We Owe the Future’ Review: A Technocrat’s Tomorrow The gospel of ‘Effective Altruism’ is shaping decisions in Silicon Valley and beyond" By Barton Swaim on Aug. 26, 2022.


The reviewer did not like the book. Quotations from the review, which like all WSJ.com content is paywalled:

* * *

Skeptical readers, of whom I confess I am one, will find it mildly amusing that a 35-year-old lifelong campus-dweller believes he possesses sufficient knowledge and wisdom to pronounce on the continuance and advancement of Homo sapiens into the next million years. But Mr. MacAskill’s aim, he writes, “is to stimulate further work in this area, not to be definitive in any conclusions about what we should do.” I’m not sure what sort of work his arguments will stimulate, but I can say with a high degree of confidence that “What We Owe the Future” is a preposterous book.

* * *

But it’s Mr. MacAskill’s arguments themselves that dumbfound. Most prominent among their flaws is a consistent failure to anticipate obvious objections to his claims. One of the great threats to civilizational flourishing, in his view, is, of course, climate change. He is able to dismiss all objections to zero-emissions policies by ignoring questions of costs. Questions like this: Will vitiating the economies of Western nations in order to avoid consequences about which we can only speculate hinder our ability to find new ways to mitigate those consequences? And will the resultant economic decline also create social and economic pathologies we haven’t anticipated? Mr. MacAskill, who specializes in asking difficult, often unanswerable, questions about the future, shows little curiosity about the plain ones. One outcome he does foresee, meanwhile, is that China will abide by its pledge to reach zero carbon emissions by 2060—a judgment that, let’s say, doesn't enhance one’s confidence in Mr. MacAskill’s prophetic abilities.

* * *

Books like this very often mask the impracticality of their arguments by assigning agency to a disembodied “we.” Mr. MacAskill does this on nearly every page—and, come to think of it, on the title page: “What We Owe the Future.” “We” should increase technological progress by doing this. “We” can mitigate the risk of disease outbreak by doing that. Often “we” refers to the government, although it’s unclear if he means regulatory agencies or lawmaking bodies. At other times “we” seems to mean educated elites or humanity in general. This gives the book the feel of a late-night dorm-room bull session of an erudite sort. Fun for the participants, perhaps, but useless.

* * *

Mr. MacAskill warns that once the development of artificial intelligence achieves the state known as artificial general intelligence, or AGI—that is, a state in which machines can perform the tasks that humans assign them at least as well as any human—we will be able to “lock in” bad values. So instead of the kind of future that techno-utopians want, we’ll have 1984. Mr. MacAskill’s solution: Rather than try to lock in our own values, we should create a “morally exploratory world” in which there is a “long reflection: a stable state of the world in which we are safe from calamity and we can reflect on and debate the nature of the good life, working out what the most flourishing society would be.” That sounds familiar, does it not? In fact, it sounds a lot like the liberal order that developed in Europe from the 14th century until now; you know, the one that made a place for a certain young Oxford don to work out his ideas on the good life? That one! Yet when Mr. MacAskill casts around for examples of what a morally exploratory world might look like, he cites the special economic zone in Shenzhen, China, created in 1979 by Deng Xiaoping.

* * *

William MacAskill clearly possesses a powerful mind. In an earlier age he would have made a formidable theologian or an omnicompetent journalist. But the questions to which he has dedicated himself in this book are absurd ones. They admit of no realistic answers because they were formulated by people who know little and pretend to know everything. Rarely have I read a book by a reputedly important intellectual more replete with highfalutin truisms, cockamamie analogies and complex discussions leading nowhere. Never mind what we owe the future; what does an author owe his readers? In this case, an apology.

Expand full comment

I feel like this version of the repugnant conclusion is overly sneaky because it makes use of a different paradox: the slippery slope, or the sorites problem, where adding one grain of sand at a time eventually produces a heap.

How do you feel about the following less graded version of the repugnant conclusion:

World A: 1bn people, level 100

World B: 1bn people, level 100 + 1bn people, level 1

World C: 2bn people, level 51

Would I choose World B over World A? Would I choose World C over World B? How do I develop an intuition for this?

Might I pretend that I must live the life of every individual in the system? I think I would rather live [1 life at level 100 + 1 life at level 1] than [2 lives at level 51]. So, maybe I don't prefer world C to world B.

In fact, this sort of decision might even take place within a single individual's life span. Isn't that delayed gratification?
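The tension in the three worlds above can be made concrete with a few lines of arithmetic (my own sketch, using the welfare levels given in the comment): total welfare and average welfare rank the worlds differently, which is exactly the lever the repugnant conclusion pulls.

```python
# One list entry = 1bn people at the given welfare level.
worlds = {
    "A": [100],        # 1bn people at level 100
    "B": [100, 1],     # 1bn at level 100 plus 1bn at level 1
    "C": [51, 51],     # 2bn people at level 51
}

for name, levels in worlds.items():
    total = sum(levels)
    average = total / len(levels)
    print(f"World {name}: total = {total}, average = {average}")
```

Total welfare ranks C (102) > B (101) > A (100), while average welfare ranks A (100) > C (51) > B (50.5), so which world is "better" flips depending on the aggregation rule you pick.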

Expand full comment

There's an interesting point here which is "how much can you actually do to fix X?"

There seem to be three groups of AI Ethics/AI Alignment folks:

1. AI Alignment researchers who aren't part of the academic/OpenAI/etc. initiatives, and who largely just wax philosophical about how to fix an AI they do not actually understand. Mathematicians I deeply respect are constantly on about this, saying things like "look, we have implemented robust controls on AI trading systems before, so claiming that it's impossible doesn't hold up. The economy still fundamentally exists, ergo I win". This argument strikes me as pretty strong evidence both that AI Safety is being thought about more carefully than people assume when it comes to giving AI control of important things, *and* that the people who talk a lot about AI Safety aren't actually that well versed in how AI systems actually work.

2. AI Ethics people who are concerned with whether the artificial superintelligence would be invited back to a San Francisco dinner party.

3. AI Researchers who know exactly how AI works, know the stakes, and are just working on increasing capabilities while feeling slightly bad about it.

4. Me, the vaguely educated layman in all of these fields. Why do I feel that way about these groups? It's simple - people like EY make glaring errors when they talk about AI, errors that a well-versed researcher would not make. Meanwhile, the Algorithmic Justice League and their ilk are constantly busy writing systems on the basis of "reducing bias" (where "bias" means "any time the latent manifold contains patterns I don't like"). That second one actually seems more admirable than my cynical reading - if it were not the equivalent of putting out a house fire on top of an in-flight ICBM. The third one comes from extensive interactions with the folks at Eleuther, Anthropic, etc.

In all of these cases, you would have to *change the fundamental social reward curves that make these groups behave as they do* or *select the best one and donate to it*.

However - I have a feeling that the capability of any given organization or movement is hard capped at time X, so at time X, the return on your donation dollar, activist volunteering, or Straussian memeplex-shelfing is going to be capped. Ideally a longtermist would consider when judging "needs of the present" versus "needs of the future" exactly *when* this effect appears. Even if you fully accept "sacrifice all present needs for future needs", there's more than one place where the approach is relevant so it's important to be able to make decisions about whether or not you're hardcapped.

Expand full comment