If we could do something to lower the probability of the human race going extinct,1 that would be really good. But how good? Is preventing extinction more like “saving 8 billion lives” (the number of people alive today), or “saving 80 billion lives” (the number who will be alive over the next 10 generations) … or “saving 625 quadrillion lives” (an Our World in Data estimate of the number of people who could ever be born) … or “saving a comically huge number of lives” (Nick Bostrom argues for well over 10^46 as the total number of people, including digital people, who could ever exist)?
More specifically, is “a person getting to live a good life, when they otherwise would have never existed” the kind of thing we should value? Is it as good as “a premature death prevented?”
Among effective altruists, it’s common to answer: “Yes, it is; preventing extinction is somewhere around as good as saving [some crazy number] of lives; so if there’s any way to reduce the odds of extinction by even a tiny amount, that’s where we should focus all the attention and resources we can.”
I feel conflicted about this.
- I am sold on the importance of specific, potentially extinction-level risks, such as risks from advanced AI. But: this is mostly because I think the risks are really, really big and really, really neglected. I think they’d be worth focusing on even if we ignored arguments like the above and used much more modest estimates of “how many people might be affected.”
- I’m less sold that we should work on these risks if they were very small. And I have very mixed feelings about the idea that “a person getting to live a good life, when they otherwise would have never existed” is as good as “a premature death prevented.”
Reflecting these mixed feelings, I'm going to examine the philosophical case for caring about “extra lives lived” (putting aside the first bullet point above), via a dialogue between two versions of myself: Utilitarian Holden (UH) and Non-Utilitarian Holden (NUH).2
This represents actual dialogues I’ve had with myself (so neither side is a pure straw person), although this particular dialogue serves primarily to illustrate UH's views and how they are defended against initial and/or basic objections from NUH. In future dialogues, NUH will raise more sophisticated objections.
This is part of a set of dialogues on future-proof ethics: trying to make ethical decisions that we can remain proud of in the future, after a great deal of (societal and/or personal) moral progress. (Previous dialogue here, though this one stands on its own.)
A couple more notes before I finally get started:
- The genre here is philosophy, and a common type of argument is the thought experiment: "If you had to choose between A and B, what would you choose?" (For example: "is it better to prevent one untimely death, or to allow 10 people to live who would otherwise never have been born?")
It's common to react to questions like this with comments like "I don't really think that kind of choice comes up in real life; actually you can usually get both A and B if you do things right" or "actually A isn't possible; the underlying assumptions about how the world really works are off here." My general advice when considering philosophy is to avoid reactions like this and think about what you would do if you really had to make the choice that is being pointed at, even if you think the author's underlying assumptions about why the choice exists are wrong. Similarly, if you find one part of an argument unconvincing, I suggest pretending you accept it for the rest of the piece anyway, to see whether the rest of the arguments would be compelling under that assumption.
- I often give an example of how one could face a choice between A and B in real life, to make it easier to imagine - but it's not feasible to give this example in enough detail and with enough defense to make it seem realistic to all readers, without a big distraction from the topic at hand.
- Philosophy requires some amount of suspending disbelief, because the goal is to ask questions about (for example) what you value, while isolating them from questions about what you believe. (For more on how it can be useful to separate values and beliefs, see Bayesian Mindset.)
Dialogue on “extra lives lived”
To keep it clear who's talking when, I'm using -UH- for "Utilitarian Holden" and -NUH- for "non-Utilitarian Holden." (In the audio version of this piece, my wife voices NUH.)
-UH- Let’s start here:
- Let's say that if humanity can avoid going extinct (and perhaps spread across the galaxy), the number of people who will ever exist is about one quintillion, also known as "a billion billion" or "10^18." (This is close3 to the Our World in Data estimate of the number of people who could ever live; it ignores the possibility of digital people, which could lead to a vastly bigger number.)
- If you think that’s 1% likely to actually happen, it’s an expected value of 10^16 people, which is still huge enough to carry the rest of what I'm going to say.
- You can think of it like this: imagine all of the people who exist or ever could exist, all standing in one place. There are about 10^10 such people alive today, and the rest are just “potential people” - that means that more than 99.999% of all the people are “potential people.”
- And now imagine that we’re all talking about where you, the person in the privileged position of being one of the earliest people ever, should focus your efforts to help others. And you say: “Gosh, I’m really torn:
- I think if I focus my efforts on today’s world, maybe I could help prevent up to 10,000 untimely deaths. (This would be a very ambitious goal.)
- Or, I could cause a tiny decrease in extinction risk, like 1% of 1% of 1% of 1%. That would help about 100 million of you.4 What should I do?”
- (Let’s ignore the fact that helping people in today’s world could also reduce extinction risk. It could, but it could also increase extinction risk - who knows. There are probably actions better targeted at reducing extinction risk than at helping today’s world, is the point.)
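The arithmetic behind that "about 100 million" figure can be laid out as a quick sketch, using only the numbers quoted in the bullets above (nothing here is new information, just the multiplication made explicit):

```python
# Figures from the dialogue above.
expected_future_people = 10**18 * 0.01   # 10^18 possible people x 1% chance = 10^16

# Option 1: focus on today's world.
deaths_prevented = 10_000                # "a very ambitious goal"

# Option 2: shave a tiny sliver off extinction risk: 1% of 1% of 1% of 1%.
risk_reduction = 0.01 ** 4               # = 10^-8
future_people_helped = risk_reduction * expected_future_people

# Even this tiny, uncertain risk reduction helps ~100 million people
# in expectation - about 10,000x the ambitious "today's world" option.
assert round(future_people_helped) == 100_000_000
```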
In that situation, I think everyone would be saying: “How is this a question? Even if your impact on extinction risk is small, even if it’s uncertain and fuzzy, there are just SO MANY MORE people affected by that. If you choose to focus on today’s world, you’re essentially saying that you think today’s people count more than 10,000 times as much as future people.
“Now granted, most of the people alive in your time DO act that way - they ignore the future. But someday, if society becomes morally wiser, that will look unacceptable; similarly, if you become morally wiser, you'll regret it. It's basically deciding that 99.999%+ of one’s fellow humans aren’t worth worrying about, just because they don’t exist yet.
“Do the forward-looking thing, the future-proof thing. Focus on helping the massive number of people who don’t exist yet.”
-NUH- I feel like you are skipping a very big step here. We’re talking about what potential people who don’t exist yet would say about giving them a chance to exist? Does that even make sense?
That is: it sounds like you’re counting every “potential person” as someone whose wishes we should be respecting, including their wish to exist instead of not exist. So among other things, that means a larger population is better?
I mean, that’s super weird, right? Like is it ethically obligatory to have as many children as you can?
-UH- It’s not, for a bunch of reasons.
The biggest one for now is that we’re focused on thin utilitarianism - how to make choices about actions like donating and career choice, not how to make choices about everything. For questions like how many children to have, I think there’s much more scope for a multidimensional morality that isn’t all about respecting the interests of others.
I also generally think we’re liable to get confused if we’re talking about reproductive decisions, since reproductive autonomy is such an important value and one that has historically been undermined at times in ugly ways. My views here aren’t about reproductive decisions; they’re about avoiding existential catastrophes. Longtermists (people who focus on the long-run future, as I’m advocating here) tend to focus on things that could affect the ultimate, long-run population of the world - and it’s really unclear how having children or not affects that, because the main factors are things like the odds of extinction and of explosive civilization-wide changes, and it’s unclear how having children affects those.
So let’s instead stay focused on the question I asked. That is: if you prevent an existential catastrophe, so that there’s a large flourishing future population, does each of those future people count as a “beneficiary” of what you did, such that their benefits aggregate up to a very large number?
-NUH- OK. I say no, such “potential future people” do not count. And I’m not moved by your story about how this may one day look cruel or inconsiderate. It’s not that I think some types of people are less valuable than others; it’s that I don’t think increasing the odds that someone ever exists at all is benefiting them.
-UH- Let’s briefly walk through a few challenges to your position. You can learn more about these challenges from the academic population ethics literature; I recommend Hilary Greaves’s short piece on this.
Challenge 1: Future people and the “mere addition paradox”
So you say you don’t see “potential future people” as “beneficiaries” whose interests count. But let’s say that the worst effects of climate change won’t be felt for another 80 years or so, in which case the vast majority of people affected will be people who aren’t alive today. Do you discount those folks and their interests?
-NUH- No, but that’s different. Climate change isn’t about whether they get to exist or not, it’s about whether their lives go better or worse.
-UH- Well, it’s about both. The world in which we contain/prevent/mitigate climate change contains completely different people in the future from the world in which we don’t. Any difference between two worlds will ripple chaotically and affect things like which sperm fertilize which eggs, which will completely change the future people that exist.
So you really can’t point to some fixed set of people that is “affected” by climate change. Your desire to mitigate climate change is really about causing there to be better off people in the future, instead of completely different worse off people. It’s pretty hard to maintain this position while also saying that you only care about “actual” rather than “potential” people, or “present” rather than “future” ones.
-NUH- I can still take the position that:
- If there are a certain number of people, it’s good to do something (such as mitigate climate change) that causes there to be better-off people instead of worse-off people.
- But that doesn’t mean that it’s good for there to be more people than there would be otherwise. Adding more people is just neutral, assuming they have reasonably good lives.
-UH- That’s going to be a tough position to maintain.
Consider three possible worlds:
- World A: 5 billion future people have good lives. Let’s say their lives are an 8/10 on some relevant scale (reducing the quality of a life to a number is a simplification; see footnote for a bit more on this5).
- World B: 5 billion future people have slightly better than good lives, let’s say 8.1/10. And there are an additional 5 billion people who have not-as-good-but-still-pretty-good lives, let’s say 7/10.
- World C: 10 billion future people have good lives, 8/10.
My guess is that you think World B seems clearly better than World A - there are 5 billion “better-off instead of worse-off” future people, and the added 5 billion people seem neutral (not good, not bad).
But I’d also guess you think World C seems clearly better than World B. The change is a small worsening in quality of life for the better-off half of the population, and a large improvement for the worse-off half.
But if World C is better than World B and World B is better than World A, doesn’t that mean World C is better than World A? And World C is the same as World A, just a bigger population.
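For concreteness, here's how the three worlds compare if you score each one by simply summing well-being across people - a quick sketch of one possible scoring rule, not something NUH has agreed to:

```python
# Score each hypothetical world by total well-being (population x quality),
# using the numbers from the dialogue above.
world_a = 5e9 * 8.0                   # 5 billion people at 8/10
world_b = 5e9 * 8.1 + 5e9 * 7.0       # 5 billion at 8.1/10, plus 5 billion at 7/10
world_c = 10e9 * 8.0                  # 10 billion people at 8/10

# On this scoring rule, each step in UH's chain is an improvement,
# so C beats A by transitivity - despite A and C differing only in size.
assert world_a < world_b < world_c
```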
-NUH- I admit my intuitions are as you say. I prefer B when comparing it to A, and C when comparing it to B. However, when I look at C vs. A, I’m not sure what to think. Maybe there is a mistake somewhere - for example, maybe I should think that it’s bad for additional people to come to exist.
-UH- That would imply that the human race going extinct would be great, no? Extinction would prevent massive numbers of people from ever existing.
-NUH- That is definitely not where I am.
OK, you’ve successfully got me puzzled about what’s going on in my brain. Before I try to process it, how about you confuse me more?
Challenge 2: Asymmetry
-UH- Sure thing. Let’s talk about another problem with the attempt to be “neutral” on whether there are more or fewer people in the future.
Say that you can take some action to prevent a horrible dystopia from arising in a distant corner of the galaxy. In this dystopia, the vast majority of people will wish they didn’t exist, but they won’t have that choice. You have the opportunity to ensure that, instead of this dystopia, there will simply be nothing there. Does that opportunity seem valuable?
-NUH- It does, enormously valuable.
-UH- OK. The broader intuition here is that preventing lives that are worse than nonexistence has high ethical value - does that seem right?
Now you’re in a state where you think preventing bad lives is good, but preventing good lives is neutral.
But the thing is, every time a life comes into existence, there’s some risk it will be really bad (such that the person living it wishes they didn’t exist). So if you count the bad as bad and the good as neutral, you should think that each future life is a bad thing in expectation - some chance it’s bad, some chance it’s neutral. So you should want to minimize future lives.
Or at the civilization level: say that if humanity continues existing, there’s a 99% chance we will have an enormous (at least 10^18 people) flourishing civilization, and a 1% chance we’ll end up in an equally enormous, horrible dystopia. And even the flourishing civilization will have some people in it who wish they didn’t exist. Confronting this possibility, you should hope that humanity doesn’t continue existing, since then there won’t be any of these “people who wish they didn’t exist.” You should, again, think that extinction is a great ethical good.
-NUH- Yikes. Like I’ve said, I don’t think that.
-UH- In that case, I think the most natural way out of this is to conclude that a huge flourishing civilization would be good enough to compensate - at least partly - for the risk of a huge dystopia.
That is: if you’re fine with a 99% chance of a flourishing civilization and a 1% chance of a dystopia, this implies that a flourishing civilization is at least 1% as good as a dystopia is bad.
And that implies that “10^18 flourishing lives” are at least 1% as good as “10^18 horribly suffering lives” are bad. 1% of 10^18 is a lot, as we’ve discussed!
-NUH- Well, you’ve definitely made me feel confused about what I think about this topic. But that isn’t the same as convincing me that it’s good for there to be more persons. I see how trying to be neutral about population size leads to weird implications. But so does your position.
For example, if you think that adding more lives has ethical value, you end up with what's called the repugnant conclusion. Actually, let’s skip that and talk about the very repugnant conclusion. I’ll give my own set of hypothetical worlds:
- World D has 10^18 flourishing, happy people.
- World E has 10^18 horribly suffering people, plus some even larger number (N) of people whose lives are mediocre/fine/”worth living” but not good.
There has to be some “larger number N” such that you prefer World E to World D. That’s a pretty wacky seeming position too!
-UH- That’s true. There’s no way of handling questions like these (aka population ethics) that feels totally satisfactory for every imaginable case.
-NUH- That you know of. But there may be some way of disentangling our confusions about this topic that leaves the anti-repugnant-conclusion intuition intact, and leaves mine intact too. I’m not really feeling the need to accept one wrong-seeming view just to avoid another one.
-UH- “Some way of disentangling our confusions” is what Derek Parfit called theory X. Population ethicists have looked for it for a while. They’ve not only not found it, they’ve produced impossibility theorems heavily implying that it does not exist.
That is, the various intuitions we want to hold onto (such as “the very repugnant conclusion is false” and “extinction would not be good” and various others) collectively contradict each other.
So it looks like we probably have to pick something weird to believe about this whole “Is it good for there to be more people?” question. And if we have to pick something, I’m going to go ahead and pick what’s called the total view: the view that we should maximize the sum total of the well-being of all persons. You could think of this as if our “potential beneficiaries” include all persons who ever could exist, and getting to exist6 is a benefit that is capable of overriding significant harms. (There is more complexity to the total view than this, but it's not the focus of this piece.)
I think there are a number of good reasons to pick this general approach:
- It’s simple. If you try to come up with some view that thinks human extinction is neither the best nor the worst thing imaginable, your view is probably going to have all kinds of complicated and unmotivated-seeming moving parts, like the asymmetry between good and bad lives discussed above. But the idea behind the total view is simple: it just counts everyone (including potential someones) as persons whose interests are worth considering. Simplicity fits well into my goal of systemizing ethics, so that my system is more robust and relies on fewer intuitions.
- Having considered the various options for “which weird view to take on,” I think “the very repugnant conclusion is actually fine” does pretty well against its alternatives. It’s totally possible that our intuitive aversion to it comes from just not being able to wrap our brains around some aspect of (a) how huge the numbers of “barely worth living” lives would have to be, in order to make the very repugnant conclusion work; or (b) something that is just confusing about the idea of “making it possible for additional people to exist.”7
- And is it really so unintuitive, anyway? Imagine you learned that some person made a costly effort to prevent your ancestors’ deaths, 1000 years ago, and now you are here today. Aren’t you glad you exist? Don’t you think your existence counts as part of the good that person accomplished? (More of this kind of thinking in this blog post by my coworker Joe Carlsmith.) Is your take on the fact that 10^18 people might or might not get to exist really just “It doesn’t ethically matter whether this happens?”
-NUH- Maybe part of what’s confusing here is something like:
- I’m not indifferent to extra happy lives - they are better than nothing, after all.
- But if the only or main way I was improving the world was allowing extra happy lives to exist, that wouldn’t be right.
- So maybe extra lives matter up to some point, and then matter less. Or maybe it’s true that “an extra life is a good thing” but not that “lots of extra lives can be more important than helping people who already exist.”
-UH- That approach would contradict some of the key principles of “other-centered ethics” discussed previously.
I previously argued that once you think something counts as a benefit, with some amount of value, a high enough amount of that thing can swamp all other ethical considerations. In the example we used previously, enough of “helping someone have a nice day at the beach” can outweigh “helping someone avoid a tragic death.”
-NUH- If this were a philosophy seminar, I would think you were making a perfectly good case here.
But the feeling I have at this juncture is not so much “Ah yes, I see how all of those potential lives are a great ethical good!” as “I feel like I’ve been tricked/talked into seeing no alternative.”
I don’t need to “pick a theory.” I can zoom back out to the big picture and say “Doing things that will make it possible for more future people to exist is not what I signed up for when I set out to donate money to make the world a better place. It’s not the case that addressing today’s injustices and inequities can be outweighed by that goal.”
I don’t need a perfectly consistent approach to population ethics, I don’t need to follow “rules” when giving away money. I can do things that are uncontroversially valuable, such as preventing premature deaths and improving education; I can use math to maximize the amount of those things that I do. I don’t need a master framework that lands in this strange place.8
-UH- Again, I think a lot of the detailed back-and-forth has obscured the fact that there are simple principles at play here:
- I want my giving to be about benefiting others to the maximum extent possible. I want to spend my money in the way that others would want me to if they were thinking about it fairly/impartially, as in the “veil of ignorance” construction. If that’s what I want, then giving that can benefit enormous numbers of persons is generally going to look best. (Discussed previously)
- There’s a question of who counts as “others” that I can benefit. Do potential people who may or may not end up getting to exist? I need a position on this. Once I let someone into the moral circle, if there are a ton of them, they’re going to be the ones I’m concerned about.
- On balance, it seems like “potential people who may or may not end up getting to exist” probably belong in the moral circle. That is, there’s a high chance that a more enlightened, ethical society would recognize this uncontroversially.
I might end up getting it wrong and doing zero good. So might you. I am taking my best shot at avoiding the moral prejudices of my day and focusing my giving on helping others, defined fairly and expansively.
For further reading on population ethics, see:
- Population Axiology - a ~20 page summary of dilemmas in population ethics
- Stanford Encyclopedia of Philosophy on the Repugnant Conclusion - shorter, goes over many of the same issues
- Chapter 2 of On the Overwhelming Importance of Shaping the Far Future
I feel a lot of sympathy for the closing positions of both UH and NUH.
I think something like UH’s views do, in fact, give me the best shot available at an ethics that is highly “other-centered” and “future-proof.” But as I’ve pondered these arguments, I’ve simultaneously become more compelled by some of UH’s unusual views, and less convinced that it’s so important to pursue an “other-centered” or “future-proof” ethics. At some point in the future, I’ll argue that these ideals are probably unattainable anyway, which weakens my commitment to them.
Ultimately, if we put this in a frame of "deciding how to spend $1 billion," the arguments in this and previous pieces would move me to spend a chunk of it on targeting existential risk reduction - but probably not the majority (if they were the only arguments for targeting existential risk reduction, which I don't think they are). I find UH compelling, but not wholly convincing.
However, there is a different line of reasoning for focusing on causes like AI risk reduction, which doesn’t require unusual views about population ethics. That’s the case I’ve presented in the Most Important Century series, and I find it more compelling.
1. I’m sticking with “extinction” in this piece rather than discuss the subtly different idea of “existential catastrophe.” The things I say about extinction mostly apply to existential catastrophe, but I think that term adds needless confusion for this particular purpose. ↩
3. It's about 1.5x as much as the Our World in Data estimate, and not everyone would call that "close" in every context, but I think for the purposes of this piece, the two numbers have all the same implications, and "one quintillion" is simpler and easier to talk about than 625 quadrillion. ↩
4. "1% of 1% of 1% of 1%" is 1%*1%*1%*1%. 1%*1%*1%*1%*10^16 (the hypothesized number of future lives, including only a 1% chance that humanity avoids extinction for long enough) = 10^8, or 100 million. ↩
5. For this example, what the numbers are trying to communicate is that all things considered, some of these lives would rank more highly than others if people were choosing what conditions they would want to live their life under. They're supposed to implicitly incorporate reactions to things like inequality - so for example, World B, which has more inequality than Worlds A and C, might have to have better conditions to "compensate" such that the 5 billion people with "8.1/10" lives still prefer their conditions in World B to what they would be in World A. ↩
6. (Assuming that existence is preferred to non-existence given the conditions of existence) ↩
8. This general position does have some defenders in philosophy - see https://plato.stanford.edu/entries/moral-particularism/ ↩