An Alternative Population Ethics

Let’s say we believe in consequentialism and utilitarianism. Roughly, we hold that the morality of an action depends only on its consequences, and the only consequences we care about are changes in the amount of happiness in the world. We wish to increase the amount of happiness as much as possible. We are rational, so every time we act we will choose the action that we expect to lead to the outcome with maximum happiness. (This is essentially what effective altruists try to do.)

Now, the world we live in has many human (and maybe nonhuman) individuals with moral weight. To make our life easier, let’s say we can quantify the happiness (also called welfare or utility) of an individual. That utility is a real number, positive if the individual’s life is worth living, zero if it is neutral, and negative if the individual is better off not existing. Let’s also assume we know, for each action, the happiness outcome of each individual.1

If none of the actions changes the identity or number of the agents that exist, choosing is easy: the best action is the one that maximises the sum2 of happiness.

Population axiologies

However, what if the actions can also change the number of individuals in the population? That happens in many cases: deciding whether to have a child yourself, designing monetary incentives and tax policies to encourage or discourage having children, giving away or selling contraception… For these cases, we need to be able to compare populations with different numbers of individuals.

Thus we get to the problem of population axiology: give a complete, transitive “at least as good as” ordering of all possible populations, based solely on people’s utility.

Let’s borrow notation from Arrhenius (2000). Let $A$, $B$, $C$, $B \cup C$, … be populations, i.e. sets of individuals. We may indicate the population size: $A_n$ is a population of $n$ individuals $a_1, a_2, \dots, a_n$. The numerical representation of $a_i$’s welfare is given by the function $v(a_i)$. Since the ordering is transitive, we can safely represent it with a “$\geq$” symbol. The problem can then be written: for any two populations $A_n$ and $B_m$, is $A_n \geq B_m$ or $A_n \leq B_m$?

Let’s take care of the case where $n \neq m$. We could, again, just care about the sum of utility, so that $A \geq B$ if and only if $\sum_{a_i \in A} v(a_i) \geq \sum_{b_j \in B} v(b_j)$. But this opens us up to the Repugnant Conclusion (Parfit, 1984) (video explanation by Julia Galef): a population of $10^{100}$ people with welfare 1 each would be much better than a population of 10 billion people with welfare $10^{80}$ each.
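To see why the sum forces this verdict, spell out the totals behind the comparison (10 billion is $10^{10}$):

$$ 10^{100} \times 1 = 10^{100} \;>\; 10^{10} \times 10^{80} = 10^{90}, $$

so total utilitarianism must rank the enormous population of barely-worth-living lives above the smaller, blissful one.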

And this, to most people, is unpalatable. However, some philosophers think we should accept the Repugnant Conclusion. Tännsjö (2002) argues that when we picture a “life barely worth living” we picture precarious hygienic conditions, lack of food, and depending on the weather for one’s crops; but that it is actually our materially comfortable first-world lives that are barely worth living. He also argues that we fail to appreciate the obvious goodness of the enormous increase in population because of our scope insensitivity bias.

Still, the Repugnant Conclusion created the problem of population axiology and led to the following findings.

Impossibility theorems

Many alternatives to summing utility have been proposed: taking the average utility; giving each additional life less weight, so that any sum of bounded utilities converges; or setting a “critical level” of well-being, above that of a life barely worth living, which added lives must exceed to count as an improvement. See the introduction by Greaves (2016).
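To make these alternatives concrete, here is a minimal sketch in Python of how each rule might score a population given as a list of welfare numbers. The function names and parameter values are mine, chosen purely for illustration; they are not taken from the cited papers.

```python
def total_utility(welfares):
    """Classical total utilitarianism: sum everyone's welfare."""
    return sum(welfares)

def average_utility(welfares):
    """Average utilitarianism: total welfare divided by population size."""
    return sum(welfares) / len(welfares)

def diminishing_total(welfares, weight=0.99):
    """Each additional life counts a bit less, so the sum converges even for
    arbitrarily large populations of bounded welfare."""
    return sum(w * weight**k for k, w in enumerate(sorted(welfares, reverse=True)))

def critical_level_total(welfares, critical=5.0):
    """Critical-level utilitarianism: a life contributes positively only if
    its welfare exceeds the critical level."""
    return sum(w - critical for w in welfares)

# A small-scale version of the Repugnant Conclusion comparison:
huge_mediocre = [1] * 10**6      # very many lives barely worth living
small_blissful = [1000] * 100    # a few lives with very high welfare

print(total_utility(huge_mediocre) > total_utility(small_blissful))      # True: the "repugnant" verdict
print(average_utility(huge_mediocre) > average_utility(small_blissful))  # False: average utility avoids it
```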

Arrhenius showed, in 2011, the impossibility of a satisfactory population ethics. After his already discouraging results of 2000, these findings challenged population ethics even as a moral framework. Both papers present compelling conditions that a population axiology ought to satisfy, represent them formally, and mathematically prove that no axiology can satisfy them all. I will describe the conditions of the 2011 paper:

  • The ordering is complete (no two populations are incomparable) and transitive ($A \geq B$ and $B \geq C$ implies $A \geq C$).

  • Given two populations of the same size and with all the individuals having the same welfare, the one with higher sum3 of welfare is not worse.

  • Let $A_n$ be a population of $n-1$ people with very high welfare and one person with welfare $w$. Let $B_n$ be a population of $n-1$ people with very low positive welfare and one person with welfare $w+1$. Then, for any population $X$, there exists an $n$ sufficiently large such that $A_n \cup X \geq B_n \cup X$. Roughly, raising enough people in a population from low positive to very high welfare compensates for one individual having slightly lower welfare.

  • Let $B_n$ be a population of $n$ individuals with welfare $w$. Let $A_n$ be a population of one individual with welfare $w+1$ and $n-1$ individuals with welfare $c$, $c < w$. Given a population $X$ in which everyone’s welfare is between $c$ and $w$, there exists a sufficiently large $n$ such that $B_n \cup X \geq A_n \cup X$. Roughly, making a lot of people better off, and all equally happy, at the expense of a very small loss to one person does not make things worse.

  • There is at least one negative welfare level $w<0$, such that adding any number of people with positive welfare $v>0$ is not worse than adding a person with welfare $w$.

  • For any population $X$, there exists a perfectly equal very-high-welfare population $A_n$ and a very-negative-welfare population $B_m$ such that $X \cup A_n \geq X \cup B_m \cup C_l$ for all, arbitrarily large, values of $l$, where population $C_l$ is composed of lives with very low positive welfare. This amounts to avoiding a Repugnant Conclusion on steroids, in which not only is there a vast number of barely-worth-living lives, but there also exist many lives that are much worse than not existing.

Reiterating: it is impossible to have all these properties in the same population axiology.

Side-stepping the issue: dropping the “axiology” requirement

Notice that the desirable properties of the “population axiology” can be framed as transitions between populations. This closely mirrors (one of) the intended uses of such an ordering, namely deciding which of any two actions with known4 outcomes is the more moral. Let’s take that strong hint, and re-frame the choice not as deciding between any two populations, but as deciding between any two populations reachable from the status quo.

I propose ascribing moral weight only to individuals who currently exist. When taking a decision, choose the outcome that maximises the sum5 of the utility of the people who are alive at that moment. This way of looking at population ethics is unfortunately not an axiology. An axiology would let us output a single number measuring the utility of the whole population, and use it as a drop-in replacement for single-agent utility in all the game theory (and game practice, also called Artificial Intelligence, AI) that we know. Instead, our criterion is only useful when an agent is taking a decision based on a model of the world and on the future outcomes of its current action.
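Written compactly, reusing the $U(i, a)$ notation of footnote 2 and writing $P_0$ for the set of individuals alive at the moment of decision ($P_0$ is just shorthand introduced for this formula), the criterion picks

$$ a^* = \arg\max_{a \in A} \; \sum_{i \in P_0} U(i, a). $$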

We can illustrate this more easily within the somewhat more restricted framework of AI as online search (Russell and Norvig, 2009, Section 4.5). The following figure shows a decision tree. At each node we face a decision, with one branch for each course of action we can take. We are currently in the bottom-most node, the root, at time instant $t=0$. At each time step, the individuals in our population have a positive or negative experience. At $t=3$ we have the expected return6 7 for each chain of actions. We depict, surrounded in green, the utility of the individuals who are alive when the decision is taken, and, in purple, the utility of those who start existing later. The dotted green-purple line marks the moment at which, in the possible future of each path, at least one new individual exists.

Illustration of our decision process for populations.

The decision process can be viewed as adversarial search (Russell and Norvig, 2009, Chapter 5). In this game we have two players, represented as blue and green edges in the picture. Blue plays in states where at least one new individual has been born, that is, above the dotted line; Green plays below it. Green plays at the root node, which means the agent that is deciding is Green. Green cares about maximising the return of the people who exist at $t=0$, that is, the return we depict surrounded in green. Blue wants to maximise the utility of all the people, that is, the sum of the numbers surrounded in green and purple. The edge highlighted in a player’s colour is the edge that player would choose if it were to play from that edge’s parent node. Notice that, in the right-most outcome, Blue never plays, since nobody is born.

In our example, we choose the bubble with (13, -4), because it is the one with the highest Green-utility out of the options Blue leaves available.
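As a minimal sketch of this two-player search, consider the following Python code. The tree, the numbers, and all names (`Node`, `best_outcome`, `birth_happened`, and so on) are invented for illustration and do not reproduce the figure; the point is only the alternation between Green, who maximises the return of those alive at $t=0$, and Blue, who maximises everyone’s return once a birth has occurred.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    """A decision node. Leaves carry the return of the individuals alive at
    t=0 ("green") and of those born later ("purple") along that path."""
    birth_happened: bool                      # are we above the dotted line?
    green: float = 0.0                        # return of individuals alive at t=0
    purple: float = 0.0                       # return of individuals born later
    children: List["Node"] = field(default_factory=list)

def best_outcome(node: Node) -> Tuple[float, float]:
    """The (green, purple) leaf that play reaches from this node, assuming
    Green moves wherever no birth has happened yet and Blue moves afterwards."""
    if not node.children:
        return node.green, node.purple
    outcomes = [best_outcome(child) for child in node.children]
    if node.birth_happened:
        # Blue: maximise the total return of everyone, green plus purple.
        return max(outcomes, key=lambda o: o[0] + o[1])
    # Green: maximise only the return of those alive at t=0.
    return max(outcomes, key=lambda o: o[0])

# A toy tree, loosely in the spirit of the figure. The root is Green's move.
tree = Node(birth_happened=False, children=[
    Node(birth_happened=True, children=[               # someone is born on this path
        Node(birth_happened=True, green=13, purple=-4),
        Node(birth_happened=True, green=20, purple=-30),
    ]),
    Node(birth_happened=False, children=[               # nobody is ever born here
        Node(birth_happened=False, green=10, purple=0),
    ]),
])

print(best_outcome(tree))  # (13, -4)
```

On this toy tree, Blue prunes the (20, -30) outcome because its total is worse than that of (13, -4), and Green then prefers (13, -4) over the birth-free (10, 0) branch: the best Green return among the options Blue leaves available.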

This system behaves as if we decided to increase the utility of the existing people now, knowing that, when new people are born, we will care about them too. It resembles our current society’s treatment of babies: you can use contraception to prevent a baby’s existence all you want, but once you have a baby, if you mistreat or kill it you will face severe consequences. Barring contraception accidents, children exist because their parents thought they would be happier raising them.8

Does this avoid the Repugnant Conclusion?

Yes. We start from a population of $n$ individuals with some level of well-being. The Repugnant Conclusion entails sacrificing these individuals’ utility to create many more individuals with low positive welfare, so at some point we have to allow a birth. (If we have no choice over this, it is useless to apply any moral theory.) Thus we can decide, at some point along the way, whether or not to bring an individual into existence. We know that, if we bring that individual into existence, we will be morally bound to decrease the others’ well-being for its sake. So we decide for the good of the existing beings, and do not bring the individual into existence.9

This can fail when the decider lacks sufficient foresight and incorrectly predicts that the new individual will not decrease the others’ well-being. But that is a problem of our prediction mechanism, not of our decision criterion. Consider the following situation of an agent maximising its utility:

Insufficient prediction power causes bad decision.

The agent chooses the right-most node, because it thinks that node will lead to a return of 14. But, one time step beyond the agent’s planning horizon, it receives a very low reward, and thus a very low return. This should be alleviated in some way, but that is outside the scope of this post.
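A tiny sketch of this failure mode, ignoring discounting for simplicity; the numbers are invented, though the foreseen value of 14 is chosen to echo the example above:

```python
def horizon_value(rewards, horizon):
    """Value an action path by summing only the rewards the agent can foresee."""
    return sum(rewards[:horizon])

# Two candidate action paths; the agent can only look 3 steps ahead.
safe   = [4, 4, 4, 4]      # steady, true total 16
greedy = [5, 5, 4, -100]   # looks better within the horizon, then collapses

best = max([safe, greedy], key=lambda path: horizon_value(path, horizon=3))
print(best is greedy)            # True: the limited lookahead fools the agent
print(sum(safe), sum(greedy))    # 16 vs -86
```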

So when the prediction mechanism works as intended, and the world does not take such drastic turns, the decision criterion avoids the Repugnant Conclusion.

Does this fail in any other way?

Yes. The criterion as-is needs at least one amendment. Currently, an agent deciding by this criterion will not hesitate to create arbitrarily many lives with negative utility in order to increase the utility of the people who are already alive by just a little. One might think the commitment to a life once it exists would fix that, but it does not. If Green can constrain its future decisions in some way, it can take all the necessary decisions before anyone is born, and thus completely prevent Blue from playing. As a real-world example, Green becomes president, passes laws mandating the construction of inhumane but cheap factory farms and laws preventing anyone from interfering with those farms, and then removes itself from office before any farm is in operation, for example by committing a crime. Green will be very sad when the farms start operating, but its purpose at the moment of decision will have been fulfilled: maximum utility for the beings who were alive back then.

We could forbid the creation of any lives with negative welfare entirely. But that would outlaw really positive scenarios, such as one person working a shitty job and having nobody to talk to, in order for $10^{100}$ people to live lives with $10^{100}$ welfare. No, instead, we need to forbid bad trade-offs.10

The solution: when a purple life has negative welfare, it gets counted into the green utility. Thus, future net-negative lives have the same weight as present net-negative lives. This reflects the Asymmetry Intuition: we may be indifferent about making happy people, but we prefer making people happy, not making miserable people, and not making people miserable (Benatar, 2008).
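In terms of the earlier sketch, this is a small change to Green’s evaluation: the return of each future (purple) individual is added in whenever it is negative, and ignored otherwise. This needs the per-individual purple returns rather than the single lumped number used above; the function below is only my illustrative reading of the rule.

```python
def green_value(green_return, purple_returns):
    """Amended Green objective: the return of the individuals alive at t=0,
    plus the return of every future individual whose life is net negative.
    Net-positive future lives still add nothing; net-negative ones count in full."""
    return green_return + sum(r for r in purple_returns if r < 0)

# Creating three miserable future lives to gain +1 for the living is now a bad trade:
print(green_value(10, []))            # 10  -- status quo
print(green_value(11, [-5, -5, -5]))  # -4  -- worse, so Green no longer chooses it
```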

In practice, if we end up having an AI that needs to make this sort of decision, I suspect it will lean towards simply not creating any lives, and creating lives with negative welfare should be rare (to what end? Humans are slow and inefficient at working, and they decrease utility when unhappy). If the decider is an economist or politician, they can use this rule, but they wouldn’t do anything truly absurd; they are human.

Closing thoughts

Based on insights from AI and a few moral intuitions, I have constructed a decision criterion for a changing population, to be applied when considering possible futures. The criterion avoids the Repugnant Conclusion, behaves like a simple sum of utilities when the population doesn’t change, mirrors our currently accepted morals for the treatment of nonexistent individuals, and doesn’t seem to have any glaring holes.

Of course, such holes may be found; if you do find one, please comment, email me, or otherwise let me know.

References

(I recommend the Google Scholar Button browser add-on for comfortable fetching of papers, but I’ve included links for convenience here.)

[Blog] Armstrong, S., 2014. Integral versus differential ethics.

[PDF] Arrhenius, G., 2000. An Impossibility Theorem for Welfarist Axiologies. Economics and Philosophy, 16: 247–266.

[PDF] Arrhenius, G., 2011. The impossibility of a satisfactory population ethics. Descriptive and normative approaches to human behavior.

[Book] Benatar, D., 2008. Better never to have been: The harm of coming into existence. Oxford University Press. Page 32.

[PDF] Greaves, H., 2016. Population axiology.

[Book] Parfit, D., 1984. Reasons and persons. OUP Oxford.

[Book] Russell, S. and Norvig, P., 2009. Artificial Intelligence: A Modern Approach. 3rd ed. Prentice Hall Press, Upper Saddle River, NJ, USA. ISBN: 0136042597, 9780136042594.

[PDF] Tännsjö, T., 2002. Why we ought to accept the repugnant conclusion. Utilitas, 14(3), pp.339-359.


  1. I sometimes write “people” in this essay because it reads more naturally, but you may substitute “individuals with moral weight” every time. Also, since we quantify utility numerically, a moral weight can really be a numeric weight by which to multiply that individual’s utility. ^
  2. Or average, if you prefer; it makes no difference. Quick proof: let $A$ be the set of possible actions, and let $U : I \times A \to \mathbb{R}$ be the function such that $U(i, a)$ gives the utility, a real number, of individual $i$ when taking action $a$. Given the set of individuals $I$ and actions $A$, clearly $$ \forall a, a' \in A \quad \sum_i U(i, a) \geq \sum_i U(i, a') \iff \frac{\sum_i U(i, a)}{|I|} \geq \frac{\sum_i U(i, a')}{|I|} $$ since both sides are divided by the same positive constant $|I|$. ^
  3. Again, or average, see footnote 2. ^
  4. If you believe in expected utility, you can extend this to population outcomes with probabilities. While I don’t know of any alternative to it, expected utility has some problems, notably Pascal’s Mugging. ^
  5. See footnotes 2 and 3. Aren’t populations with the same number of individuals wonderful? ^
  6. This term is taken from the artificial intelligence literature, specifically from sequential decision processes, which is what we are facing here. The idea is that, at each time step $i$, the agent receives a numeric reward $r_i$. The agent also has a discount rate $\gamma$, $0 \leq \gamma \leq 1$, which models how much the agent cares about receiving rewards sooner.11 The return at time $t$, $R_t$, is the sum of all discounted rewards up to that time, that is: $$ R_t = r_0 + \gamma r_1 + \dots + \gamma^t r_t = \sum_{i=0}^t \gamma^i r_i $$ The path of actions with the highest return at time $t$, when the agent’s decisions end ($t$ can also be infinite, if the decisions never end), is the path the rational agent prefers. ^
  7. Why are we using returns and not utility over the whole lifetime now? We have moved down into the realm of making decisions while the population is changing, while individuals experience pain or pleasure and are born and die. It makes much more sense to think about the pain or pleasure each individual experiences during a time interval, rather than over their whole lifetime. This is the notion of return, hence our use of it, and why we use the term somewhat interchangeably with “utility” around this passage. ^
  8. Be it because they enjoy the raising, because their child being happy makes them happy, or to make their own parents happy. But it all reflects back to the parents’ own projected well-being. ^
  9. This idea very much resembles that of Integral Ethics (Armstrong, 2014), which was an inspiration for this post’s approach. ^
  10. Edit 20 November 2016: Thanks to /u/bayen for pointing out a problem with the previous rule. ^
  11. In economic terms, $\gamma$ is the time preference of the agent. Note that the terms “high” and “low” are reversed for time preferences and discount rates. A high $\gamma$ means a low time preference, and vice versa. ^