It’s surprisingly difficult to come up with a satisfactory theory of population ethics.
Man, I have the complete opposite impression. Simple total utilitarianism seems obviously correct to me, at least on a system-2 level.
Simple average utilitarianism seems correct. All you get as downside is the ‘sadistic conclusion’, which is obviously true anyway and ought to fall out of any well-functioning ethical system.
I don’t think the sadistic conclusion (defined in this paper) is obviously true, although it strikes me as much less bad than the repugnant conclusion (it’s the difference between “repugnant” and “kinda weird”).
But I think none of these theories is complete without a clear sense of what positive and negative utility mean (in the real world) and where the zero point is supposed to be located. Much of the confusion over these “conclusions” arises from the fact that people have no clear picture of what a “life (not) worth living” actually looks like. But a lot of the content of the ethical theory lies in how you define this, since in practice, it determines whether you think a person’s existence is a good thing or not. (In total utilitarianism this is always the case, while in average utilitarianism this is only approximately true, for large populations, but in practice we are faced with a large global population.)
I talked a lot more about this here. Specifying a zero point for utility, in human terms, seems really hard, and I’m not satisfied with any utilitarian theory that hasn’t done that work.
You don’t have to specify a zero point for utility for anything except total utilitarianism (although I agree that having to do so is a serious problem for total utilitarianism).
In average utilitarianism, whether a person’s existence is a good thing is dependent on the average of everyone else’s utility. You can call that the zero point if you like, but it’s more a case of being able to pick any zero point you like, because choosing a zero point is just applying a constant offset to everyone’s utility (which won’t change any utility comparisons, and hence won’t affect any decisions).
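The offset-invariance claim is easy to check directly. A quick sketch (the populations and utility numbers are made up for illustration):

```python
# Under average utilitarianism, adding a constant offset c to every
# person's utility shifts every population's average by exactly c,
# so comparisons between populations are unchanged.
def avg(pop):
    return sum(pop) / len(pop)

world_a = [3.0, 5.0, 7.0]        # hypothetical utilities
world_b = [4.0, 4.0, 4.0, 4.0]

c = 10.0  # arbitrary choice of "zero point" offset
shifted_a = [u + c for u in world_a]
shifted_b = [u + c for u in world_b]

print(avg(world_a) > avg(world_b))      # True
print(avg(shifted_a) > avg(shifted_b))  # True -- same comparison
```

The same is not true of total utilitarianism, since an offset changes each population's total by c times its size, which differs between populations of different sizes.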
Of course, the so-called “sadistic conclusion” becomes much less striking a problem once you acknowledge that a “life worth living” isn’t a coherent concept*, and you can just do the math on whether a given population change increases or decreases average utility. That’s a point in favor of average utilitarianism, not against it!**
Likewise, the various versions of person-affecting ethics (of which I am a fan***) don’t require you to calculate a zero point. You don’t even have to do an absolute interpersonal utility comparison (ie, this person has higher utility than that person): you’re only ever concerned with changes in utility (though of course you still have to be able to do interpersonal comparisons on the changes).
*Or, if you define “a life just barely worth living” as “a life with exactly average utility”, the sadistic conclusion doesn’t follow from average utilitarianism, because you’ll never want to create a below average life instead of any number of above average lives.
**The weird part of average utilitarian population ethics is the part where whether a human on Earth having a baby is good or bad is decided by how happy the aliens in the Andromeda Galaxy are.
***Because they avoid the Andromeda Galaxy conclusion.
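The Andromeda point in footnote ** can be made concrete with a toy calculation (all populations and utility values are invented for the example): under average utilitarianism, creating a new person is good exactly when their utility exceeds the current average of everyone, aliens included.

```python
def avg(pop):
    return sum(pop) / len(pop)

earth = [5.0, 6.0]   # hypothetical utilities of existing humans
baby = 5.5           # utility of the prospective child

happy_aliens = [9.0] * 100
sad_aliens = [1.0] * 100

# Creating the baby is good iff it raises the average of the
# whole population -- which depends on the aliens' happiness.
print(avg(earth + happy_aliens + [baby]) > avg(earth + happy_aliens))  # False
print(avg(earth + sad_aliens + [baby]) > avg(earth + sad_aliens))      # True
```

The same baby, with the same life, comes out as a bad creation next to happy aliens and a good one next to miserable aliens, even though nothing about anyone on Earth has changed.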
No, you’re right – I wasn’t thinking correctly about average utilitarianism there.
My qualms with A.U. are basically that there are cases where my intuitions seem more “total,” just like my qualms with T.U. are that there are cases where my intuitions seem more “average.” The former are cases where certain lives really do seem intrinsically bad, such as people in agony who would commit suicide if they could, but can’t. (Cf. Parfit’s two/three hells.)
Overall my objections to AU are far less worrying than my objections to TU, but for the contingent, practical reason that my problems with AU come up much less often in the cases I happen to be faced with. AU has problems with people who want to commit suicide but can’t; TU has problems with everyone, in that it needs to specify a zero point, and that the usual low placement of the zero point (“life is worth living”) demands that we place an absurdly high value on the creation of children.
But I don’t feel that AU is the correct theory; if I lived in one of Parfit’s hells then I would find the implications of AU abhorrent. It just so happens that I don’t.
(ETA: on top of all of this, we have the problem of putting hard numbers on utility differences IRL. Tradeoff questions only get you so far)
I almost want to shout “have you heard the good news about the person-affecting principle?”, but it’s not clear to me if your only problem is that A.U. makes the creation of people in the lesser-but-still-worse-than-death Hell B actively good (which person-affecting morality does away with, because creating a new person in Hell B isn’t good for anyone in Hell A, even if it raises the average utility of the A+B system), or your intuition is that creation of people in Hell B is actively evil (which person-affecting morality doesn’t provide, at least not straightforwardly, though it allows the ongoing existence of those people to be considered morally worse than allowing them to commit suicide).
I’ve heard the good news, but I do think that creating people in Hell B is actively evil. The basic feeling behind person-affecting views is appealing to me, but it seems vulnerable to this basic problem of “callousness to persons who will exist”: we ignore wishes we know people will have in the future simply because they don’t exist now.
The idea I want to capture is “nonexistent people can’t have preferences about whether to be created or not, but it’s wrong to create a person if you know they’ll want to be un-created, but won’t be able to.”
I have a feeling that there is some simple and elegant answer here involving asymmetries between creation and destruction. A nonexistent person keeps not existing until someone chooses to create them; an existing person keeps existing until someone or something destroys them. I am getting that sort of tingling, tip-of-your-tongue feeling I get when I feel like I am glimpsing the answer to a math problem intuitively, but don’t know how to formalize it yet. (I thought about it for 15 minutes last night without getting anywhere, so I’m going to shelve it for now lest it become a new obsession.)
(via fnord888)