Honestly though I do find thinking about these kinds of ethical puzzles really interesting, because it reveals things about ethical sentiments that I would never have thought about otherwise
E.g. I think part of why the dust speck scenario seems so obviously preferable is that if I were someone who knew someone was being tortured to prevent me getting a dust speck in my eyes, I would feel horrible about it. Knowing that would ruin my life. And if someone offered me the option to excise this knowledge from my mind, I wouldn’t at all feel like this option would make everything OK again — so clearly it isn’t the knowledge that I think matters, even if it’s what directly causes unhappiness
I feel like the possibility for empathy needs to be somehow incorporated into the utilitarian framework even in cases where empathy isn’t actually felt — because that’s what people would want, if they knew
(On a more boringly technical note, there are unresolved problems with summing up utilities over more than one person the way E.Y. wants to in those posts, although I am sure he is aware of this fact and has some proposed solution)
Could we view this at least in part as a distinction between Act utilitarianism and Rule utilitarianism? It seems like “don’t torture innocent people” is a rule that in the great majority of cases produces the greatest good, and this corresponds fairly well to the notion that “following rules that tend to lead to the greatest good will have better consequences overall than allowing exceptions to be made in individual instances, even if better consequences can be demonstrated in those instances.” (Source: wikipedia.)
I’m that rare person who’s actually pretty pro-Omelas. To my mind the lesson of the story is mostly one about the lengths to which people will go in order to avoid feelings of guilt and moral responsibility. There are thousands, maybe millions of children right now living in conditions a hell of a lot like those of that one child in Omelas, without even the direct provision of happiness to others to give their suffering justification. Yet the vast majority of people who are sure that they would leave Omelas, and pat themselves on the back for their rectitude in so imagining, spend very little time concerned for this huge number of actual suffering children. Or, to put it in a humorously crass manner, the exchange rate on child misery is much worse in the real world than in Omelas, and yet none of these righteous readers seem to care all that much. (This relates to your post a few days back about your frustration, which I share, at people who think that being as concerned about people far away as people close by is somehow contrary to morality and compassion.)
Really, though, I think any moral system that is purely utilitarian or purely consequentialist is going to lead to ethical injunctions that are at best counterintuitive and at worst utterly repugnant. Yud’s argument is a good example of this for pure utilitarianism, but our discussion a couple years back about Kantian morality in law and governance, and the notion of a government that won’t shoot down a hijacked plane even if doing so would save tens of thousands of lives is an analogous case for consequentialism/deontology. Or, say, the notion that because acts are morally neutral there’s nothing inherently wrong with causing the extinction of the human race as long as one *truly* didn’t mean to.
Act vs. rule utilitarianism could be one way to look at it, but it’s distinct from my objection. I don’t even think that the torture scenario serves the “greater good” better than the dust speck scenario, so I don’t think the problem with it is that it’s an exceptional case. I feel like a correct, non-broken version of act utilitarianism should choose dust specks over torture, too.
I’m still not sure about the best way to formalize that feeling. Some of it may have to do with empathy. The simplest utilitarian treatment of empathy is that it’s just another kind of pain, albeit one that can do good by causing people to help others. If the 3^^^3 people in the torture scenario knew about the tortured person, and felt awful about it, this would be bad, but (on this account) this problem could be removed by just preventing them from knowing about it. This doesn’t actually feel like it addresses how we feel about empathy, though. Most people feel that ordinary pains like toothaches are things they straightforwardly want to avoid, but I don’t think most people want to avoid empathetic pain in the same way. Part and parcel of the emotion is a desire for its cause to be gone; when in empathetic pain we don’t think “I wish I didn’t hurt” but “I wish they didn’t hurt.” It seems to me – though I don’t know how to formalize this – that the fact that the 3^^^3 people would feel bad if they knew must be morally significant, even if they don’t know, because if they did know they wouldn’t feel like a removal of their empathy would solve the problem.
In other words, I guess I think that morality involves minimizing the number of situations that could cause empathetic pain if people knew about them. Note that the 3^^^3 dust specks are not such a situation. No one would naturally feel that this is a tragedy. (Yudkowsky would, but only on the basis of theory, not intuition. I’d bet that he wouldn’t really be in empathetic pain if such a thing happened, though who really knows.)
It’s possible that my focus on “things that cause empathetic pain” vs. “things that don’t” here is just a way of getting at the more general idea that not all pains can be lumped together. EY’s analysis involves the idea that small pains, when added up, equal large pains, and that if one doesn’t think this way one runs into absurdities (along the lines of “arbitrarily many people in pain state X are better than one person in pain state X+epsilon, where X and X+epsilon are very similar”). However, this doesn’t seem to accord with our actual experience of pain. For instance, some pains feel “bearable” and others feel “unbearable”; dust specks are a prototypical instance of the former and torture is the latter pretty much by definition. And – even in our own individual lives – we tend to act almost as if infinitely many bearable pains are less bad than one unbearable pain. Bearable pains are merely annoying, while unbearable pains feel fundamentally wrong or unjust – they interact with the moral sense in a way bearable pains don’t. (If I am feeling a very intense kind of pain I will usually have thoughts along the lines of “this shouldn’t be allowed to happen to people” – I don’t have anything like these thoughts, not even scaled-down versions of them, when dealing with bearable pains.)
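The contrast I’m gesturing at can be made concrete in code. This is just my own toy illustration – the pain categories, the numbers, and the ordering rule are all hypothetical, not anything from EY’s posts. The point is that if “unbearable” pains are compared on a separate tier from “bearable” ones (lexicographically, so bearable pain only breaks ties), then no quantity of dust specks ever outweighs torture, whereas a single additive sum forces the opposite conclusion:

```python
# Two ways of ranking how bad a world-state is, given a list of
# (intensity, kind) pains. All values are hypothetical stand-ins.

def additive_badness(pains):
    """EY-style aggregation: every pain contributes to one sum,
    so enough tiny pains eventually outweigh any large one."""
    return sum(intensity for intensity, _kind in pains)

def two_tier_badness(pains):
    """Lexicographic aggregation: compare total 'unbearable' pain
    first; 'bearable' pain only breaks ties between equal amounts
    of unbearable pain."""
    unbearable = sum(i for i, kind in pains if kind == "unbearable")
    bearable = sum(i for i, kind in pains if kind == "bearable")
    return (unbearable, bearable)  # tuples compare lexicographically

# One person tortured vs. a vast number of dust specks
# (ten million as a finite stand-in for 3^^^3).
torture_world = [(1_000_000, "unbearable")]
speck_world = [(1, "bearable")] * 10_000_000

# Additive view: the specks come out worse than the torture.
assert additive_badness(speck_world) > additive_badness(torture_world)

# Two-tier view: the torture is worse, no matter how many specks.
assert two_tier_badness(torture_world) > two_tier_badness(speck_world)
```

Of course this just restates the intuition rather than justifying it, and lexicographic orderings have their own well-known oddities (e.g. a hair’s-width line between the two tiers), but it at least shows the “bearable vs. unbearable” view isn’t incoherent.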
I always interpreted the Omelas story as asking the reader “yeah, you feel like you’d be one of the ones who walk away, but would you really be? Really?” I have no idea if that was the intent, though.
(via dagny-hashtaggart)
