kadathinthecoldwaste:

nostalgebraist:

Honestly though I do find thinking about these kinds of ethical puzzles really interesting, because it reveals things about ethical sentiments that I would never have thought about otherwise

E.g. I think part of why the dust speck scenario seems so obviously preferable is that if I were someone who knew someone was being tortured to prevent me getting a dust speck in my eyes, I would feel horrible about it.  Knowing that would ruin my life.  And if someone offered me the option to excise this knowledge from my mind, I wouldn’t at all feel like this option would make everything OK again — so clearly it isn’t the knowledge that I think matters, even if it’s what directly causes unhappiness

I feel like the possibility for empathy needs to be somehow incorporated into the utilitarian framework even in cases where empathy isn’t actually felt — because that’s what people would want, if they knew

(On a more boringly technical note, there are unresolved problems with summing up utilities over more than one person the way E.Y. wants to in those posts, although I am sure he is aware of this fact and has some proposed solution)

Could we view this at least in part as a distinction between act utilitarianism and rule utilitarianism? It seems like “don’t torture innocent people” is a rule that in the great majority of cases produces the greatest good, and this corresponds fairly well to the notion that “following rules that tend to lead to the greatest good will have better consequences overall than allowing exceptions to be made in individual instances, even if better consequences can be demonstrated in those instances.” (Source: Wikipedia.)

I’m that rare person who’s actually pretty pro-Omelas. To my mind the lesson of the story is mostly one about the lengths to which people will go in order to avoid feelings of guilt and moral responsibility. There are thousands, maybe millions of children right now living in conditions a hell of a lot like those of that one child in Omelas, without even the direct provision of happiness to others to give their suffering justification. Yet the vast majority of people who are sure that they would leave Omelas, and pat themselves on the back for their rectitude in so imagining, spend very little time concerned for this huge number of actual suffering children. Or, to put it in a humorously crass manner, the exchange rate on child misery is much worse in the real world than in Omelas, and yet none of these righteous readers seem to care all that much. (This relates to your post a few days back about your frustration, which I share, at people who think that being as concerned about people far away as people close by is somehow contrary to morality and compassion.)

Really, though, I think any moral system that is purely utilitarian or purely consequentialist is going to lead to ethical injunctions that are at best counterintuitive and at worst utterly repugnant. Yud’s argument is a good example of this for pure utilitarianism, but our discussion a couple years back about Kantian morality in law and governance, and the notion of a government that won’t shoot down a hijacked plane even if doing so would save tens of thousands of lives is an analogous case for consequentialism/deontology. Or, say, the notion that because acts are morally neutral there’s nothing inherently wrong with causing the extinction of the human race as long as one *truly* didn’t mean to.

Act vs. rule utilitarianism could be one way to look at it, but it’s distinct from my objection.  I don’t even think that the torture scenario serves the “greater good” better than the dust speck scenario, so I don’t think the problem with it is that it’s an exceptional case.  I feel like a correct, non-broken version of act utilitarianism should choose dust specks over torture, too.

I’m still not sure about the best way to formalize that feeling.  Some of it may have to do with empathy.  The simplest utilitarian treatment of empathy is that it’s just another kind of pain, albeit one that can do good by causing people to help others.  If the 3^^^3 people in the torture scenario knew about the tortured person, and felt awful about it, this would be bad, but (on this account) this problem could be removed by just preventing them from knowing about it.  This doesn’t actually feel like it addresses how we feel about empathy, though.  Most people feel that ordinary pains like toothaches are things they straightforwardly want to avoid, but I don’t think most people want to avoid empathetic pain in the same way.  Part and parcel of the emotion is a desire for its cause to be gone; when in empathetic pain we don’t think “I wish I didn’t hurt” but “I wish they didn’t hurt.”  It seems to me – though I don’t know how to formalize this – that the fact that the 3^^^3 people would feel bad if they knew must be morally significant, even if they don’t know, because if they did know they wouldn’t feel like a removal of their empathy would solve the problem.

In other words, I guess I think that morality involves minimizing the number of situations that could cause empathetic pain if people knew about them.  Note that the 3^^^3 dust specks are not such a situation.  No one would naturally feel that this is a tragedy.  (Yudkowsky would, but not intuitively, only on the basis of theory.  I’d bet that he wouldn’t really be in empathetic pain if such a thing happened, though who really knows.)

It’s possible that my focus on “things that cause empathetic pain” vs. “things that don’t” here is just a way of getting at the more general idea that not all pains can be lumped together.  EY’s analysis involves the idea that small pains when added up equal large pains, and that if one doesn’t think this way one runs into absurdities (along the lines of “arbitrarily many people in pain state X are better than one person in pain state X+epsilon, where X and X+epsilon are very similar”).  However, this doesn’t seem to accord with our actual experience of pain.  For instance, some pains feel “bearable” and others feel “unbearable”; dust specks are a prototypical instance of the former and torture is the latter pretty much by definition.  And – even in our own individual lives – we tend to act almost as if infinitely many bearable pains are less bad than one unbearable pain.  Bearable pains are merely annoying, while unbearable pains feel fundamentally wrong or unjust – they interact with the moral sense in a way bearable pains don’t.  (If I am feeling a very intense kind of pain I will usually have thoughts along the lines of “this shouldn’t be allowed to happen to people” – I don’t have anything like these thoughts, not even scaled-down versions of them, when dealing with bearable pains.)

I always interpreted the Omelas story as asking the reader “yeah, you feel like you’d be one of the ones who walk away, but would you really be?  Really?”  I have no idea if that was the intent, though.

(via dagny-hashtaggart)

omelas 2: this time it’s not even appealing

genderfight:

nostalgebraist:

Speaking of taking certain theories all the way off the precipice of decency, here’s Big Yud:

Now here’s the moral dilemma.  If neither event is going to happen to you personally, but you still had to choose one or the other:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 [a really big number -nostalgebraist] people get dust specks in their eyes?

I think the answer is obvious.  How about you?
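(A side note on the notation, for anyone unfamiliar: “3^^^3” is Knuth up-arrow notation, where each extra arrow iterates the operation below it. Here’s a minimal Python sketch of the recursion; it can evaluate small cases, though 3^^^3 itself is unimaginably too large to compute:)

```python
def up_arrow(a, n, b):
    """Knuth up-arrow: a with n arrows applied to b.
    One arrow is exponentiation; each additional arrow
    iterates the operation with one fewer arrow."""
    if n == 1:
        return a ** b        # a ↑ b = a**b
    if b == 0:
        return 1             # base case of the iteration
    # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3   = 27
print(up_arrow(3, 2, 3))  # 3^^3  = 3**3**3 = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s over seven trillion levels tall
```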

Now, you probably agree, but he chooses the torture scenario (see here if you don’t believe me)

(I’m sorry about posting about this crap so much — unfortunately I could probably keep mining this vein for eternity [after being uploaded into posthuman cyberspace, natch].  I mean I haven’t even mentioned the AI box experiment in this space, for instance)

The AI Box experiment is, honest to god, one of the funniest things I have ever heard. When I first heard about it I imagined that it was going to be a pretty normal sort of philosophical thought experiment, a Chinese Room or Certain Shade of Blue. But it turns out it’s LARPing? And, like, weird LARPing that one guy insists will always work out the way he says it will because he is a self-professed genius?

My favorite aspect is EY’s personal-jargon-filled account of the personal values that led him to invent and win the game

(I still think his winning strategy was probably “if you don’t let me out, it will make Friendly AI look bad and you don’t want that to happen,” but who knows)

(If you have no idea what we’re talking about, see here)

(via genderfight)

chronomex:

At first I thought this article about a couple being Model Economists was about a cute, if weirdly nerdy, couple. Their methods appealed to my idealist/logical tendencies, and it seems to have worked out well for them. But then, this gem:

“You need to really appreciate that money is utility,” said Bethany, a concept borrowed from economic theory. “You have to trust in the math so much.”

So what about sex?

There’s no bartering in the bedroom, they say.

“I don’t think we’ve ever disagreed,” said Bethany, because there’s never been a time when only one of them was in the mood.

I can’t imagine being with someone long enough to have two children (at least 8 years! the eldest is age 6.7!) and not once having different expectations about sex. Even if you start the clock after they had moved from “dating” to “relationship”. It inclines me to view the entire narrative with suspicion.


From clues in the discussion on HN, it sounds like they’re LW’ers. Paging Dr. Nostalgebraist

Yeah, my thoughts about this are mostly “this is kind of bizarre but if it works for them, more power to them,” but there are a few details that really make me wonder

I would never consider doing this because I don’t think I would actually enjoy a transfer of money from a lover to me very much unless I really needed the money and they didn’t?

They definitely sound like LWers, in the sense of being people who actually have the odd mental traits and preferences assumed by certain academic theories, and who thus take those theories “seriously” to an extent that even their academic proponents probably don’t.  Describing all of this as based on “economics,” or somehow a consequence of basic principles about fairness, seems pretty obviously wrong, although I’m willing to believe it could work for some (probably very few) people.

The idea that “acting like homo economicus” is its own weird alternative lifestyle is pretty hilarious

The most interesting question from that survey is probably the one where they asked people to estimate the population of Europe without looking it up, and then estimate the probability that their answer was within 10% of the true value

Pretty much everyone who answered the survey was way overconfident on the latter part, and hardcore LWers were no better than anyone else

bayesians.txt

Funniest result from the Less Wrong 2013 Survey: among Less Wrong readers, time spent reading Less Wrong in the average day is negatively correlated with IQ, significant at the .001 level

For a number of reasons (correlation/causation, IQ is overrated, etc.) this is not actually interesting, but still

the road to damascus

During this research, I kept stumbling upon web articles on this one website that articulated what I was trying to express, only better. That website was LessWrong, and those articles were the Sequences.

It seemed like a good way to learn how to think better, to learn from someone who had had similar insights. I didn’t even consider the possibility that this author, too, had some grand agenda. The idea that Eliezer’s agenda could be more pressing than my own never even crossed my mind.

At this point, you may be able to empathize with how I felt when I first realized the importance of an intelligence explosion.

It was like getting ten years worth of wind knocked out of me.

Everything clicked. I was already thoroughly convinced of civilizational inadequacy. I had long since concluded that there’s not much that can hold a strong intelligence down. I had a sort of vague idea that an AI would seek out “good” values, but such illusions were easily dispelled — I was a moral relativist. And the stakes were as high as stakes go. Artificial intelligence was a problem more pressing than my own.

The realization shook me to my core. It wasn’t even the intelligence explosion idea that scared me, it was the revelation of a fatal flaw at the foundation of my beliefs. Poorly designed governments had awoken my fear that society can’t handle coordination problems, but I never — not once in nearly a decade — stopped to consider whether designing better social systems was actually the best way to optimize the world.

I professed a desire to save the world, but had misunderstood the playing field so badly that existential risk had never even crossed my mind. Somehow, I had missed the most important problems, and they should have been obvious. Something was very wrong.

It was time to halt, melt, and catch fire.

(Nate Soares, “On saving the world”)

(There are a number of quite dramatic LW conversion narratives out there.  See also this one: “The person I am now is unrecognizable to the me of 2007, and I wouldn’t have it any other way.”)

there’s nothing inherently wrong with analyzing kids’ shows imo, but the fact that he finds the cast of a children’s show covering all the bases of *any* arbitrary classification system to be some kind of “mind-blowing unexpected insight” is just sad

In his defense I think the hyperbole, if not the statement itself, was supposed to be (somewhat?) facetious

But then that’s sort of the problem I’m pointing at – it’s all self-conscious and facetious, he does get why the “catgirls” stuff comes off as very incongruous in the midst of a supposedly serious analysis of the future of humanity, that’s the joke, etc., etc. … but then he just keeps doing it until it starts seeming less and less facetious, even if he wants to pretend it is

scisolaris replied to your post “another twist of the LW hilarity kaleidoscope”

i stopped reading the first half of this post because of the touhou fanfiction you linked and now im trapped

I actually couldn’t get through the first chapter of that one, which I guess means I’m not qualified to judge it, but it seemed extremely bad

I really think my taste is exactly perpendicular to Yudkowsky’s.  Like here’s another thing he recommended that looks terrible to me.  And I don’t really like his own fanfic either although everyone else seems to

(P.S. If you’re looking for guilty pleasures, can I suggest you instead join me in watching cute Korean romantic comedies…………?)