
youzicha:

nostalgebraist:

But my answer is “population ethics is a hard and unsolved problem, and we shouldn’t pretend it isn’t.”  The fact that we seem to be forced to accept at least one of these “absurd” conclusions* should suggest we don’t have a very good handle on how this subject should be done.  More work needs to be done to either find a loophole in the logic or to actually argue why certain intuitions are more worth keeping than others.  I am wary of being too ready to bite a bullet just because at the very first glance it looks a little nicer than the other bullet, and especially wary of people who insist that biting their preferred bullet is the “obvious” choice; these are not decisions to be made lightly.

To be clear, I don’t believe that picking torture over dust specks is obviously right. The whole issue is really confusing, I don’t know which side is right or wrong, and it’s probably a good thing that the thought experiment does not fit inside our universe.

What I disagree with is the claim that it is obvious that one should pick dust specks, or that rejecting dust specks is “one of the more obvious awful ethical ideas that’s come out of LW” (your original claim). Or, a fortiori, that someone who picks the torture option is unusually horrible at moral philosophy and probably needs to be dissuaded from ever trying to apply any of their atrocious theories.

You’re right that I’m being kind of inconsistent here – I attack Yudkowsky for calling his conclusion obvious, yet I use that very word in reference to mine!

I guess, on reflection, what I don’t like is not that Yudkowsky chooses torture, but that he says (or said in 2007 anyway) that it was the “obvious” answer.  This kind of tendency to barrel forward into counter-intuitive conclusions worries me.  (It is characteristic of his entire approach, such as the way he spends so much time on AI risk, which most people find a counter-intuitive worry.)

I guess I think there is an asymmetry here: I am less scared by someone who calls common-sense conclusions “obvious” than someone who calls counter-intuitive bullet-biting conclusions “obvious.”  I guess I tend to think that there are more bad ways of modifying common sense than good ways.  There are many potential ways for an abstract theory to be wrong, perhaps disastrously so in some cases, while common sense is only as bad as it’s proven itself to be over the course of generations.

(via youzicha)

Maybe it’s just that I’m tired, but it feels like I’m speaking a totally different language from everyone who’s trying to argue with me about dust specks.  I don’t understand why population ethics should be different for large numbers because “humans don’t understand them” (??), nor how we are supposed to derive the large-number version if we “don’t understand large numbers.”

I am trying to make simple arguments based on things like symmetry.  Utilitarians normally like these: they tend to say “everyone should be weighted equally in the moral calculus” and happily apply this to a world with 7 billion people without saying “wait, what if intuitions about equality break down because we don’t understand large numbers?”  Why is the large number thing coming up in this particular case?

I’m too tired for this stuff, I’m going to bed.

Bringing up dust specks again was completely my fault, if anyone needs someone to blame :P

sinesalvatorem.tumblr.com →

ozymandias271:

nostalgebraist:

multiheaded1793:

ozymandias271:

chroniclesofrettek:

templesforwhores:

I got super derailed by the part in the last Captain America movie where he steals his old costume from the museum and the elderly security guard says, “I am so fired.”

Like, does it get sorted out? What if he really needed that…

I like villains, but vaguely think that LW-ish types are unusually horrible at moral philosophy, and probably need to be dissuaded from ever trying to apply any of their atrocious theories. This includes you,
ozymandias271
.

P.S.: oddly enough, EY himself makes a much better impression in that regard. Idk. He just does.

P. P. S. Pragmatism » contrived moral dilemma porn.

Wait, EY himself makes a better impression than his fanbase on ethical issues?  Eliezer “torture > 3^^^3 dust specks” freakin’ Yudkowsky?

I’m not trying to be antagonistic here, I’m just curious, since that particular application of naive “add ‘em up” utilitarianism seems like one of the more obvious awful ethical ideas that’s come out of LW.  I mean, I guess it’s harmless in the sense of being obviously impractical (and thus an example of “contrived moral dilemma porn”), but it suggests an ethical view that could potentially imply bad things in more realistic contexts (in that he thinks a bunch of inconveniences can add up to a single truly awful thing, even if every single person being inconvenienced would be like “this is nbd, please don’t torture anyone for my sake”).

I disagree? Like… every time you drive a car, you’re saying “I accept this (admittedly fairly small) risk that I will kill someone else in exchange for getting to the place I want to go ten minutes faster.” People make that sort of tradeoff— small chance of Really Horrible Thing happening to another person vs. large chance of being inconvenienced— all the time. They just don’t like having it pointed out to them that they’re making it.

It’s true that people do that, although I would be more likely to consider it a common flaw in human behavior than something to base a theory of ethics on.

More broadly, the problem here is not with comparison of Small Badthings to Big Badthings (or not merely with that), it’s also with adding up Badthings across people.  This seems to me like it runs into an issue with preferences: if I were one of the dust speck people in the thought experiment, I would say, “please let me and all the others get the dust speck, rather than torturing that one person.”  If all 3^^^3 people were copies of me, they’d all say that, and indeed I’d imagine most people would intuitively say that unless they’d read some argument to the contrary.  So we have this strange situation where 3^^^3 people disprefer this choice individually, yet it’s still supposedly “better,” which seems bizarre.  (I’m not strictly talking about the psychological harm the people would suffer if told about the tortured person; even if they didn’t know what was going on, we could still say they’d prefer the dust speck choice if they had been asked, so it’s still bizarre to say that this thing no one would prefer if asked is the best choice.)

Note that we don’t have this problem if I am asked “do you want a one in 3^^^3 probability of someone else being tortured or a one in one probability of getting a dust speck in your eye?”  I might still choose the latter, but only out of caution; it wouldn’t feel like “oh no, that option is terrible!”, the way I’d feel in the previously mentioned case.  So this is different from (and easier than) population ethics.  Population ethics is hard and no one knows how to do it.

(via bpd-dylan-hall-deactivated20190)


As far as Roko’s zany ideas go, everyone remembers the Basilisk, but too many people forget about his “Quantum Billionaire Trick,” introduced in the same post.

The gist of the idea was that you make some gamble against astronomical odds using a quantum random number generator, so that if you believe in Many Worlds there will be some copy of you that generates the winning answer and reaps lots of money.  It probably won’t be you, but that copy will then go on to use its vast wealth to fund the creation of a Friendly AI.  Then (if I’m understanding Roko’s post correctly – it is pretty confusing) you get the AI to generate a huge number of uploaded software copies of your mind based on a version of you prior to making the gamble; thus “almost all” of the versions of you in the multiverse will be versions that remember deciding to make the gamble and then ending up in uploaded AI utopia.

So if you just make the gamble and lose, that sucks, but there are billions of you who remember deciding to make the gamble and then waking up in upload land (sort of an extension of the old thought experiment where you agree to have your brain copied and then wake up as the copy, because after all, someone has to wake up as the copy – thus it would be stupid to agree beforehand to make the copy do things you wouldn’t want to do, because for all you know you might be destined to be it, etc).

I think this was supposed to allow you to avoid being punished by the Basilisk, though I’m not entirely clear on the logic there.

Roko sure is a character!

uncrediblehallq:

sharkyminimalist:

uncrediblehallq:

wanderingwhore:

amaranththallium:

sharkyminimalist:

I have been to LW meetups where people tried to make another member uncomfortable by talking about sex. I have been to meetups where people sneak off into a back room. I have been to a meetup where nudity was proposed in complete seriousness. I have had people try to pressure me into letting them use my house for their sex things.

I have heard people absolutely gloat about how uncomfortable they made someone in public doing BDSM things. I have heard people talk about the very public sex they have. I have had people act as if I am ridiculous when I try to stop a too sexual conversation or when I point out that maybe going from complete inexperience to hardcore BDSM is risky and probably really bad to push on another person.  

I’ve heard people say nasty, judgmental things about other people not being kinky enough. I’ve had people try to get me to share publicly personal things, or share them without permission. I have had someone literally throw a fit that I wouldn’t have sex with them, not having any established relationship or even friendship.

I’ve heard more stories, not mine, of this nature multiple times.

This is all JUST in the context of less wrong.

Perhaps I have just had the shittiest luck ever, and maybe this is just me seeing only the tiny fraction of bad things but not getting any of the good things. If that’s true and I’ve gotten the wrong impression completely, I’m glad that it’s better than I thought.

Insisting I don’t care what other people do consensually in private probably won’t change your mind. But honestly, I don’t care. The problem here is other people have expected me to do things their way many times and seem to not find it a problem.

Okay, that is horrible. Have you tried talking about your concerns with someone like thepokeduck? I bet she’d be on your side.

(via uncrediblehallq)

mttheww asked: I feel like the rationalist community on tumblr has somewhat of an above-average tendency to use random text posts as springboards for long, drawn out arguments that don't have a lot to do with the op--does this jibe with your observations?

Kind of?  I think that’s pretty common on tumblr in general.

But I also think the rationalists have an above-average tendency to reblog posts to argue with the OP when there’s no indication the OP wanted an argument, which is disapproved of on tumblr more generally.

As with so many things, though, this is related to a problem with tumblr’s design — you have a bunch of people who like to argue with one another, and a system based on reblogging, and there’s no way to fork off a separate thread to argue with your friend about the post they reblogged (which I think is what these people want to do) without spamming your argument all over the OP’s dash.  (I’ve always thought it would be nice to have a social site that distinguished between “rebloggable content” and “conversations” and gave the two different rules.)

Two memorable dreams for the price of one last night, thanks to medication-induced awakening at 3 AM!

Dream #1: I went to some kind of Less Wrong meetup (apparently the obsession is spreading to my dreams now).  Everyone there was really nice.  At the end someone gave me the keys to their house so I could go by and drop something off there, because they were going to sleep somewhere else (by dream logic, this was all completely normal behavior and not unusually trusting).

What was not normal was their house: it was this giant, opulent mansion that looked more like the White House or some other important public building than a private residence.  There was a courtyard filled with carefully cultivated gardens and fountains (the kind with statues) which was so large that it took me a while to find my way to the actual entrance of the house, which was up a long set of steps.  I had the thought, “wait, are all these Less Wrong people super-rich?  That would certainly change my view of them!”

Inside the house, I ran into a cat, who said “hello” to me in a very realistic meow-based approximation of speech.  I went on to have a stilted conversation with it, in which it appeared to be relying on a set of broadly applicable phrases (although it never actually repeated itself verbatim).  I concluded that its owner had elaborately trained it to follow a set of rules similar to a chatterbot program.  I muttered “you’re just a Markov chain!” and it vigorously denied this.

Dream #2: I was at my parents’ house, and I went to the bathroom.  After washing my hands, I turned and saw a person, frozen in place near the doorway in an awkward, “inhuman-looking” posture.  It looked like the kind of shot you’d get in movies to show that someone had just seen a ghost or other scary supernatural being.  I was freaked out for a while, but after a minute or so of standing still, the guy cheerfully introduced himself to me.  He turned out to be a really nice guy, and ended up having dinner with me and my family.  His appearance in my house, and the “perfectly still contorted posture” thing, were never explained.

Incidentally, if you interpret my full Wechsler score as an IQ (even though the two subscores are far apart enough that that is not advised), I’m in something like the lowest 15% of respondents to the Less Wrong 2014 survey, IQ-wise

Clearly I am just not smart enough to grok Yudkowsky’s brilliance, this explains everything :P