
mttheww asked: what would you say is the best way for someone who doesn't give a fuck about, say, cryogenics or singularity--or transhumanism in general, really--to read lw and get something useful out of it? or is stuff like that not really that big a part of lw anyway? (I've never read lw before and pretty much all I know about it is what I've gleaned from tumblr)

Stuff like that is not a huge part of it.

The core material of Less Wrong is “The Sequences,” a giant set of blog posts by Eliezer Yudkowsky that range from fundamental (arguably pretty trivial) philosophical arguments to polemical expositions of quantum mechanics to “fun theory” (attempts to speculate about how to make a transhuman-ish future enjoyable and not boring) to weird aphoristic advice about self-improvement and “becoming a better rationalist” that often ends up sounding either like Yoda or a wise mentor character in a shonen anime.

It’s a massively mixed bag, and which parts of it, if any, will be useful or interesting to you depends on your preferences.

A few starting points: “37 ways words can be wrong” (and links therein) is kind of a hub for the fundamental philosophy stuff, which I generally find pretty agreeable and sensible, though you may or may not find it useful.  And the Quantum Physics Sequence is a cool, unconventional way of explaining quantum physics, written from an adamant “many-worlds interpretation” perspective.  It’s a controversial sequence because actual physicists are divided on whether the many-worlds interpretation is correct or not, but if you’re not a physicist I think it can give you a nice alternative perspective to the stuff you’ve probably heard about particles and waves and uncertainty.  (Just don’t take it too seriously.)

Anonymous asked: tbh every tumblr/lw fight is weird as hell, because it invariably just involves defending things maybe two people actually believe to the death because hastily drawn vague contingent borders are more important than not declaring people beyond the reach of reason and acting like they're zombies

I’m not sure I understand what you mean?  (What are these rarely believed “things” that are being defended?)

The only tumblr/LW fight I’ve personally witnessed is the recent one with storming the ivory, and I think basically what was going on there was that storming the ivory cannot stand the way LW people talk when they argue and so once the conversation moved into even slightly argumentative territory there was no hope for any reconciliation or reconnection.  Not sure how that jibes (or doesn’t jibe) with your assessment.

“it’s not a big deal if I upset people from 4chan, they’re like barely even people”

theunitofcaring:

“Sheldon Cooper runaround’
"Martians”

this is fucking code for ‘neurodivergent people”

this is fucking code for ‘autistic people’

and not even good fucking code, because I never catch subtext and I’m catching this loud and clear

we do not deserve to exist, we are robots, we are aliens, we are punchlines

barely even people

I GET IT, OKAY? I GET THE FUCKING MESSAGE

barely even people

Sheldon Cooper

barely even people

you’re talking about me

You know, I get so mad every time I see storming the ivory on my dash, and at some point I realized that I could just blacklist their name and stop this from happening

And then instantly realized that I didn’t really want to do this because I honestly enjoy how exquisitely bad their posts are, it’s perfect hatereading material

They’re so condescending and elitist, and they think of everything in such simplistic, one-dimensional, nuance-free terms despite constantly asserting that they stand for the opposite of all that.  Their entire web brand is just so perfectly terrible, and there are times when spite is the only enjoyable emotion available to me, so … I swear I’m feeling something like actual kismesissitude here


(FYI: if it bothers you that I’m hijacking your angry post with this expression of joy, I apologize.  I know you’re closer to the issues STI talks about than I am and probably don’t enjoy any aspect of this experience at all.  If you want me to remove my reblog and post this as its own separate post, please let me know.)

Anonymous asked: RE: Robots.txt, if MIRI's work results in a singularity (non-trivial probability), then people will want to understand Eliezer Yudkowsky's historical context -- including his popular critics.

If anyone wants to personally archive my posts about Yudkowsky/Bayes/etc. for this purpose, feel free.

I don’t want to open myself up to the Internet Archive because it would make it harder to do a certain thing I sometimes do with this blog, which is to make posts about personal stuff I want to talk to tumblr people about but don’t want out there forever, and then delete them later.  I understand that this is inherently a risky procedure because someone could always archive the posts before I delete them, but I accept that risk.  I don’t feel any responsibility to increase the risk by letting myself be automatically archived.  If some people feel that, say, my Yudkowsky posts should be archived, they can do that themselves.

Anonymous asked: Is this stance distinguishable from "the hacked Jennifer Lawrence pictures exist no matter what you and I do, and there’s nothing wrong with using information that’s out there, especially if you want to, like, masturbate"?

Well, one difference is that those were, as you say, hacked.

If you put something on the public internet deliberately (which Lawrence didn’t), you are responsible for the fact that it is public.  Even if the internet archive didn’t exist, someone could still download your content and then put it up later.

I realize that this is a complicated subject and there are cases that have their own logic.  E.g. if someone says they don’t want porn blogs to reblog their selfies, I don’t think porn blogs should reblog their selfies.  But that’s because reblogging is a specific action above and beyond simply viewing the content.  It’s reasonable to say “don’t reblog this” about something you’ve made public in a way it is not reasonable to say “don’t look at this.”  (Analogously, I’m open to arguments that what I say or do about the old Yudkowsky posts is dickish or whatever, but I’m not open to arguments that merely acknowledging their existence or their public status is dickish.)

I’ve made plenty of embarrassing posts on the internet.  A lot of them are under old usernames that I don’t mention for a reason, but you could probably find some if you searched my current username, even.  And if someone did find out an old username and started talking about stupid posts I made on forums when I was 14, I would think “well, that’s annoying,” but not “you mustn’t do that!  This is an injustice!  Please stop!”  I made that content public and even if I wish I hadn’t, I have accepted the fact that I am responsible for everything I make public on the internet.  Ultimately, whatever happens as a result is on me.

Anonymous asked: Can you please edit your robots.txt file? Your blog must be archived on the Internet Archive for posterity's sake.

Must it?  Why?  I mean why my blog in particular as opposed to any other tumblr?  Who do you imagine is going to be looking at it, and when, and why?

If you really mean this sincerely, could you explain the reasoning a bit more?

My sense is that this isn’t a real request, but instead a jab at me (and possibly others) for looking up old Yudkowsky writing on the Internet Archive.  But look: the Internet Archive exists no matter what you and I do, and there’s nothing wrong with using information that’s out there, especially if it’s relevant to something important like whether a charity is worth giving money to.

And if you don’t want your stuff archived, which many don’t, well, you can always use a robots.txt.  If someone didn’t and now regrets it, that sucks for them, but the information is now out there for better or for worse.  Refusing to look at it would not do any good (under the plausible assumption that Yudkowsky is not reading these tumblr posts, it won’t even make him happier, because he won’t know one way or the other).

su3su2u1-deactivated20160226 asked: It's not just HPMOR, it's like whenever Yud writes about science he goes off the rails. This is from the essay you linked: "You should be familiar with the Design Signature of natural selection. optimization by the incremental recruitment of fortunate accidents, following pathways in fitness gradients which are adaptive at each intermediate point and which are directed in the maximally adaptive direction at each intermediate point." Gradient descent is IN NO WAY the signature of evolution.

I dunno, it depends on what you mean?  Say, the replicator equation (a simple model of selection, with no mutation) can be formulated as gradient flow in an appropriate metric.
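For the curious, here’s the standard statement I have in mind, sketched in my own notation (if I’m remembering the conditions right, the gradient-flow result requires the fitnesses to come from a potential, e.g. linear fitness with a symmetric payoff matrix):

```latex
% Replicator dynamics on the simplex: x_i is the frequency of type i,
% f_i(x) its fitness, and \bar{f}(x) the population mean fitness.
\dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right),
\qquad \bar{f}(x) = \sum_j x_j f_j(x).

% If the fitnesses derive from a potential, f_i(x) = \partial V / \partial x_i
% (e.g. V = \tfrac{1}{2} x^{\top} A x with A symmetric), then the dynamics
% are the gradient flow of V with respect to the Shahshahani metric
% on the interior of the simplex:
g_{ij}(x) = \frac{\delta_{ij}}{x_i},
\qquad \dot{x} = \operatorname{grad}_{g} V(x).
```

So in that restricted setting, selection really is “going uphill as fast as possible,” just with respect to a non-Euclidean metric.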

I guess it’s a big step from saying that to saying that it’s the “Design Signature” of natural selection, but then I’m not really sure what the phrase “Design Signature” means.  Certainly not every gradient descent process is an instance of natural selection, so it would be silly to say “ah, look, gradient descent, that means natural selection has happened here.”

So the implication can’t be “all gradient descent is natural selection.”  But “all natural selection is gradient descent” might be true in some technical sense.

Anyway, if you leave aside the strange phrase “Design Signature,” I get the sense that he’s mostly just saying that natural selection is not teleological (i.e. anything it builds has to get built by a sequence of steps that each individually increase fitness).  He does go beyond that to say that the steps are maximally fitness-increasing, which might be true at least in some sense (see above), although I don’t know if it holds true for models that incorporate mutation.

I agree with you that this is typical Yudkowsky technobabble, in that it’s ambiguous because it uses non-standard terminology, and the possible interpretations range from “true but trivial” to “possibly true and interesting but should be stated more precisely” to “false.”

su3su2u1:

nostalgebraist:

theunitofcaring replied to your post: The “effectiveness of MIRI” debate see…

He was, I think, 19 when that was written.

I’m hearing conflicting things about this.  You and slatestarscratchpad say he wrote it when he was 19.  A friend of su3su2u1’s says that whenever it was written, it was still being circulated (as a job posting) in 2006.  And it cites LOGI, which wasn’t written until 2002, which seems to mean it couldn’t have been written until 2002 (when EY was 22/23)?

So, I dunno.

Since there seems to be uncertainty, I took a look at the wayback machine, and it looks like it was on the SIAI website in 2006.  

(via su3su2u1-deactivated20160226)


The “effectiveness of MIRI” debate seems to have died down, and I didn’t really want to get into it anyway, but I did just remember that there was one thing I did want to link.

I think anyone interested in the “effectiveness of MIRI” question, and in the question of exactly what sort of organization MIRI is, should read the entirety of Yudkowsky’s essay So You Want To Be A Seed AI Programmer.

In fact, it’s probably worth reading even if you aren’t interested in those issues, because it is a fascinatingly strange and offputting document.  I don’t know exactly when it was written, but it refers to “SIAI,” the organization that would later become MIRI.  And it’s very clear about its goals: the actual creation of an actual Friendly AI savior machine (technically, the “seed” that would self-modify into one).  At the time this was written, Yudkowsky’s vision for SIAI was not an academic research institute or an awareness-raising institute – it was a team of superhuman, super-ethical workaholic super-geniuses, ascetically devoted to a single task.  Literally (and hilariously) analogized to the Fellowship of the Ring.

And note too the odd choice of background knowledge he wants these superheroes to have.  They need to know “evolutionary psychology” and “information theory” and “Bayesian statistics.”  What about algorithms, computational complexity – what about actually knowing how to program AI?  There’s a great moment of bathos when you reach the “computer programming” subsection of the background knowledge section and it includes things like “Java programming (that’s probably what we’ll end up doing it in).”  Our savior machine will be written in Java?  Or:

“Any kind of experience working with complicated dynamic data patterns controlled by compact mathematical algorithms - some of the interior of the AI may end up looking like this”

This might as well be a string of randomly chosen buzzwords.  Elsewhere in the essay Yudkowsky asserts that a seed AI programmer must essentially devote themselves completely, body and soul, to the all-important task of creating the seed AI.  But what is the promising project idea that deserves such devotion?  Something “written in Java” involving “complicated dynamic data patterns” (as opposed to simple, static data patterns?) and “compact mathematical algorithms” (much better than non-mathematical algorithms, I assure you).

This document is absurd.  It boggles the mind.  I kind of wonder if it is some sort of hoax or mean parody, although if so it is a very skilled imitation of Yudkowsky’s voice.

And yes, Yudkowsky and SIAI (now MIRI) have changed in the many years since this document was written.  But I think it is important to consider their track record of following through on claims about future performance.  Yudkowsky did not say “I will found a modest research institute that trickles out somewhat interesting math preprints at a slow rate by academic standards.”  He said “I will found a dream team of fantasy novel heroes who will use their burning force of will to create a savior machine.”  And now we’re arguing over whether what he actually did is good enough or not.  Either way, though, it’s definitely not what he said he was going to do.

Why should we trust him this time?  He refers to donors in “So You Want To Be A Seed AI Programmer”; presumably some poor souls actually gave him money in the hope of helping him establish his dream team.  He didn’t do that, and he’s still asking for money.  Take that into account.