Install Theme

2572 A.D., Matrioshka Brain Gamma-5, Subsection 18593485: Fevered debate rages among posthuman superintelligences over the precise meaning of Saunt Yudkowsky’s infamously cryptic, presumably allegorical remarks about the “Dark Lord Potter Forums”

For a change of pace, here’s Big Yud with the Thursday Nite Rant:

Please ignore all the reviews suggesting that my offer to do the moral equivalent of shaving my head on Youtube if a charity meets its donation goals will bring about the downfall of all Harry Potter fandom everywhere. This is a fake backlash coordinated by a troll forum called Dark Lord Potter which hates hates hates MoR. While I do thank everyone who jumped to this fic’s defense, having these sorts of arguments in the reviews is not the way I want to increase my review count. Thanks to Dark Lord Potter for helping to keep this fic #1, though.

-HPMoR Author’s Notes Ch. 73-75

It seems that someone signing themselves “DLP Anonymous” - i.e. Dark Lord Potter - edited my Wikipedia article to note that I wrote “mediocre Harry Potter fanfiction”. Stay classy, DLP! I’d be tempted to retaliate in kind, only the Dark Lord Potter forum doesn’t have a Wikipedia entry, because they fail the notability test, because nobody ever mentions them in newspapers or anything, because they’re completely unimportant.

-HPMoR Author’s Notes, Ch. 50

I am proud to announce that this fic now features celebrity endorsements from New York Times bestselling authors David Brin and Eric S. Raymond.

(Of course the Dark Lord Potter forum still has this fic in the Recycling Bin.)

-HPMoR Author’s Notes, Ch. 30/31

I think there’s a recurring feature in my life where I encounter charismatic, controversial people who tend to inspire very polarized reactions, and end up feeling conflicted about them, which doesn’t put me in sympathy with either of the poles.

Yudkowsky is an example: I give him a lot of shit, but I’d be lying if I said his writing didn’t resonate very intensely with me when I first discovered it.  A lot of his more mundane posts about “rationality” were essentially saying things that I believed but had resigned myself to thinking no one else did.  I write about him negatively now because I wish there were more people who had the qualities I like about him without the many, many qualities I dislike about him.  It’s different from the perspective of someone who just sees him as a risible dweeb.

One of the reasons I like The Instructions so much is that it is about one of these kinds of people, and depicts him as both compelling and scary, and just kind of assumes that the reader will be endlessly fascinated with him.  It’s not lauding him, it’s not satirizing him, it’s not making some point about how he’s good or bad or even somewhere in between, it’s just presenting some personality traits and rhetorical styles that really exist and saying “I hope this stuff is as fixation-worthy for you as it is for me.”

It’s not like the author is holding back and suspending judgment … more that his judgment of his creation is largely “wow, holy shit”

And I really get that.  I relate to that “wow, holy shit” a lot more than “this person is clearly wrong” or “this person is clearly right” or “the truth is always somewhere in the middle” or “ah, what a complex issue” or etc.

I’ve mentioned this before but I check out a lot of books from the university library, often long before I have time to read them.  Checking out a new book is kind of an easy, costless way to reward myself for getting through the day I guess

Today’s acquisition was Deirdre McCloskey’s “The Bourgeois Virtues” which I checked out b/c everything I’ve read by McCloskey has been gold and it’s supposed to be her magnum opus, but it sounds bizarre and possibly terrible and got mixed reviews at best, and how could that combination not make a person curious

Like check out this table of contents

[image: table of contents of The Bourgeois Virtues]

This is going to be an … experience

more on science, creativity and pop bayesians

As a follow-up to my last post: I guess one of the reasons I am wary of “rationality confers worldly success” ideas is that I work in science, and people who talk about rationality conferring worldly success tend to use science as their ideal, or closest-to-ideal, case.  Surely if Less Wrong-style rationality works for anything, it should work for science, right?

But this kind of rationality, which is focused around being unbiased and interpreting evidence in light of your existing ideas (“priors”), largely neglects the part of scientific thinking that involves coming up with new ideas and thinking about them.  (I talked about this a bit at the end of the long Bayesianism post.)  This is a problem, because at least in the kind of work I’ve done – mostly theoretical work involving the methods of mathematical physics (although not always applied to physics) – idea-generation is the majority of the task.  My work doesn’t look anything like “updating priors in response to evidence,” even approximately.  My work consists of inventing ideas that I can’t yet have priors for because I haven’t thought about them before.  Typically, it goes like this:

  1. I drink a lot of coffee and eat breakfast.  I remind myself what problem I’m trying to solve.
  2. I sit there (or, often, I pace around or go for a walk outside) and, inside my head, repeat to myself brief sets of words connected with the problem.  Ideas about potential solutions to the problem pop into my head as if out of nowhere.
  3. Each time an idea appears, I ask myself questions like “why do I think this will help me solve the problem?” or “can I think of any simple reason why this idea won’t work?”
  4. If an idea passes all the simple tests I can think of to throw at it (which is rare), I will actually start to develop it with pen and paper.  If it works on pen and paper (even rarer), I feel gratified and maybe write some notes about it in LaTeX, then move on to the next step in the project.

None of this process can really be improved by making myself “less biased.”  Once a really good idea comes along, it’s often not hard to show that it works, and I rarely find myself irrationally resistant to it.  Conversely, when I spend too long thinking about a bad idea, it’s not because I was biased in its favor, but that a knock-down argument against it (itself a sort of “idea”) took a long time to occur to me.  The limiting factor isn’t bias, but the need to generate and test numerous bad ideas before I have one good one.  What would make me better at my job is not better judgment, but higher-quality (and more quickly generated) ideas-to-be-judged.
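To make the contrast concrete, “updating priors in response to evidence” means conditionalization, which can be sketched in a few lines of Python (a toy illustration, not anything from the post itself; the hypothesis names and numbers are made up). Note that the rule only reallocates probability among hypotheses you have already written down; it says nothing about where new hypotheses come from, which is the bottleneck described above.

```python
def conditionalize(priors, likelihoods):
    """Return posterior probabilities P(H | E) for each hypothesis H.

    priors:      dict mapping hypothesis -> P(H)
    likelihoods: dict mapping hypothesis -> P(E | H)
    """
    # Unnormalized posteriors: P(E | H) * P(H)
    unnorm = {h: likelihoods[h] * p for h, p in priors.items()}
    # P(E), by the law of total probability
    total = sum(unnorm.values())
    return {h: u / total for h, u in unnorm.items()}

# Two made-up hypotheses about a coin: fair, or biased toward heads.
priors = {"fair": 0.5, "biased": 0.5}
# Evidence E: we observed heads once.
likelihoods = {"fair": 0.5, "biased": 0.9}

posterior = conditionalize(priors, likelihoods)
# A hypothesis absent from `priors` (or assigned P(H) = 0) can never
# gain probability through conditionalization alone.
```

The machinery presupposes the full hypothesis list up front; inventing a new entry for that list is exactly the step the formalism is silent about.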

What does “rationality” have to tell us about how to generate new theories?  What I am doing when I have new ideas is not just some sort of brute-force search over all possible concepts – it’s something much more subtle than that, involving intuitions built up over past scientific experience, and involving whatever forces underlie human creativity.  Intelligent machines based on these kinds of principles – on guided idea-generation – would surely outperform those that simply brute-force their way through all possible ideas.  Surely this is a part of any good account of scientific thinking, even utopian, superhuman scientific thinking?  (Even with futuristic computing power, why use the brute force approach if a better one is available?  Why consume huge amounts of energy making your supercomputer run through all possible bad ideas, when a human can do the same effective work if you just give them coffee and breakfast?)

Whenever I read “rationalist” (i.e. Less Wrong-related) writing about how being rationalist should allow you to reap great material gains in the real world, I can’t help but think about how the successful people I’ve encountered in my life generally don’t have anything like the kind of temperament that LW rationalism encourages.

The successful people I know, by and large, are not science fictional mastermind types who concoct complex plans and exploit counter-intuitive second- or third-order consequences of their actions that most people wouldn’t think about.  They don’t, generally, do many counterintuitive or even clever things.  Mostly, they are just very good at what they do (whether by nature, training, or some combination of the two), and they do it frequently and persistently.  They do not have the kind of temperament that leads to constant worries about second-order effects and unintended consequences and motivated reasoning; if anything they have the opposite, a smilingly unreflective, quiet sort of mind that can let them sit down and grind away at their work for the (n+1)st day without worrying about what the opportunity costs are.

Output is efficiency times time.  Being good at what you do and doing it a lot – plus of course luck and uncontrollable accidents of birth – are what make for real-world success.  The amount of extra output that domain-general “rationality” will get you, in any particular domain, is dwarfed by the amount of output that you can get from good work habits and domain-specific competence.  I can’t really argue for that, I can’t cite any studies or anything and it’d be misleading to pretend I could.  But it’s the strong impression I’ve gotten from the anecdotal evidence that comprises my life.

For some reason, the fact that I’m giving over mental space and blog space and time in my finite lifespan to (thinking/talking about) this weird candy guy is rankling me in a way that, say, giving those things to Eliezer Yudkowsky never did

nostalgebraist:

Friendly AI Theory is like Aspect Inversion Theory, except about real life

This is probably the best joke I’ve ever made

I CAN’T ESCAAAAAAAAPE
(from Unapologetic by Francis Spufford, a book by a Christian intending to give an honest account of Christian emotional experience, which I’m reading because Andrew Rilstone recommended it)

what is bayesianism? we (i) just don’t know

EDIT, 6/9/2017:

This three-year-old post seems to get a fair amount of traffic relative to everything else I’ve written on this subject, probably because it is the closest thing I have to a single “position on Bayes” post.  It was written when I was much newer to these subjects than I am now; I am still skeptical of Bayesianism, but with a somewhat different emphasis.  (I am much less concerned now with whether conditionalization is the right diachronic rule, and much more concerned with the lack of good Bayesian machinery to deal with the introduction of new hypotheses and with logical non-omniscience generally.)
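(For readers unfamiliar with the term: conditionalization is the diachronic rule saying that after you learn evidence E with certainty, your new credence in any hypothesis H should equal your old conditional credence. This is a standard textbook statement, not a quote from the post:)

```latex
P_{\mathrm{new}}(H) \;=\; P_{\mathrm{old}}(H \mid E)
\;=\; \frac{P_{\mathrm{old}}(E \mid H)\, P_{\mathrm{old}}(H)}{P_{\mathrm{old}}(E)}
```

In particular, if a hypothesis was never formulated, or was assigned zero prior, conditionalizing can never raise its probability, which is one way to state the new-hypotheses problem mentioned above.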

I should probably write an up-to-date “position on Bayes” post one of these days, and when I do I will link it here.  (ETA: I have written one, it is here.  Read it instead.)  In lieu of that, you can find more up-to-date arguments scattered throughout my Bayes tag.


OK, apparently part of what I am going to do with this sick day, while the caffeine is still convincing me I’m not really sick, is to write this post, which I’ve had in my head for a long time but still haven’t written down.

tl;dr: after a bit of personal narrative this will turn into “reasons I’m not a Bayesian” or “reasons I don’t understand why other people are Bayesians, although maybe they have good reasons and I just haven’t heard about them yet.”

Maybe 6 years ago or so, I learned the word “Bayesian.”  This word seemed to refer to a particular philosophical position that was believed by a lot of smart people (note: me from six years ago was much more keen than current me on placing great weight on concepts like “smart people”).  Some of these people were bloggers I was reading at the time, but some of them were academics.  I knew that there were rival positions, like something called “frequentism,” but all the smart people seemed to be Bayesians.  I wanted to be a Bayesian too, but I told myself that first I should probably figure out what Bayesianism was.

I only properly tried to do this a number of years later.  I had read some popular resources about Bayesianism, but they weren’t very satisfying, so I checked out John Earman’s academic book “Bayes or Bust?” from the library and started reading it.  I didn’t get very far.  Partially this was because I was trying to read it during my first semester of grad school while taking a full courseload and studying for imminent quals.  But partly it was because Earman’s book was full of numerous exceedingly complicated and subtle arguments both in favor of and against Bayesianism.  The amount of heavy shit – both mathematical and philosophical – I’d have to think through before reaching a position on Bayesianism was very intimidating.

But if this was the state of affairs, why were there so many Bayesians?  Had they passed through these trials by fire unscathed?  Did they have lower philosophical standards than the ones that I, perhaps quixotically, was trying to maintain?  Was there a middle road between the pop presentations, like Eliezer Yudkowsky’s – which weren’t nearly serious enough for me – and the presentations like Earman’s, which were so serious they scared me off?  And again: if this was all so hard to make sense of, whence all these Bayesians I kept meeting?

I still don’t know the answer to any of these questions.  Below, I’m going to try to talk a little bit about what Bayesianism appears to be, to me, and why it doesn’t seem to be intuitive (according to the arguments in its favor I think I actually understand, which is not all of them).

Keep reading