Randall Munroe’s finally discovered the world’s greatest vein of nerd comedy material and he still can’t make a good joke

like have you ever seen “lady in the water”. you know how the character has a bizarre mystical dilemma and he tries to solve it by asking the movie critic for help about identifying who would be the relevant characters were this a movie? and the movie critic names some characters and the guy just…
So a lot of people have asked me to take a look at the Yudkowsky writing guide, and I will eventually (first I have to finish HPMOR, which is taking forever because I’m incredibly bored with it, but I HAVE MADE A COMMITMENT; hopefully more HPMOR liveblogging after Thanksgiving).
But I did hit…
This is also why The Magicians was so bad. (One of the reasons, anyway.)
Hey nostalgebraist, I have a question. (Asking here and not in the ask box because (1) my followers might be interested, and (2) it’s long. Hope you don’t mind, and obviously, feel free not to answer!)
I’m a grad student in… well, I don’t know what field I’m in, but let’s say computational…
I don’t think any of this is especially objectionable. The problem is identifying the “intelligence” (or other desired quality of any concept formation method) specifically with the Bayesianness, rather than with the whole thing, including the (possibly complicated and/or clever) methods of prior construction. You can be a perfectly consistent Bayesian who still reasons poorly because of bad prior or inefficiency, and likewise you can be a very good reasoner while still not being Bayesian (heuristics are there for a reason, after all).
This makes sense, and I think I agree with you, though I’ve never wondered where the “intelligence” or “rationality” is in a system. But you’re right; implementing Bayesian reasoning is not a necessary or sufficient condition for intelligence.
I guess this is somewhat complicated by the issue of Marr’s three levels. Marr’s three levels are about the fact that when you are building a model of the mind, it can be at various different levels of description. Like, you can describe the mind at the neural level, specifying where the neurons are and what part of the brain is active when processing this particular thing. Or you can describe the mind at what Marr calls the “computational level” (though I find that term misleading), where you explain what goals the system is trying to accomplish (that is, what computation it’s trying to perform). Or you can be somewhere in the middle, and describe how the human mind actually accomplishes those goals (using heuristics and so on), without going all the way down to the neural level.
Bayesian models are supposed to operate at the computational level, explaining what goals the system has, and how it might behave if there were no resource constraints/etc. So when researchers build Bayesian models, they are not saying “the human mind literally implements this Bayesian model, down to actually storing probabilities”. They are saying “this is what the mind is trying to accomplish, and because we know its goals, we can do a better job of understanding how those goals are actually implemented using all these complicated heuristics”.
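A toy sketch of that split, entirely my own invention (not from Marr or any actual cognitive-science paper): a computational-level ideal, full Bayesian inference of a coin’s bias over a grid of candidate values, next to an algorithmic-level heuristic, a crude running frequency, that approximates the same goal without storing any probabilities.

```python
import random

random.seed(0)

# Computational level: the ideal. A grid posterior over candidate coin
# biases, updated by Bayes' rule on each observed flip.
grid = [i / 100 for i in range(1, 100)]
posterior = [1 / len(grid)] * len(grid)

def bayes_update(posterior, flip):
    # likelihood of the observed flip under each candidate bias
    liks = [b if flip else (1 - b) for b in grid]
    unnorm = [p * l for p, l in zip(posterior, liks)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def bayes_predict(posterior):
    # posterior predictive probability that the next flip is heads
    return sum(p * b for p, b in zip(posterior, grid))

# Algorithmic level: a cheap heuristic that just tracks a running frequency,
# with no explicit probabilities stored anywhere.
heads = total = 0
def heuristic_predict():
    return heads / total if total else 0.5

true_bias = 0.7
for _ in range(200):
    flip = random.random() < true_bias
    posterior = bayes_update(posterior, flip)
    heads += flip
    total += 1

print(round(bayes_predict(posterior), 2))   # both end up close to 0.7
print(round(heuristic_predict(), 2))
```

The point being: the heuristic tracks the ideal closely here without “literally having probabilities and doing the math with them”, which is exactly the relationship the computational-level story allows for.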
So yeah, if people on LW see Bayesian models and think “it is impossible to reason well unless you literally have probabilities and do the math with them”, they are being super dumb.
[snip]
To clarify a bit about what I’m criticizing LW/Yudkowsky for here: I don’t think they literally believe that you have to have explicit probabilities in your head and explicitly update them, but they do seem to think that “intelligence” or “rationality” (EY rarely distinguishes the two in this context) amounts to being Bayesian, effectively if not explicitly.
Of course this runs into the problem that it provides no way to compare different Bayesian methods, even though one may perform terribly and one may perform excellently (at some task).
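To make that concrete with a toy example of my own (all the numbers and labels here are made up for illustration): two agents that both do textbook Bayesian updating on the same coin flips, differing only in their priors, and one racks up far worse predictive loss than the other. Bayesianness per se doesn’t distinguish them.

```python
import math
import random

random.seed(1)

grid = [i / 100 for i in range(1, 100)]

def normalize(ws):
    z = sum(ws)
    return [w / z for w in ws]

# Two fully Bayesian agents; the only difference between them is the prior.
# The "dogmatic" prior bets almost everything on the bias being under 0.2.
priors = {
    "uniform": normalize([1.0] * len(grid)),
    "dogmatic": normalize([100.0 if b < 0.2 else 1e-6 for b in grid]),
}

true_bias = 0.8
flips = [random.random() < true_bias for _ in range(50)]

log_loss = {}
for name, post in priors.items():
    loss = 0.0
    for flip in flips:
        # predictive probability of heads, before seeing the flip
        p_heads = sum(p * b for p, b in zip(post, grid))
        loss -= math.log(p_heads if flip else 1 - p_heads)
        # textbook Bayes update -- identical code for both agents
        post = normalize([p * (b if flip else 1 - b) for p, b in zip(post, grid)])
    log_loss[name] = loss

print({k: round(v, 1) for k, v in log_loss.items()})
```

The dogmatic agent never violates the probability axioms for a moment; it just pays a large log-loss penalty before its sliver of mass on the truth takes over. Any standard that can’t tell these two apart isn’t a standard of “intelligence”.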
Compare, for instance, this early essay on AI design to posts like this and this, in which he rebukes his earlier self for thinking in terms of mere “bags of tricks” rather than the One True Way, the “math of intelligence,” which dictates what a reasoner must do rather than merely providing tricks or devices it can use.
Note the odd disconnect here: the earlier essay is full of arguments (you’d know better than me whether they’re any good) about how an AI should form concepts on the basis of sense inputs. The later posts say that this was all misguided, because it was just playing around with ideas that sounded nice, rather than deriving what must be done from the One True Way. But surely Bayesianism itself doesn’t tell you exactly how to form concepts – this is why there are multiple Bayesian models of concept formation! – and it certainly doesn’t tell you what to specify a prior over, given limited resources and only sense data to update on (arguably this is the purpose of concept formation).
So the effect of Yudkowsky’s “Bayesian enlightenment” has been this regression where instead of talking about actual AI design issues, he talks about “idealized” Bayesian agents like AIXI and about angels-on-the-head-of-a-pin problems like “how would an AIXI-like agent come to have a self-concept?” LW in general seems very unconcerned with the details of actual AI design and this seems to be justified somehow by Bayes, though the justification doesn’t actually make sense, as I explained above.
(via untiltheseashallfreethem)
Hey nostalgebraist, I have a question. (Asking here and not in the ask box because (1) my followers might be interested, and (2) it’s long. Hope you don’t mind, and obviously, feel free not to answer!)
I’m a grad student in… well, I don’t know what field I’m in, but let’s say computational…
I don’t think any of this is especially objectionable. The problem is identifying the “intelligence” (or other desired quality of any concept formation method) specifically with the Bayesianness, rather than with the whole thing, including the (possibly complicated and/or clever) methods of prior construction. You can be a perfectly consistent Bayesian who still reasons poorly because of bad prior or inefficiency, and likewise you can be a very good reasoner while still not being Bayesian (heuristics are there for a reason, after all).
I don’t know much about this specific field but my biased guess would be that much of the conceptual meat is in the methods of prior construction, and that “non-Bayesian” alternatives could probably be reformulated as Bayesian methods with different priors.
It prescribes behavior for creatures without priors by telling them to act like they have a prior, with no proof that this produces good results.
If we’re specifically talking about the way LW uses it in conversations about intelligence and AI — where it is proposed as a design principle for non-human AIs rather than a norm for humans — the problem is different: it encourages side-stepping of any discussion of how intelligence really works by packing all of the workings into prior construction, and then not talking about prior construction. Yudkowsky_2001’s writing is full of detailed discussion of how an AI might form concepts from observations and stuff like that; in the LW years this is all abstractly tossed into a box called “the prior” and then ignored.
(Sometimes we hear of the prior being constructed using Occam’s Razor over all computable hypotheses, essentially the ultimate sacrifice of speed for generality. The older AI concepts did justice to the idea that one might need to form useful concepts quickly by judicious reductions of the hypothesis space: the human visual system won’t work in environments where light doesn’t behave the way it does in the real world, but then, it never needs to. AIXI spends 10 years becoming very certain that the laws of physics are not the laws of Pac-Man, and then promptly gets eaten by a tiger, who never needed to ask the question to begin with.)
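For what it’s worth, here’s a toy numerical version of that point, mine and nothing to do with real AIXI (which is uncomputable anyway): a Bayesian learner over deterministic hypotheses needs roughly log-of-the-hypothesis-count observations before only the truth survives, so a judiciously small hypothesis space singles out the truth much faster than a vast general one.

```python
import random

random.seed(2)

# The true environment: a simple deterministic bit source.
def truth(t):
    return t % 3 == 0

def steps_to_single_out(num_rivals, max_steps=64):
    # Rivals are deterministic hypotheses that guess each bit at random,
    # a stand-in for "all the wrong programs" in a big hypothesis space.
    rivals = [
        [random.random() < 0.5 for _ in range(max_steps)]
        for _ in range(num_rivals)
    ]
    alive = list(range(num_rivals))
    for t in range(max_steps):
        bit = truth(t)
        # Bayesian updating with 0/1 likelihoods: a wrong prediction drops
        # a hypothesis to zero posterior, so updating = eliminating.
        alive = [i for i in alive if rivals[i][t] == bit]
        if not alive:
            return t + 1  # only the true hypothesis remains
    return max_steps

small = steps_to_single_out(15)      # 16 hypotheses incl. the truth
huge = steps_to_single_out(65_535)   # 65,536 hypotheses incl. the truth
print(small, huge)
```

The small space concentrates on the truth in a handful of steps; the huge one takes roughly four times as long here, and the gap grows with the log of the hypothesis count, which is the whole Pac-Man-versus-tiger problem in miniature.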
Probably not, because I don’t really find it all that interesting. Where it’s wrong, it’s boringly wrong (e.g. misuse of the concept of utility which he later beat himself up about in LW posts), and it isn’t surprisingly good, the way some of his vintage AI stuff is surprisingly good.
bpd-dylan-hall-deactivated20190:
I like some of his ideas, but I think he carries signalling too far. Reading his blog tends to make me feel sad, because aesthetically I’m really the “people are great” sort and he is… really not. He may or may not be the reason lesswrong and neoreaction are next-door neighbors in online socialspace, given that Heartiste is on his blogroll. If so, I wish he wouldn’t have done it, because I think us being neighbors is utility-decreasing for both lesswrongers and neoreactionaries (I mean, judging from the amount of vitriol they direct at our genderweirdness, polyamory, tendency to be friends with each other, etc.).
Agreed. Also find his writing style annoying; so many OB posts are him going “look at this clever hypothesis I thought of” without any real depth or substantiation.
I remember this being a very distinctive and annoying quality of Hanson back in the original Overcoming Bias days (I haven’t read him since). He had this strange fixation on brevity — his responses in arguments would often be one or two sentences without a clear line of argument or presentation of evidence, his posts were often unsatisfyingly brief, and he seemed invested in the site policy that comments were supposed to be very short (otherwise they should be posts, not comments — but even his posts were tiny).
There’s a great deal of tension in LW culture between 1) careful, skeptical argument with extensive sourcing, intellectually scrupulous but sometimes verging on uselessness (“I can’t figure out politics because it’s too complicated”), and 2) briefly stated, sweeping claims that suggest, but do not source, some great familiarity with relevant evidence.
SSC on a good day exemplifies #1, and Hanson (along with LW-adjacent reactionaries like nydwracu and Nyan Sandwich) exemplifies #2. Obviously, I prefer #1. I find writing in the #2 mode pretty much useless because even if the ideas are interesting, the style makes it very hard to know how to research them further.
as a yudkowsky hipster, I prefer Eliezer_2001
like seriously there’s no way of reading this as reflecting anything but a decline imo
he once understood that “intelligence” was a complicated interacting set of machinery engineered to do a bunch of specific, non-arbitrary things in concert, and now he thinks “intelligence” is being more as opposed to less Bayesian, which in a physically constrained world where you can’t have priors over everything means very little
like for someone who talks so much about IQ you’d wonder exactly how he thinks a typical IQ test measures conformity to Bayesian reasoning (and how do you test Bayesian reasoning when you don’t know the subject’s priors? i mean you probably could in principle but there’s no way to re-interpret any intelligence test as doing that)