turboshitnerd asked: i nominate you for "let's talk about homestuck part 2" of euro talk simulator aka "let's talk about how homestuck sucks so much now" (but seriously i liked the episode on Henry Darger and itd be cool if you were on again, also i need to watch more of these)

Thanks, but I think (and given your use of “seriously” I get the sense you agree) that my opinions about how Homestuck sucks now are so well-represented in text that it would be kind of redundant to bring me on to voice them again

Maybe I can be on trucks again once I get a new obsession — alas, the current ones have been working for me for quite a while

snarp replied to your post: If I got hit by a meteor right now, w…

I’m sleep-deprived and was just reading Please Save My Earth, and this post confused me badly for several seconds. ‘Yudkowsky that is not how the Moon Dreams ended I don’t think you’re REALLY the alien priestess Mokuren.’

To be fair, Yudkowsky really being the alien priestess Mokuren would not be significantly less absurd than many of his actual qualities

(N.B. I have no idea who the alien priestess Mokuren is)

eudaemaniacal replied to your post: If I got hit by a meteor right now, w…

tasked with recognizing another yud

you’re a yud, harry

If I got hit by a meteor right now, what would happen is that Michael Vassar would take over responsibility for seeing the planet through to safety, and say “Yeah I’m personally just going to get this done, not going to rely on anyone else to do it for me, this is my problem, I have to handle it.” And Marcello Herreshoff would be the one who would be tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that, or I’d be working with them.

Here’s your morning Yudquote (said in 2010).  Thank you, thank you, this is what I’m here for

A note for su3su2u1 – since you’ve been engaging with Bayesians recently, I just wanted to ask if you’ve looked through my tag “you just gotta place your faith in a theorem called bayes”?  Maybe you were around for some of this, I can’t remember, but if you go back in that tag there are some long conversations between me and several LW-aligned Bayesians that shed some light on what they believe.

In particular, I got the sense that Dutch Books are not a big deal for LWers.  I think they are much more interested in justifying Bayes using Cox’s Theorem (which requires that you find the Cox Axioms intuitive; there’s also the synchronic vs. diachronic issue which we talked about ad nauseam a while ago, all in the tag).

The overall sense I got is that LW Bayesians are very into Jaynes, and come to their views either by getting them directly from Jaynes, or building off of Jaynes when he doesn’t directly address an issue.  (Sure, you could learn that from reading LW and seeing how Yudkowsky rhapsodizes about Jaynes, but it may be worth seeing how it looks in a dialogue with a skeptic, i.e. me.)

aprilwitching-deactivated201808 asked: "old-fashioned linear time neurons" rob

old-fashioned linear time neurons

incapable of comprehending that there are 4 days in only 1 earth rotation

aprilwitching-deactivated201808 asked: rob nostalgebraist i went back to ur friends liveblog of the mega-long, didactic, terrible harry potter fanfic but i collapsed in giggles over the phrase "old-fashioned linear time neurons" and kinda had to back out for a while. also, i thought his exploration of what might "actually" happen if u transfigured, say, wood into water was rly cool and easy for a layperson to understand; why is the actual fic not like *that* if its meant to be about science? idk hard sci stuff but like, come on.

(really late response since I somehow missed this message the first time around)

Yeah it’s weird – I think the fic itself is supposed to be less about science than about “rationality” in the sense of making good decisions.  A lot of the fic (or what I read of it) centers on involved discussions of various characters’ decisions/ideas and whether they’re right or wrong, none of which has much to do with science per se.

(That isn’t to say it’s done well – the main character’s version of “rationality” is flawed, in part intentionally, but the reader is left to decide which parts of it are flawed and which aren’t, which somewhat defeats the purpose.  It strangely manages to be morally heavy-handed while not being clear about what it is trying to teach.  At one point Voldemort [!] orders some people to beat up Harry to “teach him how to lose,” and Harry is just like “yeah that was a good idea, I needed to learn how to lose” [!!], and Yudkowsky had to make author’s notes explaining that Voldemort is not being held up as a moral exemplar.  But, like, is Harry’s response supposed to be the right one?  What is being taught by that episode?

As the fic goes on, Harry and Voldemort build a friendship and there is this sense that they get along because they are more rational than the other characters, yet they make strange decisions like that?  And the author kept having to write author’s notes saying “Voldemort is not meant to be emulated” while still portraying him as one of the more rational characters in a story about “the methods of rationality”? ???)

(When I say “Voldemort” I mean “Professor Quirrell” but he’s clearly supposed to be Voldemort and the reader is supposed to know this)

nostalgebraist.tumblr.com →

taymonbeal:

nostalgebraist:

veronicastraszh:

youzicha:

nostalgebraist:

There is another problem, which is that AIXI has to learn everything from scratch. Its early decisions, before it has learned how physics works, will be inferior to those made by something that came with a built-in intuitive physics. Of course the advantage of…

Complexity Theory includes the notion of an “oracle”, which is a set of subroutines you attach that give some capability “for free”. Common examples are oracles for real numbers or oracles that instantly solve 3-SAT. An oracle for “human” isn’t much of a stretch, at least not conceptually.

Yes, but complexity theory makes a distinction between “Turing machine” and “Turing machine with oracle” (they have different capabilities), so we can’t just assume that oracles are allowed any time we’re told we have a Turing machine.

I guess I thought the Turing machines in AIXI weren’t allowed to have oracles, and youzicha thinks they are allowed to have arbitrary oracles?  (But the latter case brings up the problems I mentioned in my most recent post.)

Solomonoff induction ranks programs based on their Kolmogorov complexity, not their time complexity. So the concept of an oracle isn’t needed here.

In this context, the thing that returns the results of a “human” calculation shouldn’t be thought of as a magical oracle, but rather as a subroutine. If you’ve got a program that runs on a machine with a one-step “human” instruction, and you want to translate it to a machine that doesn’t have such an instruction, all you have to do is write a subroutine that simulates the “human” instruction, and add its definition to the program. This will make your program longer, but only by a constant—a very large constant in the case of a “human” subroutine, but nonetheless a constant.

See also Wikipedia’s explanation of why we generally don’t worry too much about the choice of description language (or equivalently, of Turing machine architecture) when dealing with Kolmogorov complexity.
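For reference, the “constant” here is the one from the invariance theorem for Kolmogorov complexity: for any two universal machines U and V (say, one with and one without the built-in “human” instruction), there is a constant depending only on the pair of machines, not on the string being described, such that

```latex
K_U(x) \le K_V(x) + c_{U,V} \qquad \text{for all strings } x,
```

where c_{U,V} is roughly the length of a V-interpreter (here, the “human” simulator) written for U.  Note the theorem only guarantees that the constant exists; it says nothing about its size.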

I guess I’m confused by your use of the word “constant” here?

On the machine without the instruction, the program would be very large, much larger than the simplest programs the machine can run.

On the machine with the instruction, the program could be small – a program that does little or nothing but call the instruction would be comparable in size to the smallest programs this machine can run.

So this seems like a case where choice of Turing machine in fact does matter – either the program is short or it isn’t.  Does the Solomonoff prior assign it a high probability (being one of the shortest programs under consideration), or a lower one?

The answer to the above question shouldn’t depend too much on the choice of Turing machine – that’s the idea behind Kolmogorov complexity.  Since it does in this case, we seem to have done something wrong.
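The objection can be made concrete with a toy calculation.  This is only a sketch: real Solomonoff induction is uncomputable, “programs” here are bare bitstrings weighted by 2^(−length), and the 20-bit “simulator” stands in for what would really be an astronomically long subroutine.

```python
# Toy illustration of how the invariance constant shifts a
# Solomonoff-style prior between two choices of machine.

def prior_weight(program: str) -> float:
    """Weight of a bitstring program under a 2^(-length) prior."""
    return 2.0 ** -len(program)

# On machine A, the "human" behavior is a built-in instruction:
# the whole program is a single call, say the 3-bit code "101".
p_on_A = "101"

# On machine B, the same behavior needs an explicit simulator
# subroutine -- a fixed prefix of (here) 20 extra bits.
SIMULATOR = "0" * 20          # stands in for the subroutine's code
p_on_B = SIMULATOR + p_on_A

ratio = prior_weight(p_on_A) / prior_weight(p_on_B)
print(prior_weight(p_on_A))   # -> 0.125
print(ratio)                  # -> 1048576.0, i.e. 2**20
```

The gap between the two machines is “only a constant factor,” but that factor is 2 to the power of the subroutine’s length – enormous for anything human-sized – so whether the program counts as “short,” and hence probable, really does depend on the machine.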

(via taymonbeal)

nostalgebraist.tumblr.com →

veronicastraszh:

youzicha:

nostalgebraist:

There is another problem, which is that AIXI has to learn everything from scratch. Its early decisions, before it has learned how physics works, will be inferior to those made by something that came with a built-in intuitive physics. Of course the advantage of…

Complexity Theory includes the notion of an “oracle”, which is a set of subroutines you attach that give some capability “for free”. Common examples are oracles for real numbers or oracles that instantly solve 3-SAT. An oracle for “human” isn’t much of a stretch, at least not conceptually.

Yes, but complexity theory makes a distinction between “Turing machine” and “Turing machine with oracle” (they have different capabilities), so we can’t just assume that oracles are allowed any time we’re told we have a Turing machine.

I guess I thought the Turing machines in AIXI weren’t allowed to have oracles, and youzicha thinks they are allowed to have arbitrary oracles?  (But the latter case brings up the problems I mentioned in my most recent post.)

(via starlightvero)

youzicha:

nostalgebraist:

There is another problem, which is that AIXI has to learn everything from scratch.  Its early decisions, before it has learned how physics works, will be inferior to those made by something that came with a built-in intuitive physics.  Of course the advantage of AIXI is that it doesn’t rule anything out: given evidence for quantum mechanics, it can switch its belief to quantum mechanics without having to overcome pre-quantum intuitions.  But it’s not clear that this is any better than a system which starts with good intuitive heuristics, and can build itself new heuristics when they are needed and switch heuristics as needed.  (E.g. something that starts with human-like physical intuition, develops quantum mechanics, and then self-modifies to have a package of quantum intuitions which it can apply in appropriate cases.)  In short, pre-packaged info about one’s environment will help one early on, and need not hamper one forever if it turns out to be subtly flawed.

Well, all Kolmogorov-complexity-related stuff only makes sense in the limit of more and more data, so I guess it’s not too surprising that AIXI, too, is only an ideal in the limit?

You have the choice of which universal Turing machine to use when defining Kolmogorov complexity, so if you are worried about AIXI performing poorly during the “boot up” phase, maybe you can pick a machine which has a human brain on the side, and allows subroutine calls to, e.g., the human visual system as one-step operations? Basically, I think what AIXI is trying to formalize is an “optimal learner” (and decision maker). Faulting it for not being an “optimal knower” seems odd, because the starting knowledge base is kept abstract in the definition (by the choice of Turing machine).

That’s interesting.  I hadn’t really realized that that kind of thing counts as a Turing machine.  I guess I need to think/learn more about Turing machines.

However, if what you say is true, then I’m confused about what work is being done by assuming a Solomonoff prior.  The usual justification is something about Occam’s Razor, but if we can do arbitrarily complicated things in one step (albeit a finite number of such things), the link to Occam’s Razor is broken and I’m not sure what steps in to replace it.

In particular, couldn’t you cook up AIXI to have a fairly general prior this way?  I’m not clear on all the Turing stuff, but if you can get the machine to send arbitrary input to a human visual system, couldn’t you also define a function that asks the machine to read out its program as input, then looks up the program in a table and runs some arbitrary program assigned to it by the table, so you get a mapping from AIXI’s strings (and their Solomonoff probabilities) to an arbitrary set of programs?  (The difficulty would then be the relationship between this function’s output and the output of the whole program; perhaps the machine could be set up so that all programs return the output of the look-up function?)
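A minimal sketch of the look-up-table trick just described (all names and the base “machine” are invented for illustration, and a real construction would need care to keep the rigged machine universal):

```python
# Toy version of the lookup-table trick: build a new machine U' that,
# given a program code, first consults a table and runs the reassigned
# program on the base machine U.  Short codes can thereby denote
# arbitrarily complex behaviors, reshaping the induced prior.

def base_machine(program: str) -> str:
    """Stand-in for a universal machine U: just echoes its input."""
    return f"ran {program}"

# Reassign the shortest codes to whatever programs we like.
LOOKUP = {
    "0": "some enormously complex program",
    "1": "another favored program",
}

def rigged_machine(program: str) -> str:
    """U': redirect through the table, fall back to U otherwise."""
    return base_machine(LOOKUP.get(program, program))

def prior_weight(program: str) -> float:
    return 2.0 ** -len(program)

print(rigged_machine("0"))   # -> ran some enormously complex program
print(prior_weight("0"))     # -> 0.5
```

Relative to the rigged machine, a 1-bit code gets prior weight 1/2 while denoting an arbitrarily complex behavior – which is the sense in which the “Solomonoff prior” label, by itself, no longer enforces anything recognizably Occam-like.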

It is not clear to me that every possible prior could be obtained using this trick (if the trick works at all).  But then the difference between “Solomonoff prior” and “arbitrary prior” is merely that the former rules out priors not obtainable by the trick.  I’m not sure what this set consists of, or whether it should be ruled out; in any case we seem to be quite far from Occam now, and in need of a new justification.

(If AIXI is supposed to have abstract starting knowledge, why not just leave the initial prior unspecified?  What is ruled out by the Solomonoff prior, and why should it be ruled out?)

(via youzicha)