Also, at one point Bostrom uncritically describes AIXI (or something close to it) as an ideal reasoner, which is disappointing.
I’m not talking about the computability issue here: even if AIXI is uncomputable (or merely intractably slow), one could still say “it gets the right answer, and I want to get close to that answer computably and quickly,” just as a numerical analyst might say that they want their fast iterative method to closely approximate the answer given by a slow direct method.
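The numerical-analysis analogy can be made concrete with a toy sketch (the system and numbers here are invented for illustration): a slow-but-exact direct method gives the “right answer,” and a fast iterative method is judged by how closely it approaches that answer.

```python
# Toy illustration of the analogy: an iterative method (Jacobi) converging
# toward the answer an exact direct method gives. The 2x2 system here is
# made up for the example.

def solve_direct(a, b, c, d, e, f):
    """Solve the system [[a, b], [c, d]] @ [x, y] = [e, f] exactly
    by Cramer's rule -- the 'slow direct method'."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def solve_jacobi(a, b, c, d, e, f, iterations=50):
    """Approximate the same solution by Jacobi iteration -- the 'fast
    iterative method'. Converges for diagonally dominant systems."""
    x, y = 0.0, 0.0
    for _ in range(iterations):
        # Update both coordinates from the previous iterate simultaneously.
        x, y = (e - b * y) / a, (f - c * x) / d
    return x, y

exact = solve_direct(4, 1, 2, 5, 9, 12)   # exact solution (11/6, 5/3)
approx = solve_jacobi(4, 1, 2, 5, 9, 12)  # iterate lands very close to it
```

The analogous hope for AIXI would be a tractable approximation judged by how closely its decisions track AIXI’s, rather than by running AIXI itself.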
There is another problem, which is that AIXI has to learn everything from scratch. Its early decisions, before it has learned how physics works, will be inferior to those made by something that came with a built-in intuitive physics. Of course, the advantage of AIXI is that it doesn’t rule anything out: given evidence for quantum mechanics, it can switch its beliefs to quantum mechanics without having to overcome pre-quantum intuitions. But it’s not clear that this is any better than a system that starts with good intuitive heuristics and can build itself new heuristics, and switch between them, as needed. (E.g. something that starts with human-like physical intuition, develops quantum mechanics, and then self-modifies to have a package of quantum intuitions which it can apply in appropriate cases.) In short, pre-packaged information about one’s environment will help one early on, and need not hamper one forever if it turns out to be subtly flawed.
Why can’t you feed whatever workable version of AIXI you have all your intuitive physics? As it stands, you’re giving that data only to the designers of whatever program you favor, and not to AIXI itself.
Part of the specification of AIXI (as I understand it) is that it uses the Solomonoff prior. A prior putting (say) extra weight on Newtonian explanations would not be the Solomonoff prior, so it wouldn’t be AIXI.
Ok, but the only reason we have intuitions about Newtonian physics is that Newton was mostly right. In a world with different physics, we’d probably have intuitions closer to the real physics than to Newtonian. And that’s because we (through thousands of years of natural selection, through hundreds of years of science and experiments) have accumulated a freaking huge amount of data. If AIXI were started out with that data, it would shift its posterior to one favoring Newtonian explanations quite soon, before ever being given a real problem. As to how to get that info into AIXI, just feed it a dump of the internet or something. You’re comparing apples and oranges by looking at a “naked” AIXI, but a human-prior program with lots of data. Start the human-prior program out with its intuitions wiped, or start AIXI with the same data that led humans to those beliefs, and AIXI will win hands down.
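The claim here is essentially a point about Bayesian updating, and a toy sketch makes it vivid (the hypotheses, likelihoods, and observations below are invented stand-ins, not real physics or the actual Solomonoff prior): given enough of the data that shaped human intuitions, a learner with a neutral prior ends up nearly indistinguishable from one that started out biased toward the truth.

```python
# Toy Bayesian sketch of the argument above: after updating on shared
# evidence, a neutral-prior learner and a biased-prior learner agree.
# Hypotheses and likelihood numbers are made up for illustration.

def posterior(prior, likelihoods, data):
    """Bayes-update `prior` (dict: hypothesis -> weight) on a sequence of
    observations, where likelihoods[h][obs] = P(obs | h)."""
    weights = dict(prior)
    for obs in data:
        weights = {h: w * likelihoods[h][obs] for h, w in weights.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

likelihoods = {
    "newtonian": {"heads": 0.8, "tails": 0.2},    # predicts the data well
    "aristotelian": {"heads": 0.4, "tails": 0.6},  # predicts it poorly
}
data = ["heads"] * 20 + ["tails"] * 5  # stand-in for accumulated evidence

# Neutral start (analogous to a universal prior) vs. a heavily biased start
# (analogous to built-in Newtonian intuitions).
neutral = posterior({"newtonian": 0.5, "aristotelian": 0.5}, likelihoods, data)
biased = posterior({"newtonian": 0.99, "aristotelian": 0.01}, likelihoods, data)
# Both posteriors end up putting almost all their weight on "newtonian".
```

The prior difference is washed out by the likelihood ratio the data supplies, which is the sense in which feeding AIXI the humans’ data erases the head start of built-in intuitions.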
It seems we were thinking along the same lines – while you were writing this reply, I wrote a post about AIXI’s ability to learn from data, which retroactively works as a reply to your reply.
(via thinkingornot)

