@argumate’s taken up the MIRI critic role for the time being, so there’s a lot of that kind of stuff crossing my dash.
Might as well put my own usual objection to “recursive self-improvement” out there (I think I got part of this from su3, but I’m not sure):
We have exactly one example of recursively self-improving intelligence: humans developing technology. The printing press made it easier to develop new things. Physics let us build computers that can instantly do complex calculations that would have taken earlier physicists hours, or years, or centuries. Etc.
But this is pretty crude stuff, where we’re just building certain sorts of supplements to our brains that we’re able to figure out how to make. To be more analogous to what the AI is supposed to be doing, we’d have to be re-wiring our own brains or the like.
But we don’t do that, not only because it’s physically infeasible right now, but because we don’t understand our brains well enough to suggest improvements. We know a lot of stuff about how the brain is set up, but we don’t understand how it works well enough to look at parts of it and say “you know, this could be done better.”
Arguably a machine intelligence would be better at this sort of thing because it is made of “code” and presumably it is good at “coding” along with everything else. But this is like saying “our brains are made of cells, and we can do a lot of things to cells, so … ” The problem is that the design, rather than just the substrate, of a thing as intelligent as a human is so complicated it’s hard for a human to understand.
It is conceivable that a human-level machine intelligence might not face this problem. But lots of things are conceivable, and futurism is notoriously difficult. This is a situation where we have only one example to extrapolate from, and the one example didn’t work out like the “intelligence explosion” scenario.
(If we could code a human-level machine intelligence from scratch, we’d have to understand human-level intelligence well enough to do so, which would constitute a proof that humans can understand human-level intelligence. But that’s precisely what we don’t have any particular reason to believe.)
I’m confused. If we have AI, doesn’t that mean we’ve coded a human-level machine intelligence from scratch, and therefore we understand human-level intelligence? And unless we’re keeping our understanding really secret, doesn’t that mean the machine also understands human-level intelligence?
The exception I can see is if we evolve an AI through something like genetic algorithms, or if we throw together some neural nets and with enough training everything magically works out. But see Part III here.
That was what I tried to cover in the last paragraph. The post is about the possibility that, in general, a (nontrivially) intelligent being isn’t smart enough to understand its own design. If this is true, yes, (1) we won’t be able to code a human-level AI from scratch, and also (2) if we create a human-level AI some other way, it won’t be able to do a “hard takeoff.”
The picture you give in Part III of that post seems different from the “intelligence explosion” as I’ve seen it presented. It’s conceivable that human intelligence was just a matter of “take an ape brain and add more neurons” and a superintelligence will just be an ape brain with yet more neurons. But in that case, it isn’t recursive self-improvement, it’s just “I don’t know how I work, but I know I get smarter if you give me more neurons, so give me more neurons.”
The whole idea of an intelligence explosion, as I understand it, involves a being designed to modify itself which is capable of coming up with better and better improvements to itself. The “ape with bigger brain” doesn’t do this – it just knows that it’d be smarter with more neurons, which we know as well as it does. And there wouldn’t be any incentive to let it self-modify – it’s not like we think it will notice it has too few neurons when we wouldn’t.
(I could easily train one of my machine learning algorithms on its own performance and get it to notice that it does better when I give it more memory or CPU time, and even let it demand more of these things. This might crash my computer, but isn’t exactly scary.)
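To make that parenthetical concrete, here’s a minimal sketch of what that kind of “notice I do better with more resources, so ask for more” loop could look like. Everything in it – the toy scoring function, the budget units, the threshold – is invented for illustration; it’s not any particular real system.

```python
# Purely illustrative sketch (not anyone's actual system): a "learner" that
# monitors its own performance as a function of the resources it's given and
# asks for more when more seems to help. The scoring function, budget units,
# and threshold below are all made up.

import random

def run_with_budget(budget):
    # Pretend training run: score improves with budget, with diminishing
    # returns and a little noise.
    return 1.0 - 1.0 / (1.0 + budget) + random.gauss(0, 0.01)

history = []   # (budget, score) pairs observed so far
budget = 1     # arbitrary units of memory / CPU time

for step in range(10):
    score = run_with_budget(budget)
    history.append((budget, score))

    if len(history) >= 2:
        (b_prev, s_prev), (b_now, s_now) = history[-2], history[-1]
        gain = (s_now - s_prev) / max(b_now - b_prev, 1)
        if gain > 0.001:
            budget += 1   # the program "demands" more resources
    else:
        budget += 1

    print(f"step {step}: budget={budget}, score={score:.3f}")
```

Note that nothing in this loop is redesigning the learner itself; the “self-improvement” bottoms out at asking for more of a resource it already knows about, which is the ape-with-more-neurons scenario, not the intelligence-explosion one.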
(via slatestarscratchpad)
