@argumate's taken up the MIRI critic role for the time being, so there's a lot of that kind of stuff crossing my dash.
Might as well put my own usual objection to "recursive self-improvement" out there (I think I got part of this from su3, but I'm not sure):
We have exactly one example of recursively self-improving intelligence: humans developing technology. Developing the printing press made it easier to develop new things. Physics let us build computers, which can instantly do complex calculations that would have taken earlier physicists hours or years or centuries. Etc.
But this is pretty crude stuff: we're just building certain sorts of supplements to our brains that we happen to be able to figure out how to make. To be more analogous to what the AI is supposed to be doing, we'd have to be re-wiring our own brains or the like.
But we don't do that, not only because it's physically infeasible right now, but because we don't understand our brains well enough to suggest improvements. We know a lot about how the brain is set up, but we don't understand how it works well enough to look at parts of it and say "you know, this could be done better."
Arguably a machine intelligence would be better at this sort of thing because it is made of “code” and presumably it is good at “coding” along with everything else. But this is like saying “our brains are made of cells, and we can do a lot of things to cells, so … ” The problem is that the design, rather than just the substrate, of a thing as intelligent as a human is so complicated it’s hard for a human to understand.
It is conceivable that a human-level machine intelligence might not face this problem. But lots of things are conceivable, and futurism is notoriously difficult. This is a situation where we have only one example to extrapolate from, and the one example didn’t work out like the “intelligence explosion” scenario.
(If we could code a human-level machine intelligence from scratch, we’d have to understand human-level intelligence well enough to do so, which would constitute a proof that humans can understand human-level intelligence. But that’s precisely what we don’t have any particular reason to believe.)