Talk about “AI” in the press these days tends to conflate two things:

  1. Machine intelligence in general, a category that includes e.g. hypothetical super-intelligent machines, hypothetical machines based on technologies not yet invented, and robots from science fiction
  2. A specific bundle of technologies which has gotten a lot of hype and investment in the past 5-10 years

#1 is the subject of a rich vein of discussion and speculation going back decades.  Turing, Asimov, et al. did just fine speculating about AI without needing to know about the thing that industry hype currently calls “AI.”

You don’t need to know what a “convolutional neural network” is to worry about what would happen if a machine were smarter than you.

Because I work on #2 professionally, I get a lot of spam emails and targeted ads that say things like “Accelerate your AI development cycle with ProductName” or “ProductName: scalable AI solutions.”

The word “AI” in these ads has a recognizable meaning, and it is not the same meaning used in the sentence “Elon Musk founded OpenAI because he was worried AI might cause human extinction.”

---

Because the press conflates these things, the average person tends to do so, too.  It’s common even among people who make consequential decisions involving “AI,” like politicians, executives, and economists.

I usually feel like I’m being pedantic about this, but I’m starting to think it’s a real problem.

The conflation encourages people to imagine #2 preceding #1 in time, as though “AI” were some specific thing discovered in a research lab in like 2002, whose properties were later extrapolated to scary hypotheticals.

It’s understandable that AI research companies would make this conflation.  It’s a great marketing trick, if you can pull it off, to convince the public that when they encounter speculation about arbitrarily powerful future technologies, it’s really about the concrete thing your company does right now.

It might be okay if the public willingly believed this story, but they seem to have bought it (via the press) without realizing what was happening.  They don’t know they’re letting someone else draw the lines inside their heads.  They may learn all sorts of facts about the region in their mental map labeled “AI,” but they don’t attach them to the right nouns, even if each fact is true of some noun.
