rubegoldbergsaciddreams:

nostalgebraist:

Anyway, while looking over the twitter feed of that “predictive text” guy I found this, which I had never seen before (although @deusvulture says it’s famous)

Recurrent neural networks: what the fuck

(What’s especially impressive is that the parameters were fit using an individual piece of consumer hardware.  High-end consumer hardware, but still.  This isn’t just throwing giant clusters at the problem)

if you haven’t, check out talcos’s MTG card generation using RNNs (can give link if you want). It’s where I learned about how hype RNNs are.

I have, and they’re hilarious, especially the “meaningful, novel, but profoundly useless cards”:

* When $THIS enters the battlefield, each creature you control loses trample until end of turn.
* Whenever another creature enters the battlefield, you may tap two untapped Mountains you control.
* 3, : Add 2 to your mana pool.
* Legendary creatures can’t attack unless its controller pays 2 for each Zombie you control.

But it didn’t bowl me over in the way the article linked in the OP did, I imagine because generating M:TG cards is an unusually hard task.  The game has a fairly strict syntax on top of English grammar, and also has a very large card base which already includes most card ideas beneath a certain complexity level (excepting “profoundly useless cards”), so the RNN has three separate problems to solve: “make cards that obey English grammar,” “make cards that obey M:TG syntax,” and “make cards that don’t strike an M:TG player as boring or pointless.”

By contrast, the Shakespeare generation in the OP article is an unusually easy task.  Most of us don’t know much about Early Modern English syntax, and thus are used to Shakespeare sounding grammatically weird in arbitrary ways, so it’s hard to distinguish RNN grammar mistakes from authentic Shakespearean grammar.  Also, Shakespeare is such a prestigious author that imitating him seems especially impressive for any given level of imitation quality.

(Of course I am making these judgments of difficulty in hindsight, with knowledge of how impressive the RNN output actually was, so they should be taken with a grain of salt.)
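(For readers wondering what these character-level models actually compute: a vanilla RNN reads text one character at a time, updating a hidden state and emitting a probability distribution over the next character.  A minimal sketch of that forward pass, in numpy, might look like the following.  This is illustrative only — the vocabulary, hidden size, and random weights are made up for the example, and it is not the code behind the linked results, which used a trained multi-layer network.)

```python
import numpy as np

# Sketch of one forward pass of a character-level vanilla RNN.
# Vocabulary and hidden size are illustrative, and the weights are
# random (untrained), so the output distribution is near-uniform.
vocab = sorted(set("to be or not to be"))
char_to_ix = {ch: i for i, ch in enumerate(vocab)}
vocab_size = len(vocab)
hidden_size = 16

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (hidden_size, vocab_size))   # input -> hidden
Whh = rng.normal(0, 0.01, (hidden_size, hidden_size))  # hidden -> hidden
Why = rng.normal(0, 0.01, (vocab_size, hidden_size))   # hidden -> output
bh = np.zeros(hidden_size)
by = np.zeros(vocab_size)

def step(h, ix):
    """Consume one character index; return (new hidden state,
    probability distribution over the next character)."""
    x = np.zeros(vocab_size)
    x[ix] = 1.0                              # one-hot encode the character
    h = np.tanh(Wxh @ x + Whh @ h + bh)      # recurrent state update
    logits = Why @ h + by
    p = np.exp(logits - logits.max())        # stable softmax
    return h, p / p.sum()

h = np.zeros(hidden_size)
for ch in "to be":
    h, p = step(h, char_to_ix[ch])
# p is now a distribution over the next character; sampling from it
# repeatedly is how the generated text in the linked post is produced.
```

Training is just fitting the three weight matrices by backpropagation through time so that `p` assigns high probability to the character that actually comes next in the corpus — whether that corpus is Shakespeare or M:TG card text.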

(via eikotheblue)
