Anyway, while looking over the Twitter feed of that “predictive text” guy I found this, which I had never seen before (although @deusvulture says it’s famous):
Recurrent neural networks: what the fuck
(What’s especially impressive is that the parameters were fit using an individual piece of consumer hardware. High-end consumer hardware, but still. This isn’t just throwing giant clusters at the problem)
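(If you’re wondering what “fitting the parameters” actually looks like here: below is a minimal sketch of the character-level setup the article describes, in PyTorch. To be clear, everything in it is my own illustrative choice, not the article’s actual code: the hyperparameters, the GRU instead of whatever cell they used, and `input.txt` standing in for the training corpus, e.g. the collected Shakespeare.)

```python
# Minimal character-level RNN sketch. Illustrative only: all names and
# hyperparameters here are assumptions, not taken from the linked article.
import torch
import torch.nn as nn

text = open("input.txt").read()  # any long text corpus, e.g. Shakespeare
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in text])

# Training: the target at each position is simply the next character.
for step in range(500):
    i = torch.randint(0, len(data) - 65, (1,)).item()
    x = data[i:i + 64].unsqueeze(0)       # (1, 64) input characters
    y = data[i + 1:i + 65].unsqueeze(0)   # (1, 64) next characters
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: feed the model's own guesses back in, one character at a time.
h = None
x = data[:1].unsqueeze(0)
out = [itos[x.item()]]
for _ in range(200):
    logits, h = model(x, h)
    probs = torch.softmax(logits[0, -1], dim=0)
    x = torch.multinomial(probs, 1).unsqueeze(0)
    out.append(itos[x.item()])
print("".join(out))
```

The punchline is the sampling loop at the bottom: the network just predicts one character at a time and eats its own output, and everything impressive about the generated Shakespeare (or the MTG cards below) falls out of that.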
If you haven’t, check out talcos’s MTG card generation using RNNs (can give a link if you want). It’s where I learned about how hype RNNs are.
I have, and they’re hilarious, especially the “meaningful, novel, but profoundly useless cards”:
* When $THIS enters the battlefield, each creature you control loses trample until end of turn.
* Whenever another creature enters the battlefield, you may tap two untapped Mountains you control.
* 3, : Add 2 to your mana pool.
* Legendary creatures can’t attack unless its controller pays 2 for each Zombie you control.
But it didn’t bowl me over in the way the article linked in the OP did, I imagine because generating M:TG cards is an unusually hard task. The game has a fairly strict syntax on top of English grammar, and also has a very large card base that already includes most card ideas beneath a certain complexity level (excepting “profoundly useless cards”), so the RNN has three separate problems to solve: “make cards that obey English grammar,” “make cards that obey M:TG syntax,” and “make cards that don’t strike an M:TG player as boring or pointless.”
By contrast, the Shakespeare generation in the OP article is an unusually easy task. Most of us don’t know much about Early Modern English syntax, and thus are used to Shakespeare sounding grammatically weird in arbitrary ways, so it’s hard to distinguish RNN grammar mistakes from authentic Shakespearean grammar. Also, Shakespeare is such a prestigious author that imitating him seems especially impressive for any given level of imitation quality.
(Of course I am making these judgments of difficulty in hindsight, with knowledge of how impressive the RNN output actually was, so they should be taken with a grain of salt.)
(via eikotheblue)
