So of course having read that RNNs article last night and having also remembered that @uploadedyudkowsky exists I simply had to try generating material for @uploadedyudkowsky using char-rnn rather than a Markov chain
So far recurrent!Yudkowsky is still young:
something behaving, it were muman"?… sometimes any experience from fasting to get observe or this. If I know that the subjects downdy metait incommentation on the fultion. They Trace to be tosefare on the keyich very bit of humanf on your betwowulding metaption. If it else will have place who no matter of unyer overree, problems: But now trying than you even of the theor measuring, many lookness. no, or overrood to spet in the could. That people’s the physics. (And their mettod of my unceasion, why were jult questions, with switting of uncludnent from guess,“ this befine witcoun it a disingeed with green - tonateing has persons.
But it is iterating as we speak.
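For contrast, the character-level Markov chain approach the bot previously used can be sketched roughly as follows. This is a minimal illustration, not the bot's actual code; the function names, the order-3 default, and the dead-end restart behavior are all my own choices:

```python
import random
from collections import defaultdict

def build_markov(text, order=3):
    """Map each length-`order` context to the list of characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, order=3, length=200, seed=None):
    """Sample text by repeatedly picking a random successor of the last context."""
    rng = random.Random(seed)
    out = rng.choice(list(model.keys()))  # start from a random context
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # dead end: restart from a fresh random context
            out += rng.choice(list(model.keys()))
            continue
        out += rng.choice(choices)
    return out
```

The key limitation, and why char-rnn output feels so different, is that a Markov chain only ever sees the last `order` characters, while an RNN's hidden state can carry information across the whole sequence.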
That looks pretty good already! Are you just running it on your laptop/desktop?
Yeah, laptop, and using my CPU too. (My GPU should be able to work with OpenCL, but it failed to install and I didn’t feel like figuring out why.) This is after 1000 iterations on a 2-layer network with 512 hidden nodes, which took about 50 minutes.
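For reference, the recurrence a network like this learns can be sketched as a single-layer vanilla char RNN in numpy. This is a minimal illustration only: char-rnn itself is Torch/Lua and defaults to LSTM cells, and the 2-layer, 512-unit network described above is considerably larger; every name and shape here is my own:

```python
import numpy as np

def rnn_step(x_idx, h_prev, Wxh, Whh, Why, bh, by):
    """One step of a vanilla char-level RNN: consume one character index,
    update the hidden state, and return a distribution over the next char."""
    vocab = Wxh.shape[1]
    x = np.zeros(vocab)
    x[x_idx] = 1.0                              # one-hot encode the input char
    h = np.tanh(Wxh @ x + Whh @ h_prev + bh)    # recurrent hidden-state update
    logits = Why @ h + by
    p = np.exp(logits - logits.max())           # softmax over the vocabulary
    return h, p / p.sum()

def sample(seed_idx, h, params, n, rng):
    """Generate n characters by feeding each prediction back in as input."""
    Wxh, Whh, Why, bh, by = params
    idx, out = seed_idx, []
    for _ in range(n):
        h, p = rnn_step(idx, h, Wxh, Whh, Why, bh, by)
        idx = rng.choice(len(p), p=p)
        out.append(int(idx))
    return out
```

Training fits the five weight arrays by backpropagation through time; the samples quoted above are what you get from the `sample` loop partway through that process, when the network has learned letter statistics but not yet words.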
Anyway, while looking over the twitter feed of that “predictive text” guy I found this, which I had never seen before (although @deusvulture says it’s famous)
Recurrent neural networks: what the fuck
(What’s especially impressive is that the parameters were fit using an individual piece of consumer hardware. High-end consumer hardware, but still. This isn’t just throwing giant clusters at the problem)
If you haven’t, check out talcos’s MTG card generation using RNNs (can give a link if you want). It’s where I learned about how hype RNNs are.
I have, and they’re hilarious, especially the “meaningful, novel, but profoundly useless cards”:
* When $THIS enters the battlefield, each creature you control loses trample until end of turn.
* Whenever another creature enters the battlefield, you may tap two untapped Mountains you control.
* 3, : Add 2 to your mana pool.
* Legendary creatures can’t attack unless its controller pays 2 for each Zombie you control.
But it didn’t bowl me over in the way the article linked in the OP did, I imagine because generating M:TG cards is an unusually hard task. The game has a fairly strict syntax on top of English grammar, and also has a very large card base which already includes most card ideas beneath a certain complexity level (excepting “profoundly useless cards”), so the RNN has three separate problems to solve: “make cards that obey English grammar,” “make cards that obey M:TG syntax,” and “make cards that don’t strike an M:TG player as boring or pointless.”
By contrast, the Shakespeare generation in the OP article is an unusually easy task. Most of us don’t know much about Early Modern English syntax, and thus are used to Shakespeare sounding grammatically weird in arbitrary ways, so it’s hard to distinguish RNN grammar mistakes from authentic Shakespearean grammar. Also, Shakespeare is such a prestigious author that imitating him seems especially impressive for any given level of imitation quality.
(Of course I am making these judgments of difficulty in hindsight, with knowledge of how impressive the RNN output actually was, so they should be taken with a grain of salt.)
I’m just sitting here laughing hysterically at those Magic cards. Mointainspalk and Tromple look like excellent ability keywords.
what’s the context? it looks like an example of image recognition software output, but i wouldn’t be surprised if it was something completely different
Yeah, it’s from an RNN-based project that generates natural language captions for images, by the same guy who wrote that “Unreasonable Effectiveness of RNNs” post.