
Initially Peter preferred eating the paint to placing it on a canvas. He especially liked the tart flavor of the cobalt blue.

Likely due to Passions’ school-aged target audience, the show often presented large, wild summer storylines, which often took place outside of Harmony. In 1999, a carnival came to town as characters were introduced; 2000 saw the Prom Boat Disaster storyline, and 2001 witnessed the failed double wedding of popular couples Luis and Sheridan and Ethan and Theresa, and their subsequent journey to Bermuda, where Sheridan apparently perished in a boat explosion and Theresa wound up married to Ethan’s ex-stepfather, Julian Crane.

In 2002, Julian and Timmy set out on a journey in the magical land of Oz as Theresa was “executed” for Julian’s “murder”; 2003 saw six characters (Chad, Whitney, Fox, Theresa, Ethan, and Gwen) travel to Los Angeles for the summer (and into October), while in 2004, Luis and Sheridan traveled to Puerto Arena, Mexico, to retrieve his younger sister, Paloma (and ended up finding his missing father, Martin, and her “dead” mother, Katherine).

The plot of the summer of 2005 was a deadly earthquake and tsunami, which destroyed much of Harmony and resulted in the death of James’ mother, Maureen, while 2006 saw the extravagant Passions Vendetta plot,[34] in which Alistair lured seventeen people (Whitney, Simone, Paloma, Chad, Ethan, Theresa, Gwen, Lena, Spike, Jessica, Maya, Noah, Esme, Fancy, Luis, Beth, and Marty) to Rome, where he planned to take over the world with a chalice stolen from the Pope’s private chambers; the plot saw the deaths of Lena, Maya, Alistair, Beth, and Marty.

Summer 2007 saw the resolution of the “blackmailer” storyline as Vincent Clarkson was revealed to be the half-man/half-woman blackmailer, and Luis Lopez-Fitzgerald was saved from execution for Vincent’s crimes by Endora’s spell that turned back time in the execution chamber. In 2008, the show spent its final summer on the air wrapping up its plotlines at a rapid pace, with Alistair Crane being killed once and for all, the final showdowns between the main characters and the newly introduced villains Viki, Juanita, Pretty, and Vincent, Tabitha’s redemption as a born again Christian who sacrifices her powers to save the residents of Harmony, the return of Antonio and his reunion with Sheridan, the mass weddings of Fancy and Luis, Paloma and Noah, Miguel and Kay, and Edna and Norma (the first gay couple ever to go down the aisle on a soap opera), and Gwen and Rebecca being exposed for their crimes as Theresa and Ethan finally married.

exploringtheeldritch:

loki-zen:

michaelblume:

another-normal-anomaly:

michaelblume:

another-normal-anomaly:

Tired: videos depict real things

Wired: your voice and appearance can be easily faked, trust no one

Inspired: this is the situation Stephen Hawking has been in for decades

When Hawking appeared on The Simpsons/Futurama, I always wondered as a kid whether they just used the same synthesizer without saying anything to him, got him to sign off on the script first, or actually had him in the studio.

Also, I’m not super enthused about any of his songs, but it’d be wrong not to mention MC Hawking at this point.

MC Hawking is complete trash and I fucking love it because I am also complete trash.

I’ve had Ninja Sex Party constantly in rotation lately and absolutely do not get to judge anyone.

I wondered about that as a kid myself when he did a cameo in a Hitchhiker’s Guide to the Galaxy radio show (IIRC he played Deep Thought). I remember thinking it was awesome when I found out they did actually get the actual guy.

It’s possible that it would be very difficult to source the exact voice he uses as a synthesizer; they probably don’t make it anymore. (I remember reading in an interview that he uses a very old model which he programs himself - more realistic-sounding voices are now available, but he stuck with this one because it felt like his voice.)

(It would still be easier to make it look like he was saying something than with anyone else, because you could cut it together from recordings of him and it wouldn’t be so noticeable: you wouldn’t need to match it to the lips moving, and most people won’t notice if what he’s saying doesn’t match his body language.)

additional tangent under cut


If I recall correctly, Stephen Hawking’s voice is based on DECTalk, and I think last I checked legit DECTalk software licenses are *really expensive*. DECTalk is also the voice of the Moonbase Alpha astronauts - “aeiou? aeiou! John Madden!” - although I think that’s a later version. Actually, I think maybe DECTalk used to make *hardware* speech synthesizers, and that’s the version Steve got, so there might not even be a software synthesizer available that matches his voice? I’m going to Google this.

I googled and found this, which was a really fun story.

(via exploringtheeldritch)

nostalgebraist:

typicalacademic:

nostalgebraist:

Automatic parsers for natural language are pretty good these days.  I use the spacy one all the time, and although it occasionally makes mistakes, it’s reliable enough that almost all of my parsing-related bugs come from code I put on top of it (or from ungrammatical input).

This makes me very curious why people don’t use them as components in deep learning architectures for text.  For neural machine translation, chatbots, etc., the popular models all use “attention” modules that emphasize certain parts of the (representation of the) input when producing each part of the output, or “self-attention,” which does a similar thing inside of the encoder and decoder (not between them).  This allows them to sort of learn how syntax works.  But everything is still tied to this idea of a sentence as a “sequence,” where you say “okay, I’m producing word #7, what information do I need to do that?”  
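Concretely, an attention module boils down to a similarity-weighted average of vectors. A minimal plain-Python sketch, with made-up toy vectors rather than any particular model's weights:

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(len(query))
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Toy example: three "word" positions; the query matches key #0 most
# strongly, so the output is pulled toward value #0.
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)
```

The decoder runs something like this once per output word, each time with a different query.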

This is a weird question, because “word #7 in a sentence” is not a natural category, and the relevant information depends on what word #7 is doing syntactically, among other things.  (N. B. there are fancier positional encodings than just word #, but they’re all positional.)

If you’ve written the six words “I, who enjoy tasty food, will” then the next word is going to be a verb with word #6 as its auxiliary and word #1 as its subject, and words #2-5 are only relevant for semantic context.  OTOH if you’ve written “When choosing a restaurant, I usually” then the next word will be a verb with word #5 as its subject, word #6 as an adverb, and words #1-4 are only relevant for semantic context.  Etc.
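To put numbers on that: with a hand-written toy annotation (not real parser output), the same syntactic role lands at different flat positions in the two examples:

```python
# Hand-annotated (word, role-relative-to-the-upcoming-verb) pairs for
# the two example sentences; punctuation dropped, positions 1-indexed.
sent_a = [("I", "subject"), ("who", "context"), ("enjoy", "context"),
          ("tasty", "context"), ("food", "context"), ("will", "auxiliary")]
sent_b = [("When", "context"), ("choosing", "context"), ("a", "context"),
          ("restaurant", "context"), ("I", "subject"), ("usually", "adverb")]

def role_positions(sent):
    """Map each syntactic role to the flat word position it occupies."""
    return {role: i + 1 for i, (word, role) in enumerate(sent)
            if role != "context"}

pos_a = role_positions(sent_a)  # subject at word #1
pos_b = role_positions(sent_b)  # subject at word #5
```

A purely positional encoding has to learn that "subject of the next verb" can show up anywhere; a tree-shaped representation would hand it over directly.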

It seems much more natural to have a decoder that makes a syntactic tree piece-by-piece, rather than a sequence, with the words in the tree ending up wherever they have to be.  Likewise, we could have the encoder take a syntactic tree as input, and perhaps use tree-like structures for the latent representation.  This means we don’t have to learn grammar on top of the rest of the problem domain, it ensures grammatical output, and it gives us representations of long-range dependencies that don’t degrade as we insert arbitrary numbers of words (relative clauses, etc.) in between.  Since we have good automatic parsers, we can automatically make trees to feed to the encoder, and we can automatically make training data for the decoder even if we don’t have a hand-parsed corpus for the problem domain.
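As a toy illustration of what "an encoder that takes a syntactic tree as input" could look like: this is just a sketch with made-up embeddings and an elementwise-average combiner, not a serious proposal like a Tree-LSTM (which would use learned, gated combinations).

```python
def encode(tree, embed):
    """tree = (word, [child trees]); returns a fixed-size vector.

    Each node's vector is its word's embedding folded together with its
    children's encodings, so combination depends on tree depth, not on
    flat position in the sentence.
    """
    word, children = tree
    vec = list(embed[word])
    for child in children:
        child_vec = encode(child, embed)
        # Combine parent-so-far and child by elementwise average.
        vec = [(a + b) / 2 for a, b in zip(vec, child_vec)]
    return vec

# Tiny made-up embeddings and a parse of "I will eat" rooted at the verb.
embed = {"I": [1.0, 0.0], "will": [0.0, 1.0], "eat": [1.0, 1.0]}
tree = ("eat", [("I", []), ("will", [])])
sentence_vec = encode(tree, embed)
```

Inserting a long relative clause under "I" would deepen that subtree without pushing "I" any further from the verb in the computation, which is the long-range-dependency point above.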

If I weren’t so busy I’d be trying this out myself (and probably running into all sorts of unexpected pitfalls, but that’s research for you).

#admittedly this is all only as good as the parser and the parser may well be the kind of model i’m arguing against

yep, spaCy is built on top of those sequence models :P it’s actually a really cool architecture that slightly gets away from the “everything is a sequence” thing: a sequence model/RNN produces a “summary” of the sentence, but those summary vectors then get used to make decisions about how to add each word to the tree structure you’re building. (And people put attention in here too of course.) But most of the processing still happens in the sequence model, with very generic rules to help ensure the tree ends up semi-grammatical.
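The loop being described is transition-based parsing: a decision function repeatedly picks actions that attach words into a tree. A stripped-down sketch, with a hard-coded toy oracle standing in for the neural decision model that spaCy actually uses:

```python
def parse(words, decide):
    """Build (head, dependent) arcs via shift/attach transitions."""
    stack, buffer = [], list(range(len(words)))
    arcs = []
    while buffer or len(stack) > 1:
        action = decide(stack, buffer)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFT":    # second-from-top depends on top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif action == "RIGHT":   # top depends on second-from-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# Toy oracle for "I eat food": attach "I" <- "eat" -> "food".
def toy_oracle(stack, buffer):
    if len(stack) >= 2 and stack[-2] == 0:   # "I" under "eat"
        return "LEFT"
    if len(stack) >= 2 and stack[-1] == 2:   # "food" under "eat"
        return "RIGHT"
    return "SHIFT"

arcs = parse(["I", "eat", "food"], toy_oracle)
```

In the real thing, `decide` is where the sequence model's summary vectors come in; the transition system itself is what keeps the output a well-formed tree.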

That said, using syntax features is really useful and a lot of neural models do actually still do it for more complicated tasks. Giant neural stacks just sound cooler and get more press. (also they work better right now but I feel like that’s at least partly a byproduct of the hype which shifts research focus, not the cause)

@disconcision said: any engagement with the object-level is apparently considered cheating

I mean, there’s reasons for that! Grammar is complicated and has lots of exceptions, and it’s different for every language. Good parsers for English are the results of insane amounts of effort, both exhaustive-search-via-grad-student for the best techniques and vast amounts of linguistic annotation. If that effort hasn’t been put into another language—say Hindi—then your parser sucks and will put an upper bound on the accuracy of anything you try to do with it.

Yeah, that all makes sense.  I guess what really frustrates me is the current state of affairs for people (like me) who want to use these technologies to do things.

The vast majority of ~fancy neural~ stuff out there, both in available pre-trained models and even papers I read, is entirely end-to-end.  There are exceptions, like using Inception features as input to some other thing, but most of the time (certainly in the neural NLP stuff I know about) it seems like we treat every task as completely distinct and train it end-to-end.

This is fine if you want to do exactly what some group of researchers have already done with a neural model (although if they haven’t made pre-trained weights available, training data may be a problem), but usually you aren’t, and having so little freedom to compose anything is weird and frustrating.  I wish there was more interest in neural components that consume or produce things other than “end” input and output.  Kinda feels like a world with no APIs or libraries where we have to rewrite all functionality from scratch to make one product, and then again from scratch to make the next.

(ETA: I guess pretrained word embeddings are one exception, so that’s nice)

I finally read the paper on the ELMo word embeddings, and while they aren’t nearly as fancy in architecture as the tree-based stuff I was proposing in the OP, they are v. much the kind of thing I was wanting in my second post here – composable intermediate representations for neural NLP, meant to be pre-trained and dropped in to various architectures and for various problems.
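Roughly, the difference between a static embedding table and a contextual one (the kind of thing ELMo provides) can be sketched like this. All vectors here are made up, and ELMo's actual mechanism is a deep bidirectional language model, not neighbor averaging:

```python
# Static table: one fixed vector per word type, as in word2vec-style
# embeddings, so "bank" gets the same vector in every sentence.
static = {"bank": [1.0, 1.0], "river": [0.0, 2.0], "loan": [2.0, 0.0]}

def contextual(word, neighbors):
    """Blend a word's static vector with the average of its neighbors'.

    A stand-in for context-sensitivity: the same word type comes out
    with different vectors in different sentences.
    """
    vec = list(static[word])
    if neighbors:
        avg = [sum(static[n][i] for n in neighbors) / len(neighbors)
               for i in range(len(vec))]
        vec = [(a + b) / 2 for a, b in zip(vec, avg)]
    return vec

bank_river = contextual("bank", ["river"])  # leans toward "river"
bank_loan = contextual("bank", ["loan"])    # leans toward "loan"
```

The drop-in part is that a task model consumes these vectors exactly where it would have consumed static embeddings, with the context work already done.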

This paper has been turning a lot of heads, because this particular drop-in component gives you amazing gains over state-of-the-art, in both performance and sample efficiency, across various tasks.

TBH, I’m not sure how much of this to chalk up to ELMo being “good” and how much to chalk up to existing NLP work being bad/suboptimal.  I’ve been doing a lot of stuff in this space lately and it was initially very weird to me how big the tasks were – like, is “coreference resolution” really one thing on a linguistic level?  And on the other end, all of these models have to learn basic syntax and everything from scratch, given only the training objectives for these fancy human-level tasks.  It makes sense that ELMo helps, because it uses a pre-trained generic language model to capture a lot of the basic stuff about a language and takes some of the load off of the (still overloaded) task-specific models.

But again, it feels kinda like rediscovering the concept of code re-use in a new domain – better than not having that concept, but a bit frustrating that we have to, uh, figure it out from scratch again rather than transferring it from experience in other domains.  (After all, *Yann LeCun voice* this is all still programming, just differentiable programming)

(via nostalgebraist)

Some things Esther has said tonight which have a #quotes-like quality:

“It was the first thing that came to my mind that would be bad for an ear to be made of.”

“There have to be some rationalists who like dill.”

Their fervent requests for visits were callously refused, however, and instead of enjoying family reunions, they found themselves prisoners at a pro-wrestling extravaganza.

disexplications:

I took another look at the OpenLibrary data dump I downloaded last year, and this time I wrote a script to automatically pull out books for certain subject ranges, filtering out the ones with incomplete information. The metadata in these entries is not very consistent, and I don’t even know how many of the entries have Dewey Decimal classifications at all, so I may be leaving out a lot of stuff.

I decided to fire up a neural network to generate philosophy books; i.e., everything in the Dewey Decimal 100s, excluding the psychology and paranormal ranges. It turns out that there are repeating patterns in the titles of these books that the RNN can easily learn, and a lot of the output is too plausible to be entertaining. But turn up the temperature a bit, and it produces some interesting and occasionally ominous titles:

  • Existence of man, 1964-1977 edited by Kenceth Steptandias. Continuum, 1993.
  • Philosophy of the tree by Choel Sowell. The Culture Publishers, 1975.
  • Fundamentals of conduct: the use of death by Jani Spencerta. Dodd, Vinese, 2001.
  • Unex-man: a case without human philosophy by Sujan Raconk. Verso, 1990.
  • Aesthetic issues in health care: an interpretation by M. Ernest Young. The newly Darielstein, 1882.
  • Origins of technology and deconstruction: inhoral guide for morals and negration by Edward C. Fissey. Dordit Communicational Book Onford on Present Co., 2009.
  • Animal explanation: a comparison of discourse by Lorri Fearden and John Abel and William P. Weinstein, Jr. Clarendon Press, 1991.
  • The Concord of nothingness by Jean Unitz. Leaves Books, 1994.
  • Revise time by Nigel Tuller Morr. I.C., 1991.
  • Thought in children: hermeneutics, and consolations by Galbert Gorster. Oxford University Press, 2012.
  • Acres of flowered to human undol-consciousness: essays and ethical story by Deborah Dewey. Young Fragne University, 1970.
  • On uncomments by Clare C. Chablich. Bhotters, 2009.
  • Breed matter and thinking: a guide to unformations by David A. Garfsch. Scudommariege Books, 1978.
  • Theories of the computational standards for continental philosophy by R. Jersoff and the Dreavend by Bruce B. Wilson. Harper & Row, 1969.
  • Negation of Marcus Ayer edited by William D. Wertheimer. MIT Press, 2001.
  • Whitehead’s moral issues in nursing consciousness by Christopher Burke and Michael C. Smith, Jr. Edenshine-Merrill University Press, 1992.
  • Resolving accountants edited by J.A. Harris. Princeton University Press, 2008.
  • Introduction to the late sun: the west and the perception of friendship by Dr. Hadluce; . A. Krishnamurti. Barshbandra Sasson, 2004.
  • What real world: a Phenomenology of managers by Kuven Rist. Enscort Book Company, 1925.
  • Essentialist approach to learning human human love edited by Jacob Baschild. Allyn and Lang Litfore Press, 1972.
  • Man and Iximan by Kevin Heidegger; translated by Susan Kreay. State University of New York Press, 1986.
  • Holes in medical ethics by Graham Kim. Oxford University Press, 1994.

myfairynuffstuff:

Igor Mosiychuk (b.1963) - Title unknown. Watercolour.

How To Make Everyone In Your Vicinity Secretly Fear And Despise You | Current Affairs →

memecucker:

decadent-trans-girl:

potential future developments of Taylorism. coming to a workplace near you? hope not.

Corporatized Struggle Sessions?

(via memecucker)