
nostalgebraist:

Some of the mystifying things Esther has yelled in her sleep since moving here (exactly one month ago):

“I want the moon to kill me!”

“I don’t want to die, I want to eat potatoes!”

“The moon will die!  The sun will die!”

(These were from three different nights)

From last night (both in very distraught voice):

“Why is the moon made of the moon?!”

“Why does God want the moon?!”

glassyvegetation:

“Hmm, you seem to have done some difficult task. This indicates that the task wasn’t actually difficult or noteworthy. Proof: suppose the task was an actual achievement. That would imply you’re somewhat competent, which is trivially false, QED” - Insane Bad Brains Troll Logic

(via moral-autism)

From Barack Obama’s ability to overcome obstacles in his election races, to the design of the iPhone, the stoic philosophy has helped its users become world-beaters.

fipindustries:

IT’S COMMISSION TIME!

So here’s the deal: turns out i need to eat, surprisingly, and as of right now my country is not providing many alternatives, so i’m opening up commissions.

I’m making a batch of five requests, given on a first come, first served basis. The prices are in USD.

If you want to request something, contact me through my e-mail: faqundo23@gmail.com

if you want to deposit money or just donate, this is my paypal link:

paypal.me/facundoavila

There we can discuss the request and payment. We will keep lines of communication through that mail; please put “commission request” in the subject when sending me a mail so i know it’s you and not some bot wanting to sell me ugandan viagra or something.

You can pay me upfront or i can send you a rough sketch of the work and ask you to pay me half the price, then you can pay the rest when the piece is finished.

While i’m willing to do pretty much anything you ask, or at the very least give it a shot, i would recommend keeping in mind my art style and taking into account that there are certain things i just don’t specialize in, such as: Furry, Mechas, Porn, etc. i’m not saying i’m not willing to give it a shot; all i’m saying is, results may vary. If you want a better assessment of my ballpark, go to my art blog:

https://unbeknownsttomen.tumblr.com/

i will only take one NSFW request, again, first come first served. If you want a steamy, salacious piece of art and find that it’s already been taken you can PM me and i can reserve you a slot for the next time i open commissions but i’m warning you, it may take a while.

I want to clarify that these drawings don’t include backgrounds; backgrounds add $10 to the price. i can do simple props at no extra cost per character, such as a chair to sit in, a bottle to drink from, or a computer and a desk to work at. The limit to how many things you can add beyond the character will be subject to my own judgement.

if you want to request something but you don’t find it specified here, you can send me an e-mail and we can discuss price and such there.

PAYPAL: paypal.me/facundoavila

(via fipindustries)

I don’t have the energy to properly effortpost about it right now, but a little while ago I got around to reading the paper “Opening the black box of Deep Neural Networks via Information” and it blew my mind a bit.

The overall picture it gave me is that deep neural nets, when trained with SGD, learn by first memorizing a lookup table for the training data, and then forgetting everything about the table keys that isn’t necessary to uniquely identify the table values.

Early on, the last layer encodes the requested value but also the whole key, like a dictionary {k: (k, v)}.  This then gets lossier and lossier for the first part of the tuple without getting lossier for the second, so the end result is like {k: (irrelevant noise, v)}.  At the very end of the network, the fully-connected-softmax or whatever scoops up the “v” part and ignores the “irrelevant noise” part.  Since the last layer now no longer has the information necessary to reconstruct the key, and since this loss of information happened over a long (”deep”) series of lossy steps rather than in a single lossy step of table lookup, we can hope that it’s now learned something meaningful and generalizable about the relationship of these keys to their values.
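The dictionary analogy above can be sketched as a toy simulation (an illustration of the analogy only, not the paper’s actual experiment — the layers, noise levels, and bit representation here are invented for the sake of the picture): each successive “layer” keeps the value part of the table entry intact while replacing more and more of the key’s bits with noise.

```python
import random

random.seed(0)

# Each "layer" keeps the value intact but replaces a growing fraction
# of the key's bits with random noise -- the {k: (k, v)} entry decaying
# toward {k: (irrelevant noise, v)}.

def corrupt_key(key_bits, noise_level):
    """Replace each key bit with a random bit with probability noise_level."""
    return [random.randint(0, 1) if random.random() < noise_level else b
            for b in key_bits]

key = [1, 0, 1, 1, 0, 0, 1, 0]   # the input x (table key)
value = 1                        # the label y (table value)

# Early "layer": representation is (key, value), the full lookup-table entry.
# Deeper "layers": the key part decays; the value part survives untouched.
for depth, noise in enumerate([0.0, 0.25, 0.5, 0.75, 1.0]):
    representation = (corrupt_key(key, noise), value)
    print(f"layer {depth}: {representation}")
```

At the final step the key part is pure noise, so nothing downstream could reconstruct the original key from it — which is the sense in which the last layer has “forgotten” everything about the key except what identifies the value.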

So, during training, they overfit first, and then regularize themselves.  In itself, not too shocking.

But here’s the kicker: the regularization part happens specifically because of SGD.  Once the gradients are small enough, they’re basically applying a random diffusion to the weights, and so we end up with a neural net that is “as random/generic as possible,” conditional on getting the answers right on the training data.
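The diffusion claim can be made concrete with a minimal sketch (my own toy model, assuming the limiting case where the mean gradient is exactly zero and each minibatch gradient is pure noise): once that holds, SGD updates are an unbiased random walk in weight space, and the weights’ squared displacement grows linearly with the number of steps, as diffusion predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of the "diffusion phase": at a flat minimum the mean gradient
# over the training set is ~0, so each minibatch gradient is mostly noise
# and SGD behaves like a random walk in weight space.

lr = 0.1
w = np.zeros(1000)               # many weights, all starting at the minimum
steps = 500
for _ in range(steps):
    noise_grad = rng.normal(0.0, 1.0, size=w.shape)  # minibatch gradient = pure noise
    w -= lr * noise_grad

# For a random walk, per-coordinate squared displacement grows like
# steps * lr**2, i.e. about 5.0 on average here.
mean_sq = np.mean(w**2)
print(mean_sq)
```

The linear-in-steps growth is what distinguishes diffusion from gradient descent proper, where displacement is driven by the (here absent) mean gradient.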

The authors argue that network depth helps here, because it speeds up the diffusion process.  Thus, the great success people have had with deep networks trained by SGD is attributable to “depth + SGD” being a synergistic combination, not to deep networks being an inherently great space of models with SGD merely “good enough” as a way to descend loss functions in that space.

The main limitation of this paper is its reliance on a single invented data set for its results.  I get the sense they did this to make it possible to estimate mutual information well enough to get anywhere.  They do have a github repo where you can do the same analysis in other cases, which is neat although I haven’t tried it.

I saw a copy of some university magazine whose cover featured the phrase “Envisioning Inclusive Excellence,” and my brain immediately spoonerized this to “Envisioning Exclusive Incellence”

If you can stand it, the explosive wedding ceremony between tart and sweet is unique in the fruit world.

artist-ernst:

Illustration to “A Week of Kindness”, 1934, Max Ernst

Size: 13x18 cm

Medium: collage, paper

(via shabbytigers)

Perhaps one day, all the sage grouse will hear out on the lek is the quiet whine of a fembot turning its head.