
birdblogwhichisforbirds:

nostalgebraist:

nostalgebraist:

Some of the mystifying things Esther has yelled in her sleep since moving here (exactly one month ago):

“I want the moon to kill me!”

“I don’t want to die, I want to eat potatoes!”

“The moon will die!  The sun will die!”

(These were from three different nights)

From last night (both in very distraught voice):

“Why is the moon made of the moon?!”

“Why does God want the moon?!”

Other things I have reportedly said in my sleep:

“I support keeping you alive.”

“God is made of cheese”

“God wants us all dead.”

“I don’t fucking care what you do to me, dad. As long as you don’t, like, paralyze me or kill me, I just don’t care.”

(via birdblogwhichisforbirds)

deleonism:

thyrell:

deleonism:

thyrell:

i think yall are just pretending to have an excess of black bile so the doctors will give you more leeches

leeches are for treating an excess of blood. an excess of black bile is treated with mercury-based laxatives. smfh

are you questioning my craft

yes, i am. everyone knows imbalances of different humours are solved with different treatments. leeches for blood, laxatives for black bile, emetics for yellow bile, apophlegmatisms for phlegm. it’s simple and honestly i’m ashamed that you’re disseminating misinformation like this

(via averyterrible)

• Gateshead Council DOES NOT use 5G technology in any of its street lights, or in any other capacity. It has never done so.

• The street lights in Gateshead will not give you cancer.

• The street lights will not induce miscarriages in pregnant women, or cause insomnia, or nosebleeds, and they are not killing all the birds and insects.

• Gateshead Council is NOT carrying out secret government trials in 5G technology via our street lights.

napoleonchingon:

crimesagainsthughsmanatees:

Comic from 8-31-2014

PATREON

But also, it’s worth saying that this but unironically can be good life advice. So last year I was working at a job where I had an average of one day off every two weeks. And I would be waiting for that one day, hoping to finally do a fun thing, go exploring, whatever.

But to the surprise of no one reading this, it’s the days off that were actually the worst. I would sleep for 12 hours, wake up groggy, eat some cereal, go back into bed. Then it’d be 3 pm, I’d have done absolutely nothing, and I’d know that I wasted my only chance for two weeks. I would hate myself intensely, and just sit there staring at a wall being abjectly miserable until it got dark.

This year, I am still working at this job. The days off situation is a little bit better, but not by much. But I allow myself to be lazy. I tell myself that after two weeks of work, it’s not reasonable to be ready for an adventure: you need a day to relax and vegetate, and get little errands done. It’s fine to catch up on sleep.

And in the end, I actually end up doing somewhat more, because I spend less time and energy hating myself for what I’ve become, but that’s not even the point. The point is that not feeling bad for being lazy is the thing that makes my current life tolerable.

(via sungodsevenoclock)

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

ancientouroboros:

Entirely for @hellmandraws‘ amusement, and to defend America from the charge of being “weakass babies” I’m going to liveblog eating licorice candy.

[image]

okay first of all, the packaging. there’s a cartoon monkey ecstatically making love to a candy monkey. Perhaps an indicator of the orgasmic bliss I’m about to experience. 12/10. my hopes, like the people who designed this bag, are obviously very high

[image]

the candy looks like rocks and not jaunty little monkeys. huge disappointment. I had to recreate stonehenge to rally my flagging spirits. 2/10

First taste: wow this is salty! I think I actually like this. I love anise so I’m pretty sure this is going to be a trip to flavortown. 8/10 me rn:

[image]

OMG THE SALT WORE OFF IT’S SO MUCH WORSE THAN I EVER IMAGINED.

IT’S LIKE EATING A SHOE.

IS THIS CANDY?

IS THIS WHAT MAKES SCANDINAVIANS SO POWERFUL?

[image]

I’m chewing and it won’t go away

it’s stuck to my teeth, I’ll be tasting this forever. shards of this will be discovered in my teeth when my body is excavated from an archeological dig tens of thousands of years in the future. somehow the smell has traveled up through my nasal cavity and all I can sense, hear, or experience is licorice. the world is an empty vessel filled with remorse and the cloying smell of decay. I’m at the nadir of my existence. -100/12

somehow, here, standing at the edge of eternity, the darkness that consumed me birthed me anew. I’m not only ready for another candy, I’m eager. I can, nay I must, immediately eat another

oh wow it’s salty! 8/10

this time I’m ready for the salt to wear off. 

I WAS NOT READY

the flavor this time was different, and somehow so much worse. instead of the leather of a shoe, it was like eating an entire shoe factory. the industrial rubber of the forklift tires, a hint of diesel as secretive as a volkswagen scandal, a soupçon of hot tin roof, the sweat of non-unionized labor, and a pervasive sense that while we’re all in this together, some of us are more all in this than others. 1/10 throw off your shackles, taste buds

I can’t believe it but I’m into this. I like this. shocked and disgusted with myself, I shove 2 more into my mouth concurrently.

conclusion: I’ve become addicted to licorice candy. what is in this. how do I get more. I hate this? I hate this. I willingly admit I’m a weakass baby. 100/10 will cycle through destruction and rebirth willingly and with open eyes, albeit with teeth that will never again be clean.

(via injygo)

Feel the relief and strength of the next step in proprietary nascent iodine, developed using our Thermodynamic Pressure Sensitive High Energy Sound Pulse Nano-Emulsion Technology that allows for a highly unique and powerful nascent iodine that is both concentrated, and free of unwanted additives and genetically modified ingredients to make sure that your organic nascent iodine supplements are the best for your body. 

i am such a good husband and it is definitely me typing this, even though i am simultaneously in another room getting food for my wife bc she has a cold and is hungry but also wants to be in a blanket bc she is a fusspot.

also i learned the word fusspot today and i spent like, ten minutes laughing at it and it was very cute of me

definitely me here, makin sure you got all the facts.

furioustimemachinebarbarian asked: MIRI is in a tough position though. No one really has much of an idea of what a global AI would look like, and our current machine learning approaches that beat human performance aren't scary in the same way. For instance, when they introduced corrigibility, Orseau and Armstrong very quickly showed that Q learners were already corrigible. If you get money earmarked for safety what else can you possibly do besides argue about properties the unknown global AI might have?

I think they used to be doing more of that (with stuff like tiling), and these days the focus is not on “what would the AI be like” but “what final theory of ideal reasoning should we use, when talking about how an AI approximates an ideal reasoning theory.” This is not specifically about AI anymore, and is more like philosophy or probability foundations research, which would have wider implications if progress were made.

They are also pretty upfront about not expecting the AI to look especially like these ideals (there is a Soares post somewhere about this) — it’s more about describing impossible perfection so you can specify how and to what extent each real thing, including any AI or you or me, fails to be perfect.

I guess if I were working on this I might try to come up with a theory of actual (resource bounded) intelligence first, so I could then see what kind of safety measures could in principle be strapped on to something “intelligent”? I’m not optimistic about doing this, but it seems like my version of the thing they are trying to do, which I am even less optimistic about.

the perverse paradise of HRAD

I’ve been reading some MIRI / Agent Foundations stuff over the last few days, and I’ve gotten the following impression – probably nothing new here, but perhaps this will be in slightly sharper relief than earlier posts I’ve made on the same topic.

MIRI’s “Highly Reliable Agent Design” (HRAD) research program is founded on the idea that we need to state impractical (or non-practical) ideals about reasoning first, so we know what real AI programs should be aiming for, before we go on to judge those real programs.  This sounds reasonable on the face of it, but has led this research into a cul-de-sac: it now consists mostly of technical work on the problems with its own chosen ideals, problems which largely arise from the very idealizations that separate the ideals from practical programs.

This research is fundamentally about the weirdness that can arise when you try to talk about reasoning or computation while being deliberately casual about requiring these processes to do useful work in any specifiable situation, as opposed to in an abstract paradise where they are allowed resources that are, in one or more ways, infinite or impossible.  This casualness gives rise to some unique, trippy problems that can only be problematic in the abstract paradise, not “in practice,” but nonetheless provide one with plenty of hard work if one wants it.  Moreover, while creating these “theoretical-only” problems, the vast resources of the abstract paradise solve all of the usual problems of real-world inference by themselves.  One is left with a perverse focus on only the problems that do not arise in practice, like a textbook on “the mathematics of classical mechanics” which describes the Weierstrass function but never defines the derivative.

That post about HRAD that I approvingly linked provides some good outside view arguments, but I think this more inside-view stuff is equally damning if not more so.

Specifically, the ideals in HRAD tend to suspend requirements like:

  1. Your process must be computable.
  2. Your process must finish in some finite (even if impractically large) amount of time.
  3. Your process must produce usable results after seeing only some finite (even if impractically large) quantity of data.

Suspending these requirements tends to enable brute force search over very large spaces of programs or strategies.  Solomonoff suspends 1+2 (but not 3) and searches over all programs to explain a finite data sample.  Logical inductors suspend 3 and sort of 2 (but not 1) and search over all poly-time programs, obtaining useless results (with no e.g. coherence guarantees) at finite times and useful results only as the data set becomes infinite.
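The Solomonoff-style move can be caricatured in a few lines of code. To be clear, this sketch is entirely my own toy and not anything from MIRI: a hand-picked finite hypothesis class stands in for “all programs” (the real enumeration is uncomputable), each hypothesis gets prior weight 2^-length, and prediction is posterior-weighted voting among the hypotheses still consistent with the data:

```python
from fractions import Fraction

# A finite caricature of Solomonoff induction. The real thing enumerates
# *all* programs (suspending requirements 1 and 2 above); here we
# brute-force a tiny hand-picked hypothesis class instead.
# Each hypothesis is (description_length_in_bits, bit-generator function).
HYPOTHESES = [
    (2, lambda n: 0),                        # "always 0"
    (2, lambda n: 1),                        # "always 1"
    (3, lambda n: n % 2),                    # "alternate 0,1,0,1,..."
    (3, lambda n: (n + 1) % 2),              # "alternate 1,0,1,0,..."
    (5, lambda n: 1 if n % 3 == 2 else 0),   # "every third bit is 1"
]

def predict_next(observed):
    """Posterior-weighted probability that the next bit is 1.

    Prior weight of a hypothesis is 2^-length (shorter = simpler = heavier),
    and hypotheses inconsistent with the data are discarded -- the "search
    over all programs that explain a finite data sample," in miniature.
    """
    weights = {0: Fraction(0), 1: Fraction(0)}
    for length, h in HYPOTHESES:
        if all(h(i) == bit for i, bit in enumerate(observed)):
            weights[h(len(observed))] += Fraction(1, 2 ** length)
    total = weights[0] + weights[1]
    if total == 0:
        return Fraction(1, 2)  # no surviving hypothesis: fall back to uniform
    return weights[1] / total

print(predict_next([0, 1, 0, 1]))  # alternating data: only the n % 2 rule survives
```

With finiteness restored, the “search over all programs” collapses to a loop over five lambdas, and everything distinctive about the ideal lives precisely in the part we can’t write down.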

Even at its best, there is a fundamental vacuity to this sort of thing.  When you can consult all programs, or all practical programs, all of the reasoning has been outsourced and you are left playing a sort of managerial role over reasoners.  These methods do encode a few ideas about Occam’s Razor or logical uncertainty, but otherwise feel like being told “the best strategy is to find the best strategy, and then do what it says.”

In particular, I personally suspect that any sensible theory of “intelligence” or “good reasoning” will be a theory of grappling with resource constraints.  It’s only due to resource constraints that one might do “smart” things like constructing concepts, i.e. picking out certain clusters in your observations that make for especially good nodes in simplified causal models of the world (or something like that).  The HRAD ideals either outsource work like this to their program set (as in logical inductors), or do away with it entirely.  The latter is true of Solomonoff, which (if the universe is computable) can access an exact copy of the universe and simply crib its predictions from the actual future.
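As a concrete illustration of “picking out certain clusters in your observations” under resource constraints — my own hypothetical example, not anything from the post or from MIRI — here is a learner that compresses twelve raw observations into two “concept” centroids via 1-D k-means:

```python
import random

# Toy illustration of concept formation as compression: a resource-bounded
# learner summarizes its observations with a handful of cluster centers
# rather than storing every data point.
def kmeans_1d(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each observation to its nearest "concept".
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each concept to the mean of the observations it covers.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two noisy clumps of observations; the learner keeps 2 numbers, not 12.
data = [1.0, 1.1, 0.9, 1.2, 0.8, 1.0, 9.0, 9.1, 8.9, 9.2, 8.8, 9.0]
print(kmeans_1d(data))  # two centers, one near each clump
```

A Solomonoff-style reasoner never needs to do this kind of compression, because it never runs out of room to store (or simulate) everything.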

That last point leads us into the other side of the coin, that the HRAD ideals introduce “only-in-theory” problems.  The paradise can provide so many resources that the plenitude begins to defeat itself in bizarre ways; if you have access to entire universes, their inhabitants may be able to infer that you have this access, and use it to mess with you.  This is, of course, only a problem if the ideal is achievable in reality, which it isn’t, which would then seem to suggest it isn’t a problem after all, even for the ideal – but regardless of how these things cash out, we are still worrying over a problem that arises not just from “impractical assumptions,” but from a deliberate suspension of reality itself.

That is, these problems are unique to situations which literally cannot arise – to actual, rather than potential, infinities.  The goal of HRAD is to derive an ideal theory assuming the actual infinities, then relax to approximate results in the potential infinity case (i.e. given finite, but extremely or arbitrarily large, resources).  But the actual infinities create problems that the potential infinities do not, and so the research time allocated to HRAD is currently dedicated to solving hard problems in an impossible world, for the sake of transferring the end results to easier problems in possible worlds.  This strikes me as, well … perverse.

artist-courbet:

Chateau de Chillon, Gustave Courbet

Medium: oil on canvas

(via artist-savrasov)