Probably a long shot, but do I know anyone who’s going to NeurIPS this year?

The idea of the Product Space can be conceptualized in the following manner: consider a product to be a tree, and the collection of all products to be a forest. A country consists of a set of firms—in this analogy, monkeys—which exploit products, or here, live in the trees.
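The forest/monkey analogy can be made concrete as a bipartite mapping from countries to the set of products their firms occupy. A minimal sketch (names and numbers here are hypothetical, not from any real Product Space dataset):

```python
# Hypothetical sketch of the forest/monkey analogy: each country's
# firms ("monkeys") occupy a subset of all products ("trees").
countries = {
    "A": {"apples", "textiles"},
    "B": {"microchips", "copper", "textiles"},
}

def diversity(country):
    """Number of distinct products a country's firms exploit."""
    return len(countries[country])

def ubiquity(product):
    """Number of countries whose firms occupy this product."""
    return sum(product in trees for trees in countries.values())

print(diversity("B"), ubiquity("textiles"))
```

In the real Product Space literature, diversity and ubiquity are the starting point for measuring how "close" two trees are, i.e. how easily monkeys jump between them.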

lostgunch asked:

“I cannot! I cannot begin to . . . you mean this is your desire? To exchange sign-trains in the abominable realm, in lieu of arbitration? I was ready to settle our differences . . . our differences, sick, poison, is this how we must talk . . . I was ready, oh this abominable interface, I was ready, please stop . . ." --me when I explain something on slack at work and the other person suggests having a meeting about it instead

lmao

uploadedyudkowsky:

The event will be on Thursday March 4th from 7pm to late, at Kiener Platz, 101 Curry Pl.

Come early - the space is already getting full.

The venue: This meeting is hosted by the Knight Program of the MIT Library, number 93 Granary St, Cambridge.

The schedule:

7:00pm Gates open (instructions in The Recursive Economy)

7:30pm Welcome: Eliezer Yudkowsky (https://www.youtube.com/channel/UCk3T0PYFJuHObX0qQh6FXJXw)

8:00pm Introductions (https://www.youtube.com/watch?v=TmUE5aVFsXA)

8:05pm Segments (https://www.youtube.com/watch?v=S6yQWQJy8Hk)

8:30pm MKULTRA (https://www.youtube.com/watch?v=u2EYVQTKsMA)

9:00pm The ESRB Teen ratings (https://www.youtube.com/watch?v=km50XbyWJ4A)

9:05pm The Entity (https://www.youtube.com/watch?v=8U7ZPoWYN2M)

9:15pm The Truth (https://www.youtube.com/watch?v=3ZYgs06sLNw)

Add your name at this link if you plan to attend: https://secure.meetupstatic.com/photos/event/3/3/2/9/600_433004.jpeg

We hope to see you there!

uploadedyudkowsky:

If you have a cat that wears clothes, that’s a lot more complicated for your brain to understand. (Can’t we all just get along?) Still, humans have a lot of emotions around colors, shapes, and hues. Can a cat also have “anger”? I don’t think so.

A cat has not much to fear from storms, or accidents. Cats are not involved in hurricanes, and earthquakes cause primarily discomfort, and not death. Cats don’t usually die of smoke inhalation - but they get cold when the temperature goes below 85 degrees.

Today I want to talk about cats that wear clothes.

Note: Unless otherwise specified, all cat pictures on this blog are real cats, from a Pinterest board called “Funny Fancy Animal Clothes”.

Cats are famous for their sense of style. You can buy a reproduction coat, and then one day it catches on fire, and then the coat catches on fire. It’s called style, and it’s more fun to wear something that catches on fire than something that doesn’t. But, really, it’s just a general quirk of evolutionary psychology, that we assign honorific titles to things. Why would it be any the less surprising if your dog caught on fire?

Humans have three levels of cognitive organization:

The machinery of reasoning - reasoning from demonstrable evidence or modeled beliefs.

The machinery of categorization - objects with similar qualities, as a cluster, from a larger group; animals, for example, with a common ancestry, from their domestication of humanity.

The machinery of modification - a trait, or set of traits, which is assigned a special honorific title, such as “loving”.

The machinery of praise.

Humans in the former state don’t use “own names” for things, for they are inclined to classify as existing in a particular group, or for logical convenience, they can’t switch modes at will. But our mind works on high-level symbols, which has been all the past hunter-gatherers ever thought of up until now. Our minds know well enough to avoid using high-level mental content, because (1) it is so cognitively easy to put in the high-level patterns by checking only the surface of a surface object; and (2) it’s so complicated to think about any cats in general, without context, that only high-level patterns take a mind long enough to think about.

Take a human for example. We have words to describe “humans” and “cat” (so far as our own minds are concerned, for the purpose of emphasizing the first concept). But we don’t use any labels for cats, except a short little one for cats that we have met before. Why? Because humans have a natural tendency to think in terms of persons, as a convenient shorthand for personifying concepts that we will describe on subsequent occasions. If you know a standard name for a cat, it’s easier to see a cat; but once the cat is named, it doesn’t take much time to think of a category for the cat, to avoid immediately saying “This” or “That”, and one takes the word “cat” as a descriptive label for a new cat.

So, when two human subjects with very different experiences come to a psychic medium in “blind”, describing with words, their concurrent, they are likely to describe cats in a “cat” category, “human” category, or even “cat” as a separate label. The medium, I suspect, records them both, recording, on each occasion, the “sounds” that the two subjects “hear” from each “cat” category. And, as the idea of “human” suggests, the two cats seem to occupy different ontologically basic categories, as humans can describe someone as “human” even when we are missing the “person’s” body parts.

What do your shoes make noise like? Cats.

What are some things that make noise like cats? Cats.

And there’s much more for which we can find no label at all. Birds make no noise. But we do have labels for them - we name them, “bird” and so on - so we can learn what sounds like a “bird”. There’s a whole brain devoted to labeling, learning, and language, for things we haven’t encountered before.

As a result, we tend to think of everything as “human” or “cat” - especially new experiences that we don’t know how to categorize - and these labels stick around in our minds.

solardrifter:

DNA² OVA: Dokokade Nakushita Aitsuno Aitsu - D・N・A2~何処かで失くしたあいつのアイツ~ (1995)

(via posthumanwanderings)

There’s something that irks me whenever I read or hear about self-driving car R&D.  It’s like … 

If you know anything at all about machine learning, you’ll react to the idea of self-driving cars with “wow, that sounds really hard.”

(It’s the perfect storm of factors that make an applied ML problem difficult: data modalities (radar, lidar) that haven’t gotten much attention outside of this specific application, iterated decision-making whose errors compound and take you off the training manifold, injury or death as the possible cost of a screwup during almost any given moment of operation, defined by the ability to operate for a long time w/o intervention so you can’t keep handing control off to the user and taking it back again, environment full of interacting intelligent agents whose complex emergent behaviors will shift unpredictably in reaction to your product, etc etc etc.  Plus all the normal engineering challenges that come with a hardware product)
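The "errors compound and take you off the training manifold" point is worth seeing numerically. A toy sketch (my own illustration, nothing to do with any actual self-driving stack): if each control step carries a small error that feeds into the next state, and off-manifold states make the next error slightly worse, deviation grows geometrically with horizon length rather than averaging out like i.i.d. one-shot errors would.

```python
def rollout(steps, eps=0.01, gain=1.02):
    """Deviation after `steps` of iterated control.

    eps: per-step error injected at every step.
    gain: >1 models the feedback where being slightly off-manifold
          makes the next step's error a bit larger.
    """
    deviation = 0.0
    for _ in range(steps):
        deviation = deviation * gain + eps
    return deviation

# Short horizons look fine; long horizons blow up far faster
# than linearly in the number of steps.
print(rollout(10), rollout(200))
```

With these (made-up) constants, 200 steps yields well over 200x the deviation of 10 steps, which is the basic reason you can't validate an iterated-decision system by measuring per-step accuracy alone.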

This isn’t a reason for no one to be working on the problem.  In fact it’s a reason for at least one well-funded org to look at it, for the sake of sussing out the problem, determining whether maybe it’s secretly easy in unexpected ways.  At least, that would have been a good argument in 2014.

But it doesn’t sound like something where the core technology is already there, and we can just apply it.  It sounds (and sounded 5 years ago) like something where the core technology probably isn’t there, and at best you could hack together something that sort of basically works using a lot of money and brute-force case-covering (“let’s collect data on all frequent situations under all possible weather conditions”), and the thing would kind of suck, and would be a black box that would seem okay until suddenly it hits something too weird and goes haywire.

It’s good that someone looked into it.  As far as I can tell, what has been found is that the problem isn’t unexpectedly easy, that it’s exactly as hard as it seemed, that we don’t have a core enabling technology making it fundamentally feasible.  That at best we can spam billions of dollars into brute forcing the technology to do something it can’t do smoothly and natively.

We learned that, and in the course of learning it “we” apparently decided that now is the time to do self-driving cars, and now there’s this whole competitive industry working on this problem, employing thousands upon thousands of PhDs and eating billions of dollars, and there seems to be no way to societally say “okay, it was good that we tried, and we’ve learned we should wait.”

Instead, the headlines still look like this:

image

And presumably they will still look like that for N more years and $M billion more, pushing against the current, forcing something to happen that doesn’t want to happen, until finally we “have” “fully” “self-driving” “cars,” in whatever half-broken state the weird incentive structure of the 2010s hype cycle – in conjunction with the concurrent lack of real enabling technology – is able, at long last, to cough up.

Or maybe we’ll actually have self-driving cars that work, because the time will be right, because of one or more real breakthroughs, and if so those might be causally downstream from all this money and hype but they’ll also render a lot of the prior work irrelevant.  And maybe those breakthroughs, not “releasing self-driving cars in 2017,” are what we should have been chasing.  They’re what we should be chasing now, if anything.  Is society well-organized enough to do that, when it’s so clearly the right thing?

astrobozo:

mgsotacon:

image

really big fan of this piece

Two rats sharing lunch

Two rats sharing lunch

The vermin

The vermin

Enthusiastically munch

@birdblogwhichisforbirds

(via maxknightley)

His language, he says, “hath at least ten several synonyms” for every word, as well as “a wonderful facility – in making anagrams”. It outdoes every language known since it has 11 genders, seven moods, ten cases and “four voices, although it was never heard that ever any language had above three”. The names of soldiers express their rank, and the names of stars contain in the syllables their latitude and longitude. It had 35 letters – he would rewrite the alphabet too – and “words expressive of herbs represent unto us with what degree of cold, moisture, heat or dryness they are qualified”.