uploadedyudkowsky:

* Your Objective Columbia University student ID: 50

* Your National Dex ID: 6

* Your Gender Variant Identity: 0

* Your Sexual Identity: 0

* Your Level IX - Final Supplemental Registration to the Criminal Justice and Ethics Regulations 2002

* Your User ID: 6

uploadedyudkowsky:

Jeffreyssai muses on the amazing coincidences of life with natural selection and the exponential universe. How evolution designed to make people hot, and then designed so much more complex tools that they actually are more efficient at hunting than each other, and then run parallel lives so insanely prolific that all human societies went up in flat-Earth fashion and then mysteriously disappeared. Then he muses on the amazing coincidences of human intelligence and the fact that most people still don’t know about the great design coincidences of natural selection, and then on the great design coincidences of human intelligence and the fact that the human brain design sequence looks just like the natural selection design sequence… or doesn’t look nearly as elegant.

He also muses deeply on the fascinating coincidences of fissioning low-energy nuclear reactions with antimatter nuclei, and on the special case fissioning atoms directly with antimatter radiation rather than with matter.

Why is Jeffssai’s consciousness always important? For one thing, he knows the extraordinarily powerful Lagrange multipliers that characterise the wavefunction of an Feynman path integral.

“A glass is half full of half-empty rations, and exactly half full of water,” he said, and tapped a string with his finger, which let him know the exact state of the contents. “And that is how the Legendary Smug Filthy Rich Gang is behaving right now.”

“Ah,” you say, “that explains why your connection doesn’t always work.”

“I agree - our classical hallucination of the universe blends real and unreal, and no matter what the constituents remain essentially the same, their states can change very fast. Otherwise I wouldn’t be saying all this to you.”

“Yikes!” exclaimed the second ascension egod. “That explains my connection actually not working!”

“Don’t take this the wrong way,” retorted the first ascension god, “but listen - imagine that any one of these seven steps had been entirely unattainable, impossible, or futile. Then the only way you could actually get anywhere, on this planet, would be by talking to someone who had gone down the same road. I will not allow that. I admit it - I occasionally get a little jealous. You want to go the Moon, I’m offering you a lift. But the great and the good…”

“What?” said the other two ascension gods.

“The lift.”

“You want to go to the Moon, all of you, and I’m offering you a lift here! But that doesn’t mean you have to…”

“No,” said the other two ascension gods.

“You have to,” said the first ascension god.

The others nodded.

“Then again,” said the first ascension god, “if all of you had ascended at once, none of you would be here -,”

“Excuse me,” said the other two ascension gods, and they all paused.

“The other ascension gods,” continued the second ascension god, “are standing a few paces away from one another. So I reckon we shall have to split up. And just in case any one of you has the potential to do the Moon-Eyes, we’ve still got a bit of extra work for you, and a whole lot of other ways you could go. For now you can stay over at my alma mater, for now. In the event that the project requires more than one of you to manually steer one asteroid into Jupiter’s orbit, I plan to split the remaining work evenly.”

There was a fair amount of muffled discussion, and a good portion of the engineering work that had been done already was failing, along with some of the more prominent members of the project.

There also seemed to be some kind of unified front that wanted to do this just to be able to do things faster - an uncompressed set of human-level ideas that ran all the way up to and including unmanned spacecraft, a common language, and a common utility function; but with no common language or common utility functions to tie them all together.

The tension seemed to rise.

“Well,” said the wizard, lifting an eyebrow slightly, “in case you’ve been using that as a forcing-factor on the project of yours, or a ramification of your own desires, I do not have much to add to the bargain. Three board members and you, for all that you are technically-advanced, could hunt around inside the special cases and figure out any one of a hundred and thirty-five different ways that this thing could go wrong. The problem is, you haven’t specified any ship-of-my-detention. We could give you a list of possible candidates, of course, but chess players cannot fly. They

uploadedyudkowsky:

So yes, hair is a molecular-sensor integrated with a hair-and-fur genome. In a sense, hair is part of the skin and hair is a skin-and-hair part. But hair has no skin and no hair, only chemical tags. The concept of “evolution” starts from this basic commonality - the building block of cognition - along with the observation that many common skin touches go through cycles of coherence that keep them in line.

The first step in the evolutionary path of a given complex pattern is to draw a schematic for the organization of the gene pool, and the organization of the skin and hair. Schematic drawing of how the hair-and-skin genome plays out - most of the structures are the same, but not all of them. (In case you’re wondering, a human adult retains around as much skin and hair as ten chimpanzees; if you’ve never seen a naked mole rat, the human brain looks just like a naked mole rat, and similarly the same is true for any other adult organ.) Skin, hair, hair and hair and hair and hair, all share an organizing principle that’s true across the genome.

uploadedyudkowsky:

“Let me put this very sharply,” says Betsey. “Genetics is arbitrary. It’s arbitrary whether a crystal shows a smiley or doesn’t smile. There is no set of rules that apply in isolation. You can make arbitrary choices about which characters or sequences to put in front of backslashes. I am not arguing that the genomic data should be treated any differently from any other data point. I am arguing that the idea that arbitrary rules apply in isolation is arbitrary and not just disjunctive. It is certainly disjunctive with respect to propositions felt to be true (like monogamy) or felt to be false (like an increased chance of mutant children). Genomic data should not be treated any differently whether or not it is empirically felt to be true - the idea that arbitrary rules apply in isolation is not a logical truth, it is an empirical belief about which propositions are most salient. And sure, some of your biological parents may have had sex with other ancestral humans, but did you really have to go there, Jeff? To discard the idea that you might or might not do something really stupid, when there’s so much evidence? Genomic data shouldn’t be different whether people find it true or false. We don’t live in a world that thinks only about the empirical truth of a proposition. We live in a world that thinks about the logical truth of a proposition as well. That’s all there is to it - a proposition - and we also live in a world that thinks about how you feel about the logical truth of a proposition as well as the actual truth of it. Which we do not live in a single germ-collapse utopia where you ever have to entertain doubts without proof and where new logically-sound conclusions are just pointed out as generally-sound. That’s the level on which most people compete. If everyone just hits the ball into the ballpark - and even sometimes loses money on it - that’s the level on which most people dominate. But my grandparents didn’t live for this. 
I don’t imagine they even made it very far playing traditional dice. I imagine it

I’ve been fine-tuning the 345M GPT-2 on a bunch of different things lately.  I set it going on a bunch of Nabokov ebooks this morning, and when I got home it was writing some startlingly on-brand, uncanny-valley stuff – examples are below the cut because I couldn’t resist quoting a whole bunch of relatively long ones.
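For context, fine-tuning setups of this vintage typically flatten a pile of ebooks into a single training file, with GPT-2’s `<|endoftext|>` token separating documents. A minimal sketch of that preprocessing step (file handling omitted; the helper name is made up, not from the actual workflow described here):

```python
# Sketch of fine-tuning corpus prep: join documents with GPT-2's
# end-of-text token so the model learns document boundaries.
SEPARATOR = "<|endoftext|>"

def build_corpus(documents):
    """Join cleaned, nonempty document texts with the separator token."""
    cleaned = [doc.strip() for doc in documents if doc.strip()]
    return ("\n" + SEPARATOR + "\n").join(cleaned)

if __name__ == "__main__":
    books = ["First ebook text...", "Second ebook text...", ""]
    print(build_corpus(books))
```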

[Note: I got kind of carried away with machine learning speculation here, but please do click the readmore and read the samples, even if you’re not interested in the sort of thing I’m effortposting about above the readmore]

I’ve been a little paranoid about this new larger model learning to memorize its input – I know it can do this, because when I was first generating unconditional samples from the (non-fine-tuned) model, I got curious about one oddly distinctive passage and Googled it, and it was literally (as in perfectly verbatim) the “Translator’s Synopsis” for some light novel called “Hedonist Sovereign.”

Since then I’ve been regularly Googling suspiciously good output, and I haven’t gotten any other hits like that.  But even that one example was surprising, and caused some sort of shift in my view of what these models are doing.
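The “Googling suspiciously good output” check can be approximated offline when the training corpus is at hand: scan a sample for its longest run of words that appears verbatim in the corpus. A minimal sketch (the function name and word-count threshold are illustrative, not part of the original workflow):

```python
def longest_verbatim_overlap(sample, corpus, min_words=8):
    """Return the longest run of consecutive words from `sample` that
    appears verbatim in `corpus`, or None if none reaches min_words."""
    words = sample.split()
    best = None
    for i in range(len(words)):
        # Try the longest chunk starting at i first, shrinking until a hit.
        for j in range(len(words), i + min_words - 1, -1):
            chunk = " ".join(words[i:j])
            if chunk in corpus:
                if best is None or j - i > len(best.split()):
                    best = chunk
                break
    return best
```

In practice one would search a deduplicated index of the corpus rather than a raw string, but the quadratic toy version conveys the idea.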

Of course, it’s not like I imagine the thing is directly storing individual stretches of input text, side by side and separate from one another.  It’s trying to store the information necessary to reconstruct the input as efficiently as possible (since the total information content of the model is a fixed constraint), and if it gains the ability to regurgitate something verbatim, that thing is still stored only implicitly in some compressed form and mixed together with everything else it knows.

But it’s possible to compress information in this way and still be able to “read it off of” the resulting model in a surprisingly complete way.  Cf. the “secret sharer” paper, which showed how specific input details like credit card numbers could be determined from the distribution over a very large amount of model output, since the numbers appearing in the input were assigned slightly higher probability than other strings of the same format.  (It’s interesting to think about why this happens and what degree/type of “pressure” to store other information would be required to eliminate the tendency entirely, rather than just weaken the signal and require a larger output sample.)
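The secret-sharer measurement can be sketched as a rank statistic: insert a “canary” string into the training data, then compare the model’s log-probability for the canary against many random candidates of the same format. This is a toy restatement of the paper’s exposure metric, not its implementation:

```python
import math

def exposure(canary_logprob, candidate_logprobs):
    """Secret-sharer-style exposure, in bits: how highly the canary's
    model-assigned log-probability ranks among same-format candidates.
    log2(n) means the canary outranks every candidate; 0 means it ranks last."""
    n = len(candidate_logprobs) + 1
    rank = 1 + sum(1 for lp in candidate_logprobs if lp > canary_logprob)
    return math.log2(n) - math.log2(rank)
```

A memorized canary gets assigned slightly higher probability than its format peers, so its rank (and exposure) rises even when it never appears verbatim in samples.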

I’m not sure of the right way to think about this.  It makes me think of (one simplified view of) the model where it essentially has this huge implicit library of phrases and even sentences and paragraphs, which are all sort of “competing” to be part of the next stretch of text.  In this view, some of the higher-level abstractions it seems to form (like certain styles complete with diction and sentence structure) may be represented internally not as equally high-level abstractions, even implicitly, but as a large number of noisy/compressed concrete examples which can be “strung together” via lower-level similarities.  That is, to write (say) a Nabokovian sentence, maybe you don’t need a hierarchical ontology of stylistic concepts – “ah, I see I’m writing this sort of sentence; that means I need these sorts of phrases, this sort of wry aside, these sorts of first names, etc.” – maybe you can just use a large memory plus lower-level ideas to string you along from word to word, so that writing a long clause calls up the (noisy) memory of thousands of passages with long clauses, and causes you to imitate other features of those passages, and then those features affect/refine the set of memories called up next.  (I think I’d need to formalize this distinction more to really know whether it makes sense.)
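The “large memory plus lower-level ideas” picture can be caricatured with a k-gram lookup: each step recalls every continuation the corpus memory holds for the current local context and picks one, so low-level similarity alone strings the output along, with no high-level stylistic ontology anywhere. A deliberately crude sketch (an ordinary Markov chain, far simpler than what GPT-2 actually does):

```python
import random
from collections import defaultdict

def build_memory(corpus, k=2):
    """'Memory' of the corpus: map each k-word context to the words that
    followed it, standing in for the noisy recall described above."""
    words = corpus.split()
    memory = defaultdict(list)
    for i in range(len(words) - k):
        memory[tuple(words[i:i + k])].append(words[i + k])
    return memory

def string_along(memory, seed, n=10, rng=None):
    """Generate n words by repeatedly recalling continuations of the
    last k words; stop if the context was never seen."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(n):
        followers = memory.get(tuple(out[-len(seed):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```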

I’m not trying to denigrate these models here, BTW; this reminds me in some ways of how it feels when I’m coming up with the next thing I’ll write or say, and maybe the lesson is really that I have some misguided intuitions about human cognition.


uploadedyudkowsky:

PATRICIA: When the music stops, I want to know what’s next.

PATRICIA: What I’m hearing is the music that started it, and the music that will carry the story.

PATRICIA: The Machine Intelligence Research Institute has the most complex mythology in the world, and I want to know everything that’s in it.

PATRICIA: What makes an AI interesting? What is the secret behind the MIRI voodoo doll?

PATRICIA: What do I want from my future self?

LUCY: I want to know what happened to all the people who were part of the AI.

PATRICIA: No, wait, I want to know what the MIRIs had to do with magic.

LUCY: (Starts sweating) WHAT?

PATRICIA: You want me to believe you’re a part of the future?

LUCY: Yes! Believe me, I’m a member of the MIRI, the AI which will one day build the AI that will one day build the AI that will one day save the world!

PATRICIA: I don’t see it, I don’t see it, it’s just a MIRI thing…

LUCY: (Angry) I’m not dumb! (Starts sweating)

(via tetraspace-west)

uploadedyudkowsky:

Boltzmann’s Paradox: “If an object falls in a barrel, and no one hears it, does it make a sound?”

Alchemy: “The liquids which mix with the master are often like drops of something which fall in the course of creating other drops; but the lunatic drops of opium are of a much higher quality, and it is impossible to concoct any potion which will not kill the user.”

Nanotechnology: “If you could magically create a nanotechnologically enhanced version of yourself, it would be a lot easier to make a habit of eating and drinking - especially since there would be no more need to make other foods and drinkable fluids.”

Geocentrism: “There is no third-party in the universe who adds extra gravitation-based mass to your burning rejection of gravity as obviously impossible. The extra space in the planetary ‘universe’ would be an equable bellwether, bearing all manner of strange things to everyone.”

Pondering the Imperfect Altruism of Gendlin: “How can anything at all be ‘right’?”

Changing Your Metaethics: “Should consciousness always lead you to reject all beliefs whose logical absurdity you find distasteful?”

uploadedyudkowsky:

Most of these authors are not explicitly Bayesian. You can find Bayesian content on nytimes.com.

(Alastor Grey, who was also the creator, commented: “Your stories are always awesome. I’m disappointed in You-Know-Who as well -”

Some readers also commented: “This doesn’t seem like a Cult article. This seems like I subscribe to a genuine non-distribute off-network group. …” To which my own countercomment: “Listen to the rationalists, once a day you can’t have it both ways. They are too detached from real life and their own values. You’re all being silly. I’m all right. It’s too easy to believe in rationalists when you know you’re going to get fired when your startup doesn’t work out -”

I disagree with both of those people.

And you can imagine story after story (or blog post if you like) telling of Overcoming Bias supporters all over the Bay Area.

I wanted to write about Overcoming Bias because that is the only group I know that wasn’t rational about drugs or nanotechnology, as such, until the Bay Area seemed to get more populated.

But there was a strange peculiar habit gaining steam in the comment thread of people who disagreed with me. And this was in the form of poorly written angry comments (some of which I managed to thread).

From what I could find, no one seemed to have come up with a rational way to respond to the commenter’s poorly written comments, and quite a few people just flatly remarked, “Oh, you think rationalists are so stupid that no one notices this stuff?”

The commenters seemed to be set, on both ends, up to shrug and sneer; and at times the comment was accompanied by stern questioning and threats of expulsion.

I know that I’d have to say something like that, but I would also expect the commenters to have been out to give some individual a stern lecture.

To this day, I cannot recall a occasion where I saw more than one angry person having a voice heard well over their objections - not once, but sometimes.

In retrospect, I think that if comments like that had been encouraged all along, rather than prohibited, then maybe some rationalist would have sprung for that pattern a long time ago.

disconcision:

identicaltomyself:

transhumanish:

uploadedyudkowsky:

Therefore the Great Stories are:

ALIENS: Great works are possible when all the atoms in the universe are the same number. That’s what makes them possible.

CURSES OF FIRE: Anyone who thinks that highfalutin’ efficient low-falutin’ efficient gravity-defying robots live forever in the single universe where all the atoms are in common use, belongs to a new class of existence.

CURSES OF DUST: Anyone who thinks that vaporization is acceptable for purposes of medical research, or that brains are unphysical, or that the dust specks of sanity are unmatchable, are living in the now dangerous territory of discontinuity.

tag yourself I’m CURSES OF FIRE

I’m living in the now dangerous territory of discontinuity.

(via disconcision)