
Inadequacy and Modesty →

nostalgebraist:

Yudkowsky’s new sequence/book is on, roughly, the topic of how to know whether there are low-hanging fruit in a given area.

This is very promising: one of my criticisms of the rationalist movement has been that it used to talk a big game about low-hanging fruit (this being my favorite example), and after e.g. the failure of MetaMed it looked like the movement just quietly dropped the subject rather than either (1) soldiering on, or (2) openly declaring that it was correcting course and no longer believed the low-hanging fruit stuff.

This new book looks like it will be in the “soldiering on” category, which could go well or poorly, but at least the topic is not being quietly swept under the rug.

Update: I finally got around to reading these posts.  I wasn’t impressed with them.

The basic gist is something like:

“There are well-established game-theoretic reasons why social systems (governments, academia, society as a whole, etc.) may not find, or not implement, good ideas even when they are easy to find/implement and the expected benefits are great.  Therefore, it is sometimes warranted to believe you’ve come up with a good, workable idea which ‘experts’ or ‘society’ have not found/implemented yet.  You should think about the game-theoretic reasons why this might or might not be possible, on a case-by-case basis; generalized maxims about ‘how much you should trust the experts’ and the like are counterproductive.”

I agree with this, although it also seems fairly obvious to me.  It’s possible that Yudkowsky is really pinpointing a trend (toward an extreme “modest epistemology”) that sounds obviously wrong once it’s pinned down, but is nonetheless pervasive; if so, I guess it’s good to argue against it, although I haven’t encountered it myself.

But the biggest reason I was not impressed is that Yudkowsky mostly ignores an issue which strikes me as crucial.  He makes a case that, given some hypothetically good idea, there are reasons why experts/society might not find and implement it.  But as individuals, what we see are not ideas known to be good.

What we see are ideas that look good, according to the models and arguments we have right now.  There is some cost (in time, money, etc.) associated with testing each of these ideas.  Even if there are many untried good ideas, it might still be the case that these are a vanishingly small fraction of the ideas that look good before they are tested.  In that case, the expected value of “being an experimenter” (i.e. testing lots of good-looking ideas) could easily be negative, even though there are many truly good, untested ideas.
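To make the worry concrete, here is a toy expected-value calculation (every number below is invented for illustration, not drawn from anything Yudkowsky says): even when the payoff from a truly good idea dwarfs the cost of testing, a low enough hit rate among good-looking ideas makes the experimenter’s policy a losing one.

```python
# Toy model: the EV of testing one good-looking idea.
# All numbers are made up for illustration.

def experimenter_ev(p_actually_good, benefit, cost):
    """Expected value of paying `cost` to test a good-looking idea
    that turns out to be truly good with probability p_actually_good."""
    return p_actually_good * benefit - cost

# A truly good idea pays off 50x its testing cost, but only 1% of
# good-looking ideas are actually good -> negative EV:
print(experimenter_ev(0.01, 50.0, 1.0))   # -0.5

# The same policy with a 5% hit rate flips to positive EV:
print(experimenter_ev(0.05, 50.0, 1.0))   # 1.5
```

The point of the sketch is just that nothing in the “adequacy” analysis pins down the hit rate, and the sign of the whole enterprise turns on it.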

To me, this seems like the big determining factor for whether individuals can expect to regularly find and exploit low-hanging fruit.

The closest Yudkowsky comes to addressing this topic is in sections 4-5 of the post “Living in an Inadequate World.”  There, he’s talking about the idea that even if many things are suboptimal, you should still expect a low base rate of exploitable suboptimalities in any arbitrarily/randomly chosen area.  He analogizes this to finding exploits in computer code:

Computer security professionals don’t attack systems by picking one particular function and saying, “Now I shall find a way to exploit these exact 20 lines of code!” Most lines of code in a system don’t provide exploits no matter how hard you look at them. In a large enough system, there are rare lines of code that are exceptions to this general rule, and sometimes you can be the first to find them. But if we think about a random section of code, the base rate of exploitability is extremely low—except in really, really bad code that nobody looked at from a security standpoint in the first place.

Thinking that you’ve searched a large system and found one new exploit is one thing. Thinking that you can exploit arbitrary lines of code is quite another.

This isn’t really the same issue I’m talking about – in terms of this analogy, my question is “when you think you have found an exploit, but you can’t costlessly test it, how confident should you be that there is really an exploit?”

But he goes on to say something that seems relevant to my concern, namely that most of the time you think you have found an exploit, you won’t be able to usefully act on it:

Similarly, you do not generate a good startup idea by taking some random activity, and then talking yourself into believing you can do it better than existing companies. Even where the current way of doing things seems bad, and even when you really do know a better way, 99 times out of 100 you will not be able to make money by knowing better. If somebody else makes money on a solution to that particular problem, they’ll do it using rare resources or skills that you don’t have—including the skill of being super-charismatic and getting tons of venture capital to do it.

To believe you have a good startup idea is to say, “Unlike the typical 99 cases, in this particular anomalous and unusual case, I think I can make a profit by knowing a better way.”

The anomaly doesn’t have to be some super-unusual skill possessed by you alone in all the world. That would be a question that always returned “No,” a blind set of goggles. Having an unusually good idea might work well enough to be worth trying, if you think you can standardly solve the other standard startup problems. I’m merely emphasizing that to find a rare startup idea that is exploitable in dollars, you will have to scan and keep scanning, not pursue the first “X is broken and maybe I can fix it!” thought that pops into your head.

To win, choose winnable battles; await the rare anomalous case of, “Oh wait, that could work.”

The problem with this is that many people already include “pick your battles” as part of their procedure for determining whether an idea seems good.  People are more confident in their new ideas in areas where they have comparative advantages, and in areas where existing work is especially bad, and in areas where they know they can handle the implementation details (“the other standard startup problems,” in EY’s example).

Let’s grant that all of that is already part of the calculus that results in people singling out certain ideas as “looking good” – which seems clearly true, although doubtlessly many people could do better in this respect.  We still have no idea what fraction of good-looking ideas are actually good.

Or rather, I have some ideas on the topic, and I’m sure Yudkowsky does too, but he does not provide any arguments to sway anyone who is pessimistic on this issue.  Since optimism vs. pessimism on this issue strikes me as the one big question about low-hanging fruit, this leaves me feeling that the topic of low-hanging fruit has not really been addressed.


Yudkowsky mentions some examples of his own attempts to act upon good-seeming ideas.  To his credit, he mentions a failure (his ketogenic meal replacement drink recipe) as well as a success (stringing up 130 light bulbs around the house to treat his wife’s Seasonal Affective Disorder).  Neither of these were costless experiments.  He specifically mentions the monetary cost of testing the light bulb hypothesis:

The systematic competence of human civilization with respect to treating mood disorders wasn’t so apparent to me that I considered it a better use of resources to quietly drop the issue than to just lay down the ~$600 needed to test my suspicion.

His wife has very bad SAD, and the only other treatment that worked for her cost a lot more than this.  Given that the hypothesis worked, it was clearly a great investment.  But not all hypotheses work.  So before I do the test, how am I to know whether it’s worth $600?  What if the cost is greater than that, or the expected benefit less?  What does the right decision-making process look like, quantitatively?

Yudkowsky’s answer is that you can tell when good ideas in an area are likely to have been overlooked by analyzing the “adequacy” of the social structures that generate, test, and implement ideas.  But this is only one part of the puzzle.  At best, it tells us P(society hasn’t done it yet | it’s good).  But what we need is P(it’s good | society hasn’t done it yet).  And to get from one to the other, we need the prior probability of “it’s good,” as a function of the domain, my own abilities, and so forth.  How can we know this?  What if there are domains where society is inadequate yet good ideas are truly rare, and domains where society is fairly adequate but good ideas are so plentiful as to dominate the calculation?
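Here is that Bayes inversion made explicit, with probabilities I have invented purely for illustration.  The “adequacy” analysis supplies the likelihood term P(undone | good); the posterior also depends on the prior rate of good ideas, and the prior can easily dominate:

```python
# Bayes' rule sketch: P(good | undone) from P(undone | good) and a prior.
# All probabilities invented for illustration.  I assume a bad idea is
# essentially never implemented-as-good, so P(undone | bad) ~ 1.

def p_good_given_undone(p_good, p_undone_given_good, p_undone_given_bad=1.0):
    """P(idea is good | society hasn't done it yet), via Bayes' rule."""
    p_undone = (p_undone_given_good * p_good
                + p_undone_given_bad * (1.0 - p_good))
    return p_undone_given_good * p_good / p_undone

# An "inadequate" domain (good ideas usually go unimplemented)
# where truly good ideas are rare:
print(p_good_given_undone(p_good=0.01, p_undone_given_good=0.9))   # ~0.009

# A fairly "adequate" domain (good ideas usually do get done)
# where good ideas are plentiful:
print(p_good_given_undone(p_good=0.30, p_undone_given_good=0.2))   # ~0.079
```

With these made-up numbers, the adequate-but-fertile domain is the better hunting ground, despite its adequacy – exactly the possibility the adequacy analysis alone can’t rule out.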


In an earlier conversation about low-hanging fruit, @argumate brought up the possibility that low-hanging fruit are basically impossible to find beforehand, but that society finds them by funding many different attempts and collecting on the rare successes.  That is, every individual attempt to pluck fruit is EV-negative given risk aversion, but a portfolio of such attempts (such as a venture capitalist’s portfolio) can be net-positive given risk aversion, because with many attempts the probability of one big success that pays for the rest (a “unicorn”) goes up.  It seems to me like this is plausible.
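A toy version of that portfolio argument, under log utility and with numbers I have invented: each attempt has positive raw EV, yet a risk-averse individual turns down the single bet while happily accepting the same total stake spread over 100 independent attempts.

```python
import math

# Toy portfolio model (all numbers invented).  Each attempt costs 1,
# succeeds with probability 0.01, and pays 150 on success, so the raw
# EV per unit staked is 0.01*150 - 1 = +0.5.

def binom_pmf(n, k, p):
    """P(k successes in n independent attempts with success prob p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def expected_log_utility(wealth, n_bets, p=0.01, payout=150.0, stake=1.0):
    """E[log(final wealth)] when `stake` is split evenly over n_bets
    independent attempts; k successes pay k * payout / n_bets."""
    return sum(
        binom_pmf(n_bets, k, p)
        * math.log(wealth - stake + k * payout / n_bets)
        for k in range(n_bets + 1)
    )

base = math.log(10.0)  # utility of just keeping the stake
print(expected_log_utility(10.0, 1) - base)    # ~ -0.077: single bet declined
print(expected_log_utility(10.0, 100) - base)  # positive: portfolio accepted
```

So every individual who stakes everything on one attempt is worse off in expected-utility terms, while the diversified funder of a hundred such attempts comes out ahead – which is @argumate’s point in miniature.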

Let me end on a positive note, though.  Even if the previous paragraph is accurate, it is a good thing for society if more individuals engage in experimentation (although it is a net negative for each of those individuals).  Because of this, the individual’s choice to experiment can still be justified on other terms – as a sort of altruistic expenditure, say, or as a way of kindling hope in the face of personal maladies like SAD (in which case it is like a more prosocial version of gambling).

Certainly there is something emotionally and aesthetically appealing about a resurgence of citizen science – about ordinary people looking at the broken, p-hacked, perversely-incentivized edifice of Big Science and saying “empiricism is important, dammit, and if The Experts won’t do it, we will.”  (There is precedent for this, and not just as a rich man’s game – there is a great chapter in The Intellectual Life of the British Working Classes about widespread citizen science efforts in the 19th C working class.)  I am pessimistic about whether my experiments, or yours, will bear fruit often enough to make the individual cost-benefit analysis work out, but that does not mean they should not be done.  Indeed, perhaps they should.

jumpingjacktrash:

jazzhell:

tripleclown:

myfrogcroaked:

Here is the best animal face you will see today!

This is a Saiga antelope (Saiga tatarica), a species listed in CITES Appendix II and evaluated as critically endangered by the IUCN Red List. It lives in Asia (Kazakhstan, Mongolia, Russia, Turkmenistan, Uzbekistan).

Source: CITES

thats god

We have to save her

this is a goofy lookin beastie and i love it

(via rangi42)

glimmersight replied to your post “Without…

~in any prior era, no one thought that skin color was a qualification for societal reward~ you really think so?

i know right

It’s downright surreal that the same guy could write that and also write stuff like, say, this, which manages to be cautious and relatively even-handed on a topic as currently politicized as the history of Islam.

It seems clear that he’s selectively turning his brain on and off (so to speak), but rarely is the difference between ON and OFF states this stark to an outside observer.  And I’m still at a loss about why he chooses OFF in some cases and ON in others.

nostalgebraist:

not-even-even
replied to your post
“Without realizing what I was doing, I just binge-read a bunch of posts…”
that’s the kind of thing that can only be hacked by “this stranger is an idiot and I have the full right to not give a shit about their delusions.”

I do give a shit, though, not because I owe anything to a stranger (I don’t) but because it feels like it exposes how little I truly know.

The writer I was talking about almost certainly knows more information relevant to the “civilization question” than I will ever know, period*.  This is disquieting.  I don’t care about the guy but I do care about my own (always precarious!) sense that I know more than nothing about the world I live in.

(*I want to say “just trust me on this,” but: this follows from what I know of my own reading speed and memory, plus some very limited principle-of-charity stuff to the effect that this guy has actually read most of the books he says he has read on Goodreads [faking this would be quite an achievement, as the dates of reading-completion are distributed in a realistic way, he regularly writes very long reviews that cite many specific details, and these details check out in the cases where I have read the book in question])

FWIW, the spell I was under last night seems to have been broken.  I still don’t have a knock-down argument against the guy, but I feel comfortably certain that this is because his (implicit) position isn’t even coherent enough to enter the figurative ring and get knocked down.

I mean, I was pretty sure of that last night too, but my emotional response was lagging behind my actual opinion.

For anyone who is curious and/or masochistic: the learned-yet-still-horribly-confused reactionary I was reading last night has a blog here and a Goodreads account here (n.b. most of the blog consists of cross-posted reviews from the GR account, and the reviews can be more accessibly browsed on the latter).

If nothing else, this stuff is worth reading as a testament to the fact that you can acquire vast quantities of legitimately valuable historical knowledge (cf. the many long reviews of history books) and still, through the magic of compartmentalization and partisanship, end up writing shit like this:

Second, extreme ignorance and irrationality characterize the vast majority of political discourse today. In any prior American era, an uneducated person who offered neither reasoning nor evidence would not have dared to offer his opinion in the public square, for he would have been laughed at and humiliated by all other participants. “Delete your account,” or its 19th Century equivalent, was not considered a suitable riposte in the days when thousands came to see, follow, and discuss the hours-long Lincoln-Douglas debates. Nobody thought the opinions of ignorant and unintelligent entertainers were of any importance or consequence. Nobody would have thought, much less put forward, the idea that traits such as skin color or activity in the bedroom were qualifications for societal acclaim and reward, while accomplishments by those with the wrong skin color or wrong social views were the mere happenstance of their supposed “privilege.” If you did voice such ideas, you would have been punched in the face to general applause, or sent for psychiatric evaluation (by a doctor who recognized gender dysphoria not as a sign of virtue, but rather as a severe mental illness). Failure to follow basic logic was a one-way ticket to ignominy and obscurity in any national political actor—or it would have been, had any such mental defectives aspired to national office. Today, all these gross defects are the norm, further reducing any common ground.

Anonymous asked: Just wanted to let you know I spent way too much time replying to your comment on Marginal Revolution concerning Logical Induction. I don't expect a reply, but hope at least somebody will read it. Should spend less time defending other people's papers. Best, Lee Wang.

Funnily enough, I saw this message just after returning to tumblr from MR, because I wanted to repost here the reply I wrote to you!  So, your time was definitely not wasted from my perspective, since it led me to think about an aspect of the topic I had not thought about before.

My reply:

Thanks for the reply, Lee. I agree with you that assigning probabilities to theorems *at all* is a nontrivial problem, and insofar as the LI paper moves forward our understanding of this problem, that is good.

However, I am not convinced that the paper ever gives us a satisfying “assignment of probabilities to theorems.” What it gives us is two things — the limiting probabilities P_{\infty}, and the finite-time “probabilities” P_n. The former has some pleasant properties like assigning 0 < P < 1 to sentences independent of the axioms, but is essentially irrelevant to the original motivation of “how do we reason about things we can’t prove yet?”, since it is obtained “at the end” after every possible deduction has been made. (So it can make use of every possible proof, and the only thing it does above and beyond deduction is to put numbers on independent sentences.)

On the other hand, the finite-time P_n do not even have to form a probability measure at any time, and we cannot use them to do the sorts of things we would like to do with probabilities, like decision theory. For instance, the authors define expectation values at time n but only prove that they behave well in the limit, and indeed the definition at time n involves an arbitrary choice which the authors justify because it washes out in the limit (see p. 40). Of course, any way that P_n fails to be a probability measure will disappear in the limit, because the limit is P_{\infty}; but if we let ourselves wait for P_{\infty} we forfeit any ability to reason probabilistically about theorems before they are proven. If we want to do that, we have to use the finite-time P_n, but in fact we cannot reason probabilistically with these at all.

Without realizing what I was doing, I just binge-read a bunch of posts (mostly book reviews) by one of those people who thinks that everything good about “civilization” is the result of Christianity.  (So: insofar as people have been able to be civilized without Christianity in some times and places, it is either because they are culturally Christian or because they aren’t really civilized upon sufficiently close inspection.)

On the face of it this is easy to refute, but when it is espoused by a person who seems to be very historically literate, it sends my mind into a maddening maze.  How can such a person be dissuaded?  You can exhibit various examples of good stuff without Christianity and bad stuff with it, but of course the person is aware of these examples, and has some theory about how True Civilization consists in precisely those nice social phenomena that arose first in Christian societies.  It’s hard to deny that some very important and good social phenomena did in fact arise first in Christian societies, and you can’t re-play the tape with perturbed initial conditions to show this was a mere accident.  So how might you possibly convince such a person that they’re wrong?

What do garlic and onions have in common with gunpowder? A lot. They’re incendiary. They can do harm and they delight. Sulfur is central to their powers.