
[Note: grandiose, partly aesthetic, not entirely literal]

Reading some of the comment threads on/about Sarah’s post “EA has a lying problem” (which is a very good post IMO).

The other day, in a long rambling Facebook comment, I said I liked the way EA (or at least GiveWell) only assumes that its audience comes in with certain widely shared moral values, and not with strongly fixed preferences toward specific causes or people.  That their audience is morally committed, but uncertain about facts (and aware of it) – they want to help but they don’t claim to already know how.  There is a humane quality to this, an openness to new testimony from previously unfamiliar parts of humanity.  The downtrodden feel no less pain if they do not happen to feature in a prospective donor’s own provincial mental map (yet).

But then, the point of asking these questions, with this admirable openness of mind, is to eventually get answers, and make the mind less open.  If you ask how best to help, and you get a satisfying answer, you now “know” how best (!) to help.  If you ask to be told the most pressing causes, you’ll get some very pressing answers, and you may find them more pressing than your curiosity about the unknown.

It’s the explore/exploit dilemma, and eventually you have to start exploiting.

But in the classic explore/exploit dilemma, there’s just one agent, who has to balance the two.  For them, more exploring means less exploiting.  But the world contains more than one person, so why not divide the labor?  The “researchers” (in this case charity evaluators, but the concept applies much more widely) maintain their open minds and stay on the lookout for new possibilities.  The “advocates” learn continuously from the researchers, but commit themselves to specific issues in ways that would go against the whole purpose of the endeavour if they were researchers.
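For concreteness, the classic single-agent version of the dilemma is the multi-armed bandit. The toy epsilon-greedy strategy below is purely my own illustration (the arm means and parameters are invented), but it makes the tradeoff in the text literal: every pull spent exploring is a pull not spent exploiting.

```python
import random

def epsilon_greedy(true_means, epsilon, steps=10_000, seed=0):
    """One agent repeatedly picks an 'arm'; epsilon is the fraction of
    pulls spent exploring at random rather than exploiting the arm
    currently estimated to be best."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))        # explore
        else:
            arm = max(range(len(true_means)),
                      key=lambda i: estimates[i])       # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # incremental running average of observed rewards for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps
```

Average reward suffers at both extremes: epsilon = 1 never commits to what it has learned, and epsilon = 0 never re-examines its first impressions. The “divide the labor” move in the text amounts to letting one party run at high epsilon permanently while everyone else free-rides on its estimates.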

It seems to me like there is much havoc in EA because people want “EA” to be both researcher and advocate, and refuse to delegate labor.  The attitude implicitly says that “EA” should be a concerted whole that acts like a single agent in an explore-exploit dilemma, where the “effectiveness” consists in the tendency to do well on the dilemma as a whole.  But then, to show effectiveness, you need to show you have an especially good strategy for the version of the explore-exploit dilemma in play, which is actually pretty tough to do.  Discouraging harsh internal criticism is a move that suppresses exploration in the name of exploitation, and precisely when and where to perform such moves is the whole of the dilemma.  If you’re making these calls blindly, or without awareness of the problem you’re trying to solve, you’re probably not using some especially clever strategy to get better (“effective”) results than others who have faced the problem.

On the other hand, if your strategy is just “individuals can keep exploiting like they always do, but in the meantime we’ll keep exploring and anyone looking to exploit can always check out our latest evaluations” – well, that does seem like a distinctive thing that someone ought to be doing.  That was how I originally understood the claim to “effectiveness” – the ability to cheat the dilemma by peeking at the notes of someone who’s been happily exploring without caring about their score.  That’s a real thing, a good thing.

nianeyna:

nostalgebraist:

hot new discourse: “effective altruism is bad because giving things to poor people makes them dependent on you, giving you power over the oppressed”

*whispers* the power (over the oppressed) was inside you all along……………

It’s actually a known problem that foreign aid to impoverished areas can have detrimental effects on their economies: if a good is being produced locally, and you provide that good for free, you run a significant risk of putting the local producer out of business. It’s a form of neocolonialism and it’s bad, although I don’t doubt that the intentions are good in most cases.

This obviously doesn’t mean that charitable giving is bad. It’s not an inevitable effect. However, it does point up that altruism is not simply a function of money-in-utility-out. This is a valid criticism and I don’t think you should dismiss it so lightly.

Yes, this kind of effect is a serious problem.

However, I don’t think it’s a problem for the kinds of charities favored in EA.  The health interventions done by the AMF, SCI and Deworm the World are the sorts of things that ideally would be done by a local government program and aren’t easy for the market to supply.

In the AMF case, there are more-than-additive benefits to having wide bednet coverage in a population (vector control), and the WHO recommends that bednets be distributed either for free or with sufficient subsidies that cost is not a barrier to their availability.  Tanzania did achieve universal bednet coverage in 2011 after a campaign of free mass distribution, and has been trying to maintain it through a combination of free distribution to schoolchildren and vouchers for pregnant women.  This is the kind of public health target that is just not going to happen through the ordinary market mechanism without intervention.

SCI and Deworm the World do mass drug distribution – treating the disease is much cheaper than testing for its presence, and the treatment drugs don’t have serious side effects, so it’s cheaper to just give everyone the treatment without testing.  The drugs are given to schoolchildren.  This is the kind of thing naturally done by a government health ministry rather than through individual purchase decisions, and indeed that is how it is done in practice; SCI and DtW assist with these local government efforts.

GiveDirectly just gives people money.  In principle this could cause various economic distortions, and these are being looked into, but it’s not the kind of thing that can put a local producer out of business.

That’s all four of GiveWell’s top charities.  “EA” isn’t identical to “GiveWell’s top charities,” but “give to GiveWell’s top charities” is sort of the generic off-the-shelf “how to be an EA” instruction.

The complaint I was vagueblogging about was a brief post without any specifics about particular charities or about particular distorting effects (like the one you mentioned).

(via nianeyna)

masturbation: post-game analysis

Under the cut I talk about what I really think of the silly argument I made in this post, if for some reason you’re interested.

Keep reading

Is masturbation an effective intervention?

Note: This post is satire.  (However, I have not made any deliberate mistakes in the reasoning or computations.)

This post is an attempt to evaluate the effectiveness of masturbation, as performed by individual effective altruists.  “Effectiveness” here is meant in the usual sense of utility (as measured in, e.g., QALYs) gained per dollar invested, as used for instance by GiveWell to evaluate cost-effectiveness of charities.

Of course, as masturbation typically has no effects on others, it cannot be considered an act of “altruism,” effective or not.  However, although utilitarianism assigns no greater moral weight to oneself than to others, it also assigns no lower weight to oneself than to others.  Hence, insofar as effective altruists are interested in increasing global utility by any means, self-pleasuring acts have no special status and can be directly weighed against acts that benefit others.


To evaluate the effectiveness of masturbation, we must determine both its cost and the utility it adds.  First, consider the cost.  As there is no “price of entry” for masturbation, we can only evaluate the cost as an opportunity cost – say, the wages lost if an episode of masturbation is substituted for an episode of paid labor with the same duration.  Typically individuals do not directly face this tradeoff, but in some cases they may, for instance if one has the option of taking on additional paid hours at the cost of time that would otherwise be allocated to masturbation (or vice versa).

This post involves enough uncertainty and enough distinct data sources that only order-of-magnitude estimates will be attempted.  We will assume a salary of $80,000/year, and while this may be far too low or too high for any given individual, it will not be off by many orders of magnitude.

Thus the cost of a masturbation episode, for the purposes of this analysis, is simply its duration times ($80,000/year).


What utility is gained in an episode of masturbation?  As a first approach to this question, consider the overall utility difference a non-asexual person would incur if deprived of their sex drive.

An approximation to this question was investigated in Wilke et al. 2010, in which men with prostate cancer made tradeoffs involving a treatment which could extend their life at the cost of “profound lack of sexual desire and erectile function.”  The men (mean age 72) were given a time trade-off question, as is standard in determination of QALY weights.  The mean time trade-off utility was 0.78, meaning that a year without sex drive and function was valued at 0.78 years with.

In other words, time spent with sex drive and function is 1/0.78 ≈ 1.28 times as valuable as time without.  (Obviously, these results are severely limited by sex, gender and age; we will treat them as universal here as a first approximation.)


The utility gained from sex drive and function is not uniformly distributed over time; it is primarily concentrated in the sex act itself.  There may be other utility gains from mood and health effects of sexual desire and activity and from the sexual drive as a contributor to social well-adjustedness, as well as utility losses due to the difficulties involved in seeking sexual activity.  However, it seems intuitive that the overall effect of sexuality on human preferences is dominated by the desirability of the sex act itself rather than by these peripheral effects.  So we will assume that if a given unit of time “with sex” confers more utility than the same unit “without sex,” this is due solely to the subsets of this unit in which sexual activity is occurring.

For instance, if a year “with sex” is 1.28 times as good as one “without sex,” sexual activity itself must be much more than 1.28 times as enjoyable as the average activity, since one typically spends only some fraction of a given year masturbating or having sex.

How large is this fraction?  Reece et al. 2010 reviews data for men found in the 2009 National Survey of Sexual Health and Behavior.  For consistency, we must consider data for men of ages comparable to those studied in Wilke et al. 2010, which in this case we will take to mean the “70+” age category.  Some of the data is shown below (we have chosen to exclude “anal intercourse,” which is rare enough in the 70+ age category to be negligible):

[Image: table from Reece et al. 2010 of past-year frequencies of masturbation and vaginal intercourse for men in the 70+ age category.]

Averaging over these data, the average man of age 70+ masturbates about 24 times per year and has sexual intercourse about 20.3 times per year.  (Note that these rates may be different for men with prostate cancer.  We will ignore this difference here.)

Survey data indicates that penis-in-vagina intercourse (i.e. “vaginal intercourse” in Reece et al. 2010) lasts around 6 minutes on average.  We will assume that masturbation episodes are also 6 minutes in duration.  This implies that the average man of age 70+ spends 144 minutes per year masturbating and 122 minutes per year having intercourse.

It is commonly observed that masturbation and sexual intercourse are not equally desirable.  Thus we introduce the parameter μ, defined so that μ minutes of masturbation are interchangeable with 1 minute of intercourse.  The total time spent in sexual activity, in units of “equivalent minutes of masturbation,” is thus 144 + μ*122.

A year “with sex” is thus made 1.28 times as enjoyable as a year “without sex” solely by the contribution of 144 + μ*122 minutes, which are some factor β times as enjoyable as their equivalents in the year “without sex.”  In the unrealistic limit μ = 1, this gives β of about 558.  With μ = 5, β lowers to 197.5, while with μ = 10, β lowers further to 109.7.
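The β figures above can be checked with a few lines of arithmetic.  The sketch below is my reconstruction of the computation, using only the constants already given in the text; small rounding differences from the quoted values are expected.

```python
MINUTES_PER_YEAR = 365 * 24 * 60      # 525,600
UTILITY_RATIO = 1 / 0.78              # a year "with sex" vs. "without sex"
MAST_MINUTES = 144                    # 24 episodes/year * 6 minutes
SEX_MINUTES = 122                     # ~20.3 episodes/year * 6 minutes

def beta(mu):
    """Enjoyment multiplier of sexual-activity minutes, where mu minutes
    of masturbation are interchangeable with 1 minute of intercourse."""
    # total sexual activity in "equivalent minutes of masturbation"
    t = MAST_MINUTES + mu * SEX_MINUTES
    # attribute the whole ~28% utility surplus of the year to those t minutes
    return (UTILITY_RATIO - 1) * MINUTES_PER_YEAR / t + 1

for mu in (1, 5, 10):
    print(mu, round(beta(mu), 1))
```

Up to rounding, this reproduces the quoted β ≈ 558, 197.5, and 109.7.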


As stated earlier, we take the cost of a masturbation episode to be ($80,000/year) times the episode’s duration.  To make the computation simple, consider a hypothetical year spent masturbating.  Thus $80,000 is lost, but the year confers utility β*(one year) rather than 1*(one year).  For β = 558, for instance, this can be interpreted as 557 years of life gained.  Thus we would spend $143.6 per year of life gained.

The above estimate corresponded to the unrealistic μ = 1.  With μ = 5, we instead spend $407 per year of life gained, and with μ = 10, we spend $737 per year of life gained.  Increasing μ further will of course produce even lower estimates of efficiency, but very large values of μ are likely to conflict with the results of introspection.  (We encourage readers to estimate their own value of μ, then perform the computation themselves.)

Converting to units of “lives saved,” as described here, gives us $4308 per life saved with μ = 1, $12210 per life saved with μ = 5, and $22110 per life saved with μ = 10.  It will be useful here to consult GiveWell’s remarks on cost-effectiveness:

We consider anything under $5,000 per life saved (or equivalent, according to one’s subjective values about how to compare other sorts of impacts to lives saved) to be excellent cost-effectiveness. We consider anything over $50,000 per life saved (or equivalent) to be excessive for the cause of international aid, as it implies more than an order of magnitude higher costs than the strongest programs.

Thus masturbation is unlikely to be “excellent” by GiveWell’s standards (for international aid interventions), but probably not “excessive.”  (Assuming μ takes integer values, it would be “excessive” only if μ > 24.)
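The dollar figures in this section follow directly from β.  The sketch below is my reconstruction; in particular, the 30-year conversion from life-years to “lives saved” is inferred from the quoted numbers rather than stated explicitly here.

```python
MINUTES_PER_YEAR = 365 * 24 * 60
UTILITY_RATIO = 1 / 0.78
SALARY = 80_000                        # assumed annual salary from the text
YEARS_PER_LIFE = 30                    # inferred life-years per "life saved"

def beta(mu):
    t = 144 + mu * 122                 # equivalent masturbation minutes/year
    return (UTILITY_RATIO - 1) * MINUTES_PER_YEAR / t + 1

def cost_per_life_year(mu):
    # a year spent masturbating forgoes SALARY but gains (beta - 1)
    # extra years' worth of utility
    return SALARY / (beta(mu) - 1)

def cost_per_life(mu):
    return YEARS_PER_LIFE * cost_per_life_year(mu)

for mu in (1, 5, 10):
    print(mu, round(cost_per_life_year(mu)), round(cost_per_life(mu)))
```

Up to rounding, this reproduces the quoted $143.6/$407/$737 per year of life gained and $4308/$12210/$22110 per life saved.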

GiveWell’s standards are possibly the most stringent in existence, meant for identifying the very best charities.  Medical interventions costing up to $30,000 per QALY gained are often considered cost-effective; by this standard, $737 per QALY (μ = 10) is quite efficient.

Recall that this post only intends to estimate orders of magnitude.  A closer analysis may reveal masturbation to be somewhat more or less effective than indicated here – for instance, it may be quite ineffective by GiveWell’s standards.  But it is unlikely to be very ineffective.


The above is, of course, merely an analysis of the average episode of masturbation, and care must be taken when applying it to the marginal episode of masturbation.  EAs who masturbate at a rate typical of their demographic category may encounter strongly diminishing marginal returns if they introduce additional masturbation episodes.  However, given the remarkable cost-effectiveness estimates given here, EAs are strongly encouraged to reflect on whether or not they have reached this limit.  If an EA considering a masturbation episode estimates that it will have an impact on their utility close to that of the average masturbation episode they engage in, they are strongly encouraged to proceed.  This choice is straightforward if there is no tradeoff with other life concerns, but the above analysis indicates that the choice may be utility-maximizing even if traded off against an equivalent time spent making money, when considered in terms of that money’s potential to produce effective outcomes when donated.  For instance, an EA who obtains an income of $80,000/year (the figure used above) for the purposes of earning to give should consider that some of the time spent earning this income could be spent equivalently-or-better on the task of, as it were, “masturbating to give.”

My EA, like Scott’s, is “deontological.”  Don’t do what maximizes “effectiveness” right now.  Do effective things that you would want most other people to do.

We’re never (by definition?) going to have a world where everyone is a high earner, and we shouldn’t have a world where everyone is an ascetic.  But a world where everyone gives 10% of their income could exist and looks pretty good to me.

This removes the crushing guilt, the sense of infinite duty – and also, in the long term, it “wins.”  An ascetic can do a bit more good at the cost of a lifestyle no one wants to copy.  You can get the same extra bit of good by getting some number of people to copy a copy-able lifestyle – and then they get their friends to copy them, and it builds exponentially.

mttheww asked: which charity(ies) do you give to? sorry if this is a weird question; just curious in light of the recent discussion about ea stuff

shlevy:

nostalgebraist:

GiveDirectly, Deworm the World Initiative, and the Bronx Freedom Fund.

Curious, why DWI over AMF?

I honestly don’t remember.  I may have chosen randomly between GiveWell’s top four, or I may have reasoned casually (and perhaps not justifiably) that “malaria stuff is the classic example of EA, so it would be the default and I should choose another cause to counteract that effect.”  If that isn’t a good argument I might switch to AMF.

I do know I chose GiveDirectly because the idea seemed cool, many were skeptical of it despite some promising evidence, and I was interested in contributing to “testing” it.


andrewhickeywriter:

reddragdiva:

tenaciousvoidcycle:

So a lot of shit about EA is showing up on my dash. I have a better post that I’ve been working on, but fuck it, tumblr is the place for shitty, thoughtless, over-emotional posts. so I’m just going to shitpost here.

I completely agree with EA as a practice. If you want to donate to charity, then you should donate to the charity that gives you the best bang for your buck. I agree with the ideas expressed in “Famine, Affluence, and Morality”, where, even if you are not a utilitarian, we can probably say that welfare is important and that you have some obligation to donate to (effective) charities with the obligation becoming greater the more wealth that you have. I would add the addendum that we should have room for our own personal projects, but I fundamentally agree that most people in wealthy Western states are behaving profoundly immorally by not donating enough of their wealth to those in need. States themselves are behaving immorally by not devoting far more of their budget to effective charities.

With that being said, the EA movement centered around the Bay Area makes me really uncomfortable. Apparently, pro-MIRI people are so powerful that you can’t even disinvite them from a conference? A person is part of EA if they think that they are doing the most good, even if they are not? What even is this? MIRI is a horrible organization. Donating to MIRI makes it more likely for the bad thing MIRI is fighting against to actually happen. MIRI being part of EA is like the Make A Wish foundation being part of the movement. Worse, because at least Make A Wish is probably doing some good! That someone just believes that their actions are the most effective in promoting the good seems like a shitty way of judging whether someone should be part of EA. Perhaps we should ask something like “is it reasonable to believe so”. Watch! I believe that murdering seven people at high noon on Feb 29th will bring about Eutopia. Clearly, I am part of the EA movement. The appropriate response should be, your reasoning is shit, you are wrong. That someone believes something is a really poor standard. The question should be more like, do they have good reason to believe it. There does not seem to actually be good reason to believe that MIRI is an effective charity at all.

I am more sympathetic to the argument that pro-MIRI people are so powerful that you can’t just tell them to fuck off. It would harm the movement too much. But, if anything, that seems like an indictment of the movement. As far as I can tell, pro-MIRI people emerged from the LessWrong subculture or through reading HPMOR. Since the goal was to teach people to be rational, it seems clear that the LessWrong subculture and HPMOR have failed miserably because it led to people believing MIRI was an effective organization. The rational parts of the EA movement (AND I ACCEPT THAT SUCH EXIST! I AM NOT MAKING THE CLAIM THAT EVERYONE INVOLVED IN EA IS TERRIBLE! I AM NOT MAKING THE CLAIM THAT EA IS BAD! I AM NOT MAKING THE CLAIM THAT ONLY BAD PEOPLE ARE PART OF EA. I KNOW A LOT OF COOL PEOPLE WHO ARE PART OF EA! I AM MAKING THE CLAIM THAT THERE IS A PROBLEM WITH THE EA MOVEMENT AND IT SHOULD BE FIXED) should try to disassociate from these groups as quickly as possible and start finding other groups to work with. I do not blame the rational parts of EA for putting up with MIRI right now, you have to do what you have to do, but I really hope that they start dissociating themselves from the shitty part of the movement and start trying to reach out to other groups. EA is a good idea and I would hate to see it brought down by the dead weight of the pro-MIRI people (along with other non-rational tendencies in the movement).

and this is obvious to anyone outside watching this and apparently incomprehensible to anyone inside.

honestly, the intellectual parkour and special pleading is a sterling example of what i mean by “i have no faith that people with such a remarkable array of terrible ideas are going to somehow come up with good ones this time around, or not smuggle in their terrible ideas with any good ones they accidentally have.” defending the terrible ideas this hard - and, i must note again, funding miri as ea comes at the literal expense of funding the mosquito nets - strongly suggests the terrible ideas are … the arguers’ actual goal.

altruism that is effective: excellent idea! the effective altruism subculture: perhaps not so much. reifying verbs or adjectives is frequently a mistake.

Yep.

I give to the top GiveWell charities, and give extra to the Against Malaria Foundation – not so much because they’re effective but because I got bitten a lot by mosquitoes a few years back and anything that wants to kill off those little vampire bastards is all right with me. I don’t give the 10% of my income that the EA people talk about, but then I’m also in a proper job that pays normally, not the kind of non-job that pays silly money for being a “futurist” or something.

(I also give to a few non-GiveWell endorsed charities that do things I consider important)

But there is simply no way – none – to justify giving to MIRI on “effectiveness” grounds. They have saved no lives. They have freed no prisoners. They have housed no homeless people, fed no hungry ones, cured no sick ones. Even donkey sanctuaries actually help old donkeys.

The only arguments for donating to them as being effective in any way are precisely the things that Yudkowsky himself identified as “Pascal’s mugging” – they claim a tiny chance that the world will end if you don’t give them money, and that quadrillions will be maximally happy if you do. They provide no evidence to support that claim.

Of course even if you do believe their claim… well, I don’t see a robot God whose goals are set by technolibertarians as being a particularly positive thing, but YMMV there…

I am a little confused about the boundaries here.  How much can one do effective altruism without touching the “Effective Altruism subculture”?  Is GiveWell safe, given that it has hired LWers and its founder has read the vast majority of the Sequences and says he finds very little there to disagree with?  What about GWWC and 80,000 Hours, both founded by frequent LW commenters?

I’m being a bit facetious here; my argument is that all the organizations that provide real help to people doing “good EA” (e.g. AMF) are also connected to the “EA subculture.”  It’s not really consistent to say “no good ideas are going to come out of the EA subculture – oh, by the way, I keep up with GiveWell recommendations.”  (Not that either of you are exactly saying that.  I am just confused what being “EA, but completely distrustful of ‘the subculture’” would look like in practice.  I mean, sure, you can just ignore all of these organizations and figure everything out yourself, but I’m not sure why one would want to go to the trouble of re-deriving “AMF is effective” out of some worry that when GiveWell says “AMF is effective” their words are secretly infested with invisible MIRI-flavored nanites or something.)

(via andrewhickeywriter-deactivated2)

fierceawakening:

Re this conversation

http://nostalgebraist.tumblr.com/post/138996390349/lostpuntinentofalantis-nostalgebraist-that

I think I finally realized what confuses me about EA.

People talk about performing morally right actions based in some kind of rough consequentialist calculus. And say other ethical theories make no sense.

That covers a whole swath of possible things to do. One moral theory, one huge set of all possible human action.

But when they drill down, they seem to USE EA as a REMINDER TO GIVE TO CHARITY, which is a far narrower set of actions to see oneself as required to do.

So the whole “maximize utility even when you sneeze” probably doesn’t practically apply, because the times EA shows up to you are the times you’re budgeting or thinking about whether to splurge on a coffee, not the times you’re considering how to do activism, what jobs to apply for, etc.

Whereas for me, the terrifying thing about what I’m calling the larger concept is… It seems scary for the same reason SJ is, to me, because for me, Doing Good means trying to change policy. Doing Good means effective ACTIVISM.

Because I might be very wrong on this, but to me, offering people money doesn’t make lasting change. All too often, it builds one bridge that people take a nice picture of for the Westerners and then collapses again. Whereas working to change the structure of society to make it juster – fighting discrimination, making enough success to survive a little easier to achieve – changes things in a lasting way.

So for me, “right action” is going to equal “giving to charity” pretty rarely.

And effective activism is not maximizable. Trying gets you cults.

EA is, as I’m sure you know, not generally interested in “effective activism,” and this is one of the most common criticisms it gets.

Part of what I was trying to say in the OP of that thread is that I think you can get most, or all, of EA without needing this sort of “find the best of all possible actions” calculus, which tends to confuse and distress people in practice.

We start out with “I would like to help people in dire straits,” and then we look at the various options for doing this, and some differences become apparent.  For instance, certain kinds of dire straits can only be fixed with a lot of money, while some take less.  For example, when I look at the Bronx Freedom Fund, the average bail they post for someone is $790, and in some cases it has been as low as $250.  I’m not going to give them $790 all at once, but I feel like if I give them a reasonable monthly donation I’m actually making a sizable contribution to actually helping a person.  If that number were $790,000 instead, I would feel differently.

Here, we’ve already gotten a significant part of “effectiveness.”  It’s more appealing to help people out of dire straits if it costs less per person helped, because I want to help people, and that means I can help more people, given the money I’m willing to spend.

The other, more controversial part of effectiveness is ranking types of dire straits against one another – like, if it costs $790 to pay someone’s bail but it (hypothetically) also costs $790 to prevent someone from dying, is the latter preferable?  And one can try to philosophically find the “right answer” here, but one can also just use one’s intuition and emotion, and say, well, all else being equal I care more about dire straits when they’re direr.  (“Which is more dire” comparisons are not always easy, and get into controversial stuff like QALYs, but in many cases the comparison can be made easily and intuitively.)

And in all of this reasoning, I’ve never said “I am a utilitarian and I want to do the best of all possible actions.”  I haven’t worried that “helping people out of dire straits” is somehow good yet not good enough.  I just look into my heart and find that, hey, I’d like to help people out of dire straits sometimes, if I can.

Now, yes, you can say that this doesn’t lead to lasting change – the dire straits will keep happening at the same rate.  In the bail example, say, I really think we need to reform the bail system, not just keep paying people’s bail.  But that doesn’t mean that paying people’s bail is entirely worthless.  (Remember, we are not trying to find the very best action here.)  But often I’d say that helping people out of dire straits does help cause lasting change, to some extent.  Poverty is often a self-reinforcing cycle; people are kept poor in part by all of the shitty things that happen to you only when you’re poor, like not being able to pay your bail, or not being able to get adequate healthcare.

And one thing that EA does is to examine the results of the help they’re doing, from immediate effects to outcomes years later.  The bridge that soon collapses would not be considered effective.  The EA book Doing Good Better starts out with the example of the PlayPump, something that was touted in the media and installed in many villages but ended up not actually being used, and basically being worthless.  EA tries to avoid this kind of outcome.  E.g. GiveWell’s recommendation for the Against Malaria Foundation lists as one of its justifications:

  • Strong processes for ensuring that nets reach their intended recipients and monitoring whether they are used over the long-term.

(See the “Does It Work?” section of this and other GiveWell recs for many more details, including details on the actual life impacts on recipients.)

I think it’s too pessimistic to say that giving money simply never has lasting effects, and EA is interested specifically in identifying the cases in which it does.  It is possible that there are even better things to do than this, but that is always possible.  I am not looking for the best possible action.  I’m just looking to help some people in dire straits (and make sure that I am actually helping them).

lostpuntinentofalantis:

nostalgebraist:

That earlier conversation cleared up my own feelings about EA a bit, and they are as follows:

I give to charity – and to “effective” charities specifically – not because of any abstract argument I have been given, nor out of a feeling of moral guilt, but simply because once it was specifically pointed out to me that I could, it seemed like something I wanted to do.  It is consistent with the values I already have.

I do it because it is something I “should” do, but it isn’t the sort of “should” that comes from some ethical theory bearing down mercilessly on me.  It’s the kind of “should” that makes me, during everyday life, actively want to help people.  Which is an impulse I have – not an infinitely compelling one, not one to which I will sacrifice all comfort and beauty and all other good things, but still a natural impulse.  Not something alien bolted on from the outside.

And the reason I want EA to spread is not that I think I have my hands on the correct ethical theory and want people to follow its dictates, their own values be damned.  My guess is that EA-like giving is – as it was with me – already consistent with many people’s values.  In which case telling them about EA would be less like guilting them into doing good, and more like making them aware that an action, of the sort they like to do, is available to them.

Imagine that it is discovered that whenever anyone says the word “bagel,” there is a 1/1000 chance that some randomly chosen person with cancer will be completely cured.  What would happen?  Certainly, this would precipitate some guilt crises – some people would feel bad about doing anything except saying the word bagel over and over again, and some people would argue that this feeling was right.  But the main reaction, I think, would be joy, and not just for cancer sufferers and those who know them.  People would think: my God, I can help someone so much, just by doing something so easy!  They would say “bagel” again and again with glee – not to the point of destroying their lives, but often, when they get a spare moment.  Such a complicated world, and such a simple way to help people so much!  Bagel!  Such a shining piece of unalloyed good in a very alloyed world!  And what a way to make even our own little lives brighten the world!

We would not be quite so happy if, instead, saying “bagel” merely had a 1/1000 chance of curing a less serious chronic illness – say, Tourette Syndrome (which I have, so I am allowed to say this).  In EA, this natural difference in feelings leads to cause prioritization, and puts the “E” in “EA.”  It is not some strange alien philosophical thing.  It is a normal aspect of our sentiments.


I find myself frustrated both by a lot of the EA movement and by a lot of anti-EA sentiment.  The EA movement tends to focus on the more philosophical, controversial, guilt-inducing aspects of the issue – the equivalent of telling people about the bagel thing by haranguing them about how awful it is to do anything but devote your life to maximal bagel-saying.  Anti-EA writing tends to focus entirely on opposition to those philosophical claims, which is fine in itself but pushes people away from discovering how wonderful a thing it can be to donate to effective charities.  It’s the equivalent of writing an article about these weird people who want you to do nothing all day but say “bagel,” and the problematic moral axioms involved, without ever mentioning the fact that saying “bagel” is magic.

The reason I want the EA movement to exist is that I want more people to discover this new action that is consistent with their values.  I don’t literally mean that people don’t know donation is possible, but in my experience it takes some initial push to make them realize that they could be doing it right now.  Many non-religious people, including me, grow up never thinking about it as a possibility, because no one around them talks about it.  Having a community of people doing it, as in religion, also helps – which is something the EA movement has the potential to do.

But the EA movement is certainly not ideal at selling itself for this purpose.  I wish it had nothing to do with Peter Singer, with his “really we should all just be ascetics” extreme utilitarianism and his ill-informed views about disabled people.  I wish it were less about “really we should all just be ascetics” in general.  Forget about asceticism and just focus on communicating the very simple fact that because of the declining marginal utility of wealth (among other things), you can help other people in marvelous ways at extremely low cost to yourself.  This should be a cause for joy.  Bagel!
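The “declining marginal utility of wealth” point can be made concrete with a standard logarithmic utility model.  This is purely illustrative – the specific dollar figures below are assumptions chosen for the example, not numbers from the post:

```latex
\text{Assume } u(c) = \ln c, \text{ so marginal utility } u'(c) = \frac{1}{c},
\text{ and a small transfer of } d \text{ dollars shifts utility by roughly } \frac{d}{c}.
\qquad
\underbrace{\frac{100}{50{,}000} = 0.002}_{\text{donor's loss (at \$50{,}000/yr)}}
\;\ll\;
\underbrace{\frac{100}{500} = 0.2}_{\text{recipient's gain (at \$500/yr)}}
```

Under this (assumed) model, the same $100 matters about $50{,}000 / 500 = 100$ times more to the recipient than to the donor – which is the “marvelous help at extremely low cost” the paragraph describes.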


Since I only really care about the EA movement as a vehicle for making people aware that they can give and that they probably already want to, arguments over EA often look strange to me.  Little or no attention is paid to whether the critics themselves donate, or whether EA caused them to donate.  Notice, in the article I linked this morning, this astonishing buried lede:

The utilitarian was no longer a theoretical construction to do dialectical battle with; he was knocking at the door armed with pamphlets, asking me to sign away 10 percent of my income (I was happy to oblige) and, in the seminar room, claiming authority over how I was to live (which I respectfully declined to concede to him).

Reading on, it is clear that the author is not on board with the notion of effectiveness.  But it sounds like the EAs got him to donate.

This is a very strange situation: the fact that the author was awakened to a new way of doing good in the world is relegated to a sidenote (literally in parentheses), while he writes many paragraphs about the problems with the people who thus awakened him.

A man came to the door and told me about the bagel trick.  What is the bagel trick, you ask?  Unimportant, although I will note that I now practice it regularly.  What is important is that this man’s philosophy is wrong and you should not listen to him.

What the hell is going on here?


You may have noticed that I don’t actually seem to be a utilitarian.  If I were, I would be much less concerned with the absolute number of people donating and more concerned with the total quantity of donations.  I might be much more interested in strategies like “earning to give,” which are weird and scary in ways that push people away from EA, but which may have the potential to create more donations overall, even when you take that pushing-away into account.

I’m not sure if I’m a utilitarian, but I think a utilitarian could still bear with me here.  What I don’t want is for the EA movement to wither away into a strange, nerdy footnote that leaves most people cold.  What I want is for it to flourish into an overall secular culture of giving that can engage a wide range of people.  (Peter Singer wants this too, although I’m not sure he is effective at achieving it.)  In the short term, yes, a few high-paid “earners-to-give” can numerically outweigh a bunch of well-meaning but lower-earning donors.  In the long run, if we actually create a culture of giving, the number of EAs who just happen to work high-paying jobs will far numerically outweigh current earners-to-give, without anyone even having to adjust their career path.

EA, my EA, is a very normal sort of thing, one with broad appeal, and I hope it will be adopted very widely.  I want a world where ordinary high-paid managers follow GiveWell recommendations because this is a normal thing for people to do.  If we normalize giving – even those of us who can’t give much – the sheer masses of well-off normal people who give will far exceed anything that a small number of utilitarian nerds can do alone.

The obvious conclusion to draw here is that people don’t actually care about doing good in the world, so it’s misguided to even try to normalize a culture of giving. Sure, they like to talk about and conspicuously signal that they care about poor people and oh no the inequality, but…

How does voluntourism even become a thing if people spend even a moment thinking about the costs and benefits? If people applied the same reasoning that they apply to, say, deciding which movie to watch this evening, this institution would not have formed at all.

Witness the actual style of giving: essentially ostentatious, legalized begging on the streets, or the vanity BUY YOUR OWN OVERPRICED VACATION auctions for rich people, or the fact that it’s considered gauche to talk about how much money you donated. The linked article being more about self-actualization than anything else should be a pretty clear signal: NO ONE CARES.

Most people think about charity in terms of what it can do to make them look more impressive; why would you want to mar that with all of this bullshit mathy “effective” stuff that is just going to get in the way of showing off?

I’d like to be proven wrong here; what’s wrong with my view that charity’s primary purpose is a way for middle- and upper-class Westerners to brag?

What seems wrong to me about your view, on the basis of my own experience, is this: statistics say a lot of people already give to charity, yet I almost never hear about it.

I’ve looked around for statistics on what fraction of people give (particularly religious vs. non-religious) and the numbers vary a frustrating amount, but when I was looking around the numbers tended to always be above 50% for Americans, even for non-religious Americans.  (A figure I see a lot is the one given here, which has 65% of religious Americans giving and 56% of non-religious Americans giving.)

And yet I basically never hear anyone talk about the fact that they give to charity, except for EAs.  (In the case of EAs, I don’t think it’s that they’re bragging; it’s that they have a culture of discussing which charities are the best choices.)  With a few exceptions, I have no idea whether anyone I know gives to charity, although statistically I’m sure some of them do.  (Admittedly, I mostly know young people, and older people are much more likely to give, but plenty of young people still give, according to the statistics I’ve seen.)

I agree that voluntourism is silly, but even if voluntourists are being counted as charitable givers, there still just aren’t enough of them to swamp the statistics (“more than 1.6 million” says this article).  Obviously all these stats are rough and I would like to do them more carefully at some point.

Who are all of these people giving to (non-religious) organizations and not talking about it?  I suspect some of them just ran into canvassers (although I don’t have statistics on this); this is not ideal for various reasons, but I think it indicates some interest in helping.  (You can just turn the canvasser down, after all.)  And, as I’ve said, these people generally don’t go on to talk about the donations they’re making.

What this looks like to me is some interest in donation combined with a complete detachment from any sort of culture of giving, so that people are willing to donate but tend to keep quiet about how they’re donating or how they could be doing it better, and tend to get pushed around by the sales pitches of individual charities rather than doing more careful research.

(via lostpuntinentofalantis)