
wirehead-wannabe:

As much as I agree with the assessment that EA needs to be broader, more flexible, and more practical than it is now, I really dislike the idea of “throwing anyone out” or getting rid of the not-geek-not-autism wing of it. I’m fine with creating outreach programs or making GiveWell the respectable low-MIRI face of it, but I really viscerally hate the idea of people taking away spaces where I can be my weird Ravenclaw primary not-geek-not-autism self. Again, competing interests might be such that we should encourage greater division of spaces, but please don’t get rid of the aspects that made it what it is in the first place.

Is EA really supposed to be a “space,” though?  Or is it just meant to be effective?

In the hypothetical situation where EA goes much more mainstream, there is nothing stopping you from being an EA while also being your weird self and hanging out with other similar people.  It’s just that a movement centered around a certain kind of ethical action probably shouldn’t do double duty as a social club for a very particular sort of person.

You can be a vegan without feeling like you fit in in “vegan spaces,” or wanting to; all you have to do is not eat animal products.  If I were an ethical vegan, I’d want to prioritize spreading veganism over preserving an existing vegan subculture, because that subculture can always grow and adapt to center around things that aren’t also ethical action movements.

Ethical action movements are things we want to be as mainstream as possible.

That earlier conversation cleared up my own feelings about EA a bit, and they are as follows:

I give to charity – and to “effective” charities specifically – not because of any abstract argument I have been given, nor out of a feeling of moral guilt, but simply because once it was specifically pointed out to me that I could, it seemed like something I wanted to do.  It is consistent with the values I already have.

I do it because it is something I “should” do, but it isn’t the sort of “should” that comes from some ethical theory bearing mercilessly down on me.  It’s the kind of “should” that makes me, during everyday life, actively want to help people.  Which is an impulse I have – not an infinitely compelling one, not one to which I will sacrifice all comfort and beauty and all other good things, but still a natural impulse.  Not something alien bolted on from the outside.

And the reason I want EA to spread is not that I think I have my hands on the correct ethical theory and want people to follow its dictates, their own values be damned.  My guess is that EA-like giving is – as it was with me – already consistent with many people’s values.  In which case telling them about EA would be less like guilting them into doing good, and more like making them aware that an action, of the sort they like to do, is available to them.

Imagine that it is discovered that whenever anyone says the word “bagel,” there is a 1/1000 chance that some randomly chosen person with cancer will be completely cured.  What would happen?  Certainly, this would precipitate some guilt crises – some people would feel bad about doing anything except saying the word bagel over and over again, and some people would argue that this feeling was right.  But the main reaction, I think, would be joy, and not just for cancer sufferers and those who know them.  People would think: my God, I can help someone so much, just by doing something so easy!  They would say “bagel” again and again with glee – not to the point of destroying their lives, but often, when they get a spare moment.  Such a complicated world, and such a simple way to help people so much!  Bagel!  Such a shining piece of unalloyed good in a very alloyed world!  And what a way to make even our own little lives brighten the world!
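(For concreteness, here is a back-of-envelope sketch of the thought experiment’s arithmetic, assuming each utterance is an independent 1/1000 chance – all numbers are the premise’s, nothing more:)

```python
# Each utterance of "bagel" has an independent 1/1000 chance of
# curing one randomly chosen cancer patient (the premise above).
p_cure = 1 / 1000

# Expected cures from n utterances, by linearity of expectation:
def expected_cures(n):
    return n * p_cure

print(expected_cures(1000))      # 1.0 -- about one cure per thousand utterances

# Probability of at least one cure in 1000 utterances:
print(1 - (1 - p_cure) ** 1000)  # ~0.632
```

So the spare-moment chatter really does add up: a few thousand utterances is a few expected cures.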

We would not be quite so happy if, instead, saying “bagel” merely had a 1/1000 chance of curing a less serious chronic illness – say, Tourette Syndrome (which I have, so I am allowed to say this).  In EA, this natural difference in feelings leads to cause prioritization, and puts the “E” in “EA.”  It is not some strange alien philosophical thing.  It is a normal aspect of our sentiments.


I find myself frustrated both by a lot of the EA movement and by a lot of anti-EA sentiment.  The EA movement tends to focus on the more philosophical, controversial, guilt-inducing aspects of the issue – the equivalent of telling people about the bagel thing by haranguing them about how awful it is to do anything but devote your life to maximal bagel-saying.  Anti-EA writing tends to focus entirely on opposition to those philosophical claims, which is fine in itself but pushes people away from discovering how wonderful a thing it can be to donate to effective charities.  It’s the equivalent of writing an article about these weird people who want you to do nothing all day but say “bagel,” and the problematic moral axioms involved, without ever mentioning the fact that saying “bagel” is magic.

The reason I want the EA movement to exist is that I want more people to discover this new action that is consistent with their values.  I don’t literally mean that people don’t know donation is possible, but in my experience it takes some initial push to make them realize that they could be doing it right now.  Many non-religious people, including me, grow up never thinking about it as a possibility, because no one around them talks about it.  Having a community of people doing it, as in religion, also helps – which is something the EA movement has the potential to do.

But the EA movement is certainly not ideal at selling itself for this purpose.  I wish it had nothing to do with Peter Singer, with his “really we should all just be ascetics” extreme utilitarianism and his ill-informed views about disabled people.  I wish it were less about “really we should all just be ascetics” in general.  Forget about asceticism and just focus on communicating the very simple fact that because of the declining marginal utility of wealth (among other things), you can help other people in marvelous ways at extremely low cost to yourself.  This should be a cause for joy.  Bagel!


Since I only really care about the EA movement as a vehicle for making people aware that they can give and that they probably already want to, arguments over EA often look strange to me.  Little or no attention is paid to whether the critics themselves donate, or whether EA caused them to donate.  Notice, in the article I linked this morning, this astonishing buried lede:

The utilitarian was no longer a theoretical construction to do dialectical battle with; he was knocking at the door armed with pamphlets, asking me to sign away 10 percent of my income (I was happy to oblige) and, in the seminar room, claiming authority over how I was to live (which I respectfully declined to concede to him).

Reading on, it is clear that the author is not on board with the notion of effectiveness.  But it sounds like the EAs got him to donate.

This is a very strange situation: the fact that the author was awakened to a new way of doing good in the world is relegated to a sidenote (literally in parentheses), while he writes many paragraphs about the problems with the people who thus awakened him.

A man came to the door and told me about the bagel trick.  What is the bagel trick, you ask?  Unimportant, although I will note that I now practice it regularly.  What is important is that this man’s philosophy is wrong and you should not listen to him.

What the hell is going on here?


You may have noticed that I don’t actually seem to be a utilitarian.  If I were, I would be much less concerned with the absolute number of people donating and more concerned with the quantity of donations.  I might be much more interested in strategies like “earning to give,” which are weird and scary in ways that push people away from EA, but which may have the potential to create more donations, overall, even when you take that pushing-away into account.

I’m not sure if I’m a utilitarian, but I think a utilitarian could still bear with me here.  What I don’t want is for the EA movement to wither away into a strange, nerdy footnote that leaves most people cold.  What I want is for it to flourish into an overall secular culture of giving that can engage a wide range of people.  (Peter Singer wants this too, although I’m not sure he is effective at achieving it.)  In the short term, yes, a few high-paid “earners-to-give” can numerically outweigh a bunch of well-meaning but lower-earning donors.  In the long run, if we actually create a culture of giving, the number of EAs who just happen to work high-paying jobs will far numerically outweigh current earners-to-give, without anyone even having to adjust their career path.

EA, my EA, is a very normal sort of thing, one with broad appeal, and I hope it will be adopted very widely.  I want a world where ordinary high-paid managers follow GiveWell recommendations because this is a normal thing for people to do.  If we normalize giving – even those of us who can’t give much – the sheer masses of well-off normal people who give will far exceed anything that a small number of utilitarian nerds can do alone.

indiveren:

ozymandias271:

nostalgebraist:

waystatus:

nostalgebraist:

marcusseldon:

nostalgebraist:

Chin-stroking anti-EA article: “but this ignores that morality is not just some bullying voice from the heavens, it is part of our own quest to self-actualize as the unique human beings with particular cares and bonds we are”

Me: “yes, and it’s a lot harder to self-actualize when you have malaria, and sometimes one of your particular cares is helping out other people with their own quests”

The chin-stroking person is right: if you’re a real utilitarian then there is no room to self-actualize and have particularistic attachments and be a morally good person. The only good act under utilitarianism is that which maximizes utility. Self-actualizing, having human relationships, etc. instead of working a second job to donate more money to AMF is bad under utilitarianism. Under utilitarianism, there are no good people alive and never have been. This is not uncharitable, this is actually what utilitarianism is.

Which is why I believe there is no such thing as a sincere utilitarian, not even Peter Singer, who spends/spent a great deal of money for elderly care for his mother instead of donating it to charity.

Note, though, that I’m talking about utilitarianism (which EA endorses as far as I’m aware), not consequentialism more broadly. Some EAs are not utilitarians, but most seem to identify that way.

This isn’t utilitarianism as I understand it.  For one thing, the act that maximizes utility is the best act, not the only good act.  It would be the only good act if we define “good act” as “the best one possible under the circumstances,” but this seems counter-productive, for exactly the reasons you give, and in practice I don’t see utilitarians make this claim very often (if ever?).

But, and more importantly, we live under various constraints: living a life of constant toil without particularistic attachments is extremely stressful and unpleasant in a way that will lower our cognitive abilities, make it harder to do any given job, and make it harder to morally evaluate the further tradeoffs we will face.  (It’s a common experience to be involved in a situation that is obviously awful, perhaps morally, in retrospect, but in which one was so over-worked or sleep-deprived that this wasn’t clear in the moment.)

There are some particular examples in which this becomes obvious: even a utilitarian moral saint is not going to take an extra night job if it would mean they literally have no time to sleep, ever.  Less clear-cut but more important is the fact that most people (I think) find it easier to be productive workers, and to make moral tradeoffs, when they have fulfilling human contacts and various other “particulars.”

But EA only makes sense if you want to do the best thing and not merely a good thing. Why spend all that money on malaria when you can also do some amount of good by giving much less money to homeless people nearby? If you’re happy as long as you’re doing any good thing there’s no particular reason to care about helping people in the most effective manner.

But if you do care about doing the best thing, or even the best possible thing, then that does imply that instead of caring about your own relationships you should donate more money to malaria. You should, specifically, do things to stop malaria until the point where doing those things causes you to stop doing things to stop malaria. Where this point is varies from person to person, but I’m quite sure most EAs aren’t at it.

My own motivations for EA aren’t these ones, and in general I think these kinds of arguments for EA create needless guilt and confusion without leading to more effective action (and probably leading to less).  This isn’t to say that the EA movement doesn’t make this argument – it does, sometimes – but that there is a version of EA, espoused by some actual EAs, that doesn’t need it.

My version of EA is more like this: one of the things I care about is helping people through unusually terrible circumstances, like getting malaria or not being able to pay bail on a misdemeanor when they would show up for court anyway.  This isn’t the only thing I care about, and even if it were, I wouldn’t care about it to the exclusion of my mental health etc. because that’s self-defeating (cf. the extreme example of taking on enough jobs that you have no time to ever sleep).

But within my pre-existing set of cares, cause prioritization already makes sense, without needing some particular fetish for doing the absolute best thing.  All I need is to care more about helping people when they are in more dire straits, which I already do.  If I’m setting aside some fraction of my budget for helping people in dire straits, I’d prefer to use it to help people in as dire straits as possible.

This is not some strange philosophical add-on to my emotions/intuitions, but the same sort of thinking I use everywhere else in my life – for instance, if one of my friends is going through a tough breakup and needs a shoulder to cry on, and another one is doing a home improvement project they can do themselves but would be easier with a helping hand, I’m going to spend my time with the former and not with the latter (who will probably understand and agree, if I explain the situation).

One could propose that the “logical conclusion” of this is to spend all my time helping people in the direst straits possible, such that I should ignore both friends if I could do something else with the time that would help people with malaria (by some “equal or greater amount”).  But that, unlike any of the above, isn’t emotionally natural for me at all, which is a real difference.

A possible analogy: consider a writer who wants to write as many of the best novels possible. She probably puts less effort into her writing than she could; she isn’t at the point where any more time spent writing would make her books worse. But it is silly to object to her “you went out to dinner with your husband instead of putting in three more hours researching Icelandic sheep herding! You might as well just write Dan Brown books.”

In pretty much any area of life, we accept that it is possible to (a) care about quality and (b) not put in the absolute theoretical maximum amount of effort you could to the exclusion of everything else you care about. This is also true of EA.

Furthermore, donating to AMF is an extremely cheap way of fulfilling my values, as opposed to giving to nearby homeless people. $3000 is enough to support a homeless person for about a hundred days, but it purchases more than fifty years of life if used for malaria nets. So donating the money to AMF instead of homeless people is equivalent to raising your donations from $1000 to $150,000 – and significantly less painful.
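(Checking the arithmetic in the figures above – these are the post’s own rough estimates, not precise cost-effectiveness data:)

```python
# The post's rough numbers: $3000 buys ~100 days of support for a
# homeless person, or ~50 years of healthy life via malaria nets.
donation = 3000
homeless_days = 100
amf_years = 50
amf_days = amf_years * 365        # 18250 person-days

ratio = amf_days / homeless_days  # how many times further each dollar goes
print(ratio)                      # 182.5

# By this measure, $1000 to AMF buys as many person-days of benefit as
# this much given to local homeless support:
print(1000 * ratio)               # 182500.0 -- same ballpark as the $150,000 above
```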

Ok, sorry, but I want to pick on some details to clear something up: the first post talks about a “chin-stroking anti-EA” person, and then the second person immediately jumps from EA to utilitarianism as the term under discussion. From that point on, as far as I can see, EA and utilitarianism are used interchangeably throughout.  Does this mean you can’t take utilitarianism as a moral stance unless you are also actively an EA?  … Also, if utilitarianism really is supposed to be about actively doing the best act you can do, then what is the proper term for someone who simply takes the moral ground that making people happy is morally good, causing other people pain is morally bad, etc.?

I see what you mean here.  I think the equivocation is being made here because the particular reason the article gave for rejecting EA – as at least hinted at in my caricature in the OP – involved rejecting utilitarianism.  It isn’t “you can’t be a utilitarian without being an EA,” but “people tell me to be an EA using utilitarian arguments, but what if I’m not a utilitarian?”

For full clarity, the article was this one

(via devils-advocate-indiveren-deact)
