nostalgebraist:

Chin-stroking anti-EA article: “but this ignores that morality is not just some bullying voice from the heavens, it is part of our own quest to self-actualize as the unique human beings with particular cares and bonds we are”

Me: “yes, and it’s a lot harder to self-actualize when you have malaria, and sometimes one of your particular cares is helping out other people with their own quests”

marcusseldon:

The chin-stroking person is right: if you’re a real utilitarian then there is no room to self-actualize and have particularistic attachments and still be a morally good person. The only good act under utilitarianism is that which maximizes utility. Self-actualizing, having human relationships, etc. instead of working a second job to donate more money to AMF is bad under utilitarianism. Under utilitarianism, there are no good people alive and never have been. This is not uncharitable; this is actually what utilitarianism is.

Which is why I believe there is no such thing as a sincere utilitarian, not even Peter Singer, who spent a great deal of money on elderly care for his mother instead of donating it to charity.

Note, though, that I’m talking about utilitarianism (which EA endorses as far as I’m aware), not consequentialism more broadly. Some EAs are not utilitarians, but most seem to identify that way.

nostalgebraist:

This isn’t utilitarianism as I understand it.  For one thing, the act that maximizes utility is the best act, not the only good act.  It would be the only good act if we define “good act” as “the best one possible under the circumstances,” but this seems counter-productive, for exactly the reasons you give, and in practice I don’t see utilitarians make this claim very often (if ever?).

But, more importantly, we live under various constraints: living a life of constant toil without particularistic attachments is extremely stressful and unpleasant in a way that will lower our cognitive abilities, make it harder to do any given job, and make it harder to morally evaluate the further tradeoffs we will face.  (It’s a common experience to be involved in a situation that was, in retrospect, obviously awful, perhaps morally so, but in which one was so overworked or sleep-deprived that this wasn’t clear in the moment.)

There are some particular examples in which this becomes obvious: even a utilitarian moral saint is not going to take an extra night job if it would mean they literally have no time to sleep, ever.  Less clear-cut but more important is the fact that most people (I think) find it easier to be productive workers, and to make moral tradeoffs, when they have fulfilling human contact and various other “particulars.”

waystatus:

But EA only makes sense if you want to do the best thing, not merely a good thing. Why spend all that money on malaria when you can also do some amount of good by giving much less money to homeless people nearby? If you’re happy as long as you’re doing any good thing, there’s no particular reason to care about helping people in the most effective manner.

But if you do care about doing the best thing, or even the best possible thing, then that does imply that instead of caring about your own relationships you should donate more money to fighting malaria. You should, specifically, do things to stop malaria up to the point where doing those things causes you to stop doing things to stop malaria. Where this point lies is different for different people, but I’m quite sure most EAs aren’t at it.

nostalgebraist:

My own motivations for EA aren’t these, and in general I think these kinds of arguments for EA create needless guilt and confusion without leading to more effective action (and probably leading to less).  This isn’t to say that the EA movement doesn’t make this argument – it does, sometimes – but that there is a version of EA, espoused by some actual EAs, that doesn’t need it.

My version of EA is more like this: one of the things I care about is helping people through unusually terrible circumstances, like getting malaria or not being able to pay bail on a misdemeanor when they would show up for court anyway.  This isn’t the only thing I care about, and even if it were, I wouldn’t care about it to the exclusion of my mental health etc. because that’s self-defeating (cf. the extreme example of taking on enough jobs that you have no time to ever sleep).

But within my pre-existing set of cares, cause prioritization already makes sense, without needing some particular fetish for doing the absolute best thing.  All I need is to care more about helping people when they are in more dire straits, which I already do.  If I’m setting aside some fraction of my budget for helping people in dire straits, I’d prefer to use it to help people in as dire straits as possible.

This is not some strange philosophical add-on to my emotions/intuitions, but the same sort of thinking I use everywhere else in my life – for instance, if one of my friends is going through a tough breakup and needs a shoulder to cry on, and another one is doing a home improvement project they could do themselves but that would be easier with a helping hand, I’m going to spend my time with the former and not with the latter (who will probably understand and agree, if I explain the situation).

One could propose that the “logical conclusion” of this is to spend all my time helping people in the direst straits possible, such that I should ignore both friends if I could do something else with the time that would help people with malaria (by some “equal or greater amount”).  But that, unlike any of the above, isn’t emotionally natural for me at all, which is a real difference.

(via waystatus)

  1. nocturnaltherapist reblogged this from th4nkyoub3n and added:
    I have no clue what EA means I thought you were talking about the publishing company.
  2. th4nkyoub3n reblogged this from maxknightley
  3. dataandphilosophy reblogged this from robertskmiles and added:
    I don’t know a name for that position, but I suspect that someone has come up with a name before. I’ll ask.
  4. jadagul reblogged this from marcusseldon and added:
    That’s what I’d thought but Wikipedia, IEP, and SEP all describe it as a theory of the good first.
  5. marcusseldon reblogged this from jadagul and added:
    Yeah, this conflation happens. It’s the difference between axiology (theory of goodness) and normative ethical theory....
  6. robertskmiles reblogged this from dataandphilosophy and added:
    I think my position places the threshold at zero, is there a name for that? Like, it’s morally obligatory for your life...
  7. ghostofasecretary reblogged this from dataandphilosophy
  8. eccentric-opinion reblogged this from nostalgebraist and added:
    Utilitarianism ranks actions from best to worst, but holds that the right act (i.e. the one you ought to do) is the best...
  9. moralitybog reblogged this from nostalgebraist and added:
    For the homeless population and preventing deaths I think a more apt comparison would be the cost of warming shelters in...
  10. epistemic-horror reblogged this from nostalgebraist and added:
    (the answer of course is to STOP USING UTILITARIAN ARGUMENTS AND NOTHING ELSE. you can get to EA from such weaksauce...
  11. nostalgebraist posted this