fierceawakening:

oligopsonoia:

fierceawakening:

funereal-disease:

Something I’ve been thinking about today: no amount of declaring a joke “not funny” for political reasons will make it actually cease to be funny. 

Humor is a really primal thing. You can have the best, most thoughtful politics in the world and still find your funny bone tickled by horribly offensive shit. That doesn’t make you problematic. It makes you a human being with human neurology, which means what trips your laugh wire is pretty damn arbitrary and often not within your control.

Have you ever tried telling someone who’s losing it at an inappropriate time to shut up and stop laughing? It doesn’t work. That’s the human brain for you. You don’t have to enlighten yourself out of basic physical responses. 

This is exactly why the whole SJ emphasis on “stop finding *ist jokes funny!” baffled me even when I was a feminist.

I get that SOMETIMES a person’s sense of humor can reveal that they are bigoted, but I’m baffled by the assumption we can tell that about most people by what they find funny.

This model of the emotions (that is, yours and Skye's) where emotional responses are a purely independent variable and not the proper subject of moral investigation had a lot of appeal for me for a long time, but as I’ve worked with some CBT/DBT and engaged with some alternative takes on the topic (you might want to take a look at Nussbaum’s recent work on anger, for instance), I’ve become convinced that it’s analytically inadequate and frequently counterproductive.

Emotions aren’t purely given experiences; our split-second reactions, rather, are both the beginnings of processes (which we can react to in a number of ways) and responsive to long-term habits, our implicit awareness of what is and isn’t socially acceptable, etc. Reactions like anger, humor, sexual arousal, and much else can be indulged in and given focus or not, and the routes we take there will affect our future dispositions. The point here is not that this is purely voluntary - it is not, as indeed most actions aren’t - but rather that the longer-term formation of habits is responsive to social sanction (or encouragement.)

(There are some further comments to be made on what the point of suppressing inappropriate jokes and so on is, but I don’t have the energy to cover the inferential distance at this moment. Suffice it to say that I think our ordinary moral language is too laced with metaethical individualism to be of much use without a lot of unpacking.)

This is interesting to me, because unless I’m misunderstanding you, I’ve actually found the exact opposite, that the sort of approach you are advocating has been the unuseful one. My feministy SJ days coincided with me being in grad school, and I was taking philosophy. So a fair few of the courses I was in talked about things like this, though you’ve probably read more Nussbaum than I, and it certainly sounds like more recently.

Because what I’ve found is that digging through your emotions and assuming they have political antecedents is not only a waste of time but self-damaging as well. It creates a kind of doubt which isn’t the freeing kind of doubt brought about by free inquiry, but the kind of destabilizing doubt closer to gaslighting:

These are my perceptions, but my theory tells me they are wrong. Therefore I cannot trust my perceptions, and must rely on others I deem more “woke” to perceive the world for me.

What I do think is valid is to think about whether your EXPRESSION of your feelings will hurt others. If you’re from a sheltered little town and everyone tells x jokes and no one is an x… then telling an x joke in front of one and seeing her cry should be a learning experience, not a time for defensiveness.

But “why did you find x jokes funny? Why, why, why?”

Because it seemed silly and cute and you’d never seen it harm anyone, so you didn’t know any better. Fucking duh.

In my experience (I mean, inside my own head), this kind of shift can happen much more naturally because experiencing something as “funny” and experiencing it as “sad/horrible” are largely mutually exclusive.  So if my environment makes me have a strong “that’s sad/horrible” response, I’m automatically going to have less of a “that’s funny” response.

By “experiencing as sad/horrible” I don’t just mean being aware that some particular bit of dark humor is about dark stuff.  I’m always aware of that.  It’s more like … okay, so I have certain writers/bloggers that I like to hateread because it makes me laugh.  They’re always people I disagree with on a lot of stuff but also usually people I find bombastic or otherwise ludicrous, so that reading them gives me this “can you believe this guy” humor experience.

But then, sometimes, they say something about their bad personal circumstances that makes me feel sorry for them – and perhaps guilty about laughing at stuff they wrote in bad times – and then I have to stop because I’m no longer laughing.  Or they go from saying things I merely disagree with to saying things that horrify me.  Sometimes Esther and I will hateread together, and there will be these moments, when we both start feeling sympathy for the blogger or worrying about whether they abuse their spouse or w/e, and that moment where it “becomes sad” is also, always, the moment where it “stops being funny.”

People will sometimes say “I’m laughing but I feel guilty for laughing,” and stuff like that, and I feel like that sometimes, but usually it’s at the exact transition point where I cross from one side of this gap to the other, and afterwards I’m just not laughing.  (Although, for whatever reason, the two feelings can easily coexist when I’m thinking about my own situation, just not about the situations of other people, real or fictional or abstract)

And really I think humor is always unstable in this way – or at least if it’s about anything with strong emotional associations.  There are some black comedies, say, that are entirely about really bad things happening to people, and I usually can’t enjoy them because I keep feeling sorry for the characters or thinking “what if that were me.”  (The book The Road to Wellville is one example.)  Or, on the “horrible” rather than “sad” side, there’s how I feel about Donald Trump: the things that are funny about him tend to also be the things that are scary about him, and thinking about Trump is like looking at a Necker cube, seeing one side, then suddenly the other, but never both at once.

A lot of humor is about stuff that you couldn’t possibly laugh about if it was happening to someone right in front of you.

So, sometimes I have stopped finding certain “offensive jokes” funny, not by suppressing my natural inclinations, but just because my environment caused me to think more frequently and vividly about the subject matter or cultural background of the joke.  So that when I hear it, my immediate, unexamined experience contains more “that’s horrible” and, consequently, less “that’s funny.”

(via almostcoralchaos)

napoleonchingon:

nostalgebraist:

baroquespiral:

wirehead-wannabe:

adzolotl:

argumate:

argumate:

fierceawakening:

the fact that bring me to life is a mocking meme really rustles my jimmies because i actually unironically like that song

so every time i see that meme, i feel like the entire internet is telling me i have poor taste worthy only of scorn

I WILL DEFEND OVERLY DRAMATIC SONGS TO THE DEATH IF I MUST

but yeah there are multiple ways of appreciating things

one can recognise the absurdity while still sincerely enjoying it

see also: sex

i only have sex ironically

Fucking hipsters

see also: life

imo New Sincerity should reverse into ideology-critique, though, not the other way around

Level 1 is recognizing the absurdity of sex while still sincerely enjoying it.  Level 2 is asking, wait, what makes it absurd again? what does that mean? and realizing the perception of “absurdity” is just an interpretation of its distance from a) a false ideal image of sex that serves patriarchy and consumerism b) norms of seriousness and dignity grounded in centuries of sexually repressed culture.  “Recognizing the absurdity of sex while still sincerely enjoying it” would be an absurd statement to societies that sacralized it in a depth of physical detail our most “sex-positive” media are still afraid to broach without demure veils of irony

Level 1 is defending overly dramatic songs in spite of their absurdity; Level 2 is realizing “overly dramatic”, as applied to Evanescence and not over half of P4k indieprog for the past decade, just means “too enjoyable for the Protestant ethos”

I really like this, although I think I’m usually on some other (horizontally adjacent rather than higher/lower?) “level” where I think that “absurdity” and “lack of dignity/seriousness” are an important part of life itself and for that very reason there are good and bad ways to do them in art – ways that enhance the sense of synching up with the whole of life as it is lived vs. ways that detract from that sense

“Serious” as in “deeply moving art” conflicts with “serious” as in “dignified and mature” because deeply moving experiences IRL are rarely “dignified and mature” through and through – but then, if we’re using “deeply moving experiences IRL” as our benchmark, that still is a benchmark and one can fail to meet it in various ways, so it isn’t just “anything goes”

But even if there’s a wrong way to be undignified, being undignified in the wrong way isn’t a dealbreaker just in itself, the thing can still be enjoyable overall – which for me is the territory called “one can recognize the absurdity while still sincerely enjoying it”

(I really like Bring Me To Life btw)

Overly dramatic music and sex (at least sex in non-religious-people tumblr culture) is actually a pretty good analogy, because any negatives with both of those are entirely about shame rather than guilt. Like it’s not expected that you’d listen to “Bring Me to Life” and feel bad about listening to it. In fact the mental image of someone who does that is itself comical. Instead, the societal expectation is that you should be ashamed of going on about how much you like “Bring Me to Life” publicly.

So I think analogizing this with the Protestant work ethic or with old-school sexual taboos as @baroquespiral did is an “off-by-one-metalevel” error. The message of mocking Bring Me To Life is not in the content (“dramatic music is bad”), but in letting you know “there are publicly acceptable opinions about things, and even if they’re wrong, you should be wary of contradicting them in full view”.

(full disclosure: I hate pretty much anything I’ve ever heard by Evanescence)

Ahhh yeah good stuff here

Sorry to get confessional out of nowhere but I think it’s an example that draws the distinction very clearly: I spent a lot of evenings in late 2010 crying while listening to this song, because I wanted that kind of deep emotional connection and specifically familial, “adult” kind of connection and felt (“knew”) I’d never have it bc I was too much of an eccentric outsider to ever be in a traditional familial role like that, etc. etc.

And the thing is, it’s not actually an “overly dramatic” song (it’s dramatic, but so’s the subject matter, inherently), or a bad song at all.  It’s just not “good” music in the sense of musical or lyrical innovation.  It’s completely transparent and it’s the kind of song you would expect about the thing it’s obviously about.  Hence why I might conceivably feel embarrassed (though I don’t) mentioning how much I like it – it’s not that people think liking this kind of music is bad (although some may), but that professing to like it is just sort of … boring? not bringing much to the table?  I don’t really know how to put it.

(Like, compare it to this, where there’s a certain nerd-hipster appeal to detailing why the music resonates with me – that’s “””interesting,””” in a way that talking about “Wonderland” isn’t; it’s the kind of content that might get you reading my blog in the first place, as opposed to the kind of content you care about because you already know me)

(via sungodsevenoclock)

plain-dealing-villain:

nostalgebraist:

plain-dealing-villain:

nostalgebraist:

Posting to say that I should stay out of the latest Bayes epistemology exchange bc I have nothing new to say about it, but also to link “Solvitur ambulando” again and reiterate that I don’t understand assigning/updating probabilities when you only have an incomplete, time-dependent picture of the sigma-algebra

It’s been looking increasingly likely that Thompson sampling is actually the correct approach to approximate for bounded agents. But it’s still Bayes in the limit, and doesn’t throw out terribly different conclusions.

I don’t understand how Thompson sampling helps here.  It still has a known and fixed sigma-algebra, just one with more dimensions.  Do you have a link to a resource on this?

No, but it keeps coming up that in situations where Bayesian induction is optimal for unbounded agents, bounded agents get better answers faster by using the Thompson sampling approach. (The two main ones I was thinking of are this paper on the grain of truth from MIRI, and the standard wisdom about running multi-armed bandit problems in A/B testing, where the Thompson approach is asymptotically the same but has much better constants.)

There hasn’t been anyone, AFAIK, who’s set out to show that Thompson variants of Bayesian inference are optimal for bounded agents in the same way that Bayesian inference is optimal for unbounded ones. But I’m increasingly expecting that it’s true, and is just waiting for us to discover the proof. This proof would probably deal with your issues, assuming it exists to be discovered.

I don’t think this is addressing my issue?  I’m not talking about whether or not Thompson sampling is good if you can do it, I’m talking about whether you can do it.

In a Bayesian approach to the multi-armed bandit, you know the probability space you are assigning probabilities on (each outcome consists of an expected payout value for all the bandits at once, and each event is just some set of those outcomes, and usually the bandit payouts are known to be independent so the event space is really simple).  The MIRI paper is quite complicated but seems to assume that the probability space is known (otherwise you couldn’t do Thompson sampling).
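For concreteness, the Thompson approach under discussion can be sketched for the simplest case where the probability space *is* fully known: a Bernoulli multi-armed bandit with independent Beta priors. Everything here is illustrative (it is not the MIRI paper's construction), just the textbook algorithm: sample one plausible payout rate per arm from its posterior, play the arm whose sample wins, update.

```python
import random

def thompson_bernoulli(true_probs, n_rounds, seed=0):
    """Thompson sampling on a Bernoulli bandit with Beta(1, 1) priors.

    true_probs: hidden payout probability of each arm (for simulation only).
    Returns (total_reward, alpha, beta) where alpha[i]-1 / beta[i]-1 count
    the observed successes / failures on arm i.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k  # Beta posterior parameters: successes + 1
    beta = [1] * k   # failures + 1
    total_reward = 0
    for _ in range(n_rounds):
        # One posterior draw per arm; play the arm whose draw is highest.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward, alpha, beta
```

The point of the objection stands out in the sketch: the algorithm presupposes a fixed, enumerated set of arms with a prior over each. It says nothing about what to do when you don't yet know what the arms (or the event space) are.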

I don’t understand what you mean by “bounded” here.  The general sort of problem I am referring to is this: “you are trying to assign probabilities to individual events, but you don’t have complete information about the event space they belong to.”

(via jiskblr)

bpd-anon:

jumpingjacktrash:

roachpatrol:

ceruleancynic:

pinkrocksugar:

coelasquid:

writerlyn:

The most important writing lesson I ever learned was not in a screenwriting class, but a fiction class.

This was senior year of college.  Most of us had already been accepted into grad school of some sort. We felt powerful, we felt talented, and most of all, we felt artistic.

It was the advanced fiction workshop, and we did an entire round of workshops with everyone’s best stories, their most advanced work, their most polished pieces. It was very technical and, most of all, very artistic.

IE: They were boring pieces of pretentious crap.

Now the teacher was either a genius OR was tired of our shit, and decided to give us a challenge.  Flash fiction, he said. Write something as quickly as possible.  Make it stupid.  Make it not mean a thing, just be a quick little blast of words. 

And, of course, we all got stupid.  Little one- and two-page pieces of prose without the barrier that they must be good. Little flashes of characters, little bits of scenarios.

And they were electric.  All of them. So interesting, so vivid, not held back by the need to write important things or artistic things. 

One sticks in my mind even today.  The guy’s original piece was a thinky, thoughtful piece relating the breaking up of threesomes to volcanoes and uncontrolled eruptions that was just annoying to read. But his flash fiction was this three-page bit about a homeless man who stole a truck full of Coca-Cola and had to bribe people to drink the soda so he could return the cans to recycling so he could afford one night with the prostitute he loved.

It was funny, it was heartfelt, and it was so, so, so well written.

And just that one little bit of advice, the write something short and stupid, changed a ton of people’s writing styles for the better.

It was amazing. So go.  Go write something small.  Go write something that’s not artistic.  Go write something stupid. Go have fun.

The most useful piece of advice I got from my college profs when we were making our 4th year films was “People remember you if you make them laugh”.

All the creativity was ironed out of me in college. None of my writing profs would accept “genre” fiction. So when you’re told not to write genre (thus not about anything that interests you), and to write what you know, but you’re only like 23 and you don’t know shit, what’s left to write about? Like… Divorce? Maybe dogs? Idk. I just think excluding genre fiction when, largely, that’s what people like to read, is pretentious and snooty.

ALL OF THIS. 

I spent five years doing an MFA in which I was made to write litfic. I am not a litfic author. I have never been, and will never be, a litfic author, and trying to find some way of writing interesting shit that didn’t have any magic or spaceships or sardonic monsters in it was like pulling teeth. 

(I cannot read litfic either. It’s like wading through treacle to reach a prize I didn’t actually want in the first place. I know that a lot of people genuinely enjoy books and stories about ordinary people doing ordinary things in the real world, and I am just not one of them: I live in that world, I want to get out of it as often as I can.)

The idea that you must write only The Serious Fiction For Serious People Who Like To Be Serious About Things A Lot is poisonous. The idea that you must squash or extract or leach out all the things from your writing that you actually enjoy, in order to be taken seriously, is poisonous. 

Write what you love. Write what you want to write. If you’re in a program where you have to write stuff you do not love, then do it with the awareness that soon enough you will be freed from that obligation. And don’t stop.

That’s the most important piece of writing advice I was ever given, by a Newbery Award winner, when I was about ten or eleven years old: don’t stop

the Serious People With Serious Tastes aversion to genre fiction continues to baffle and horrify me. why would you take something as incredible as the human capacity to make meaning out of things and then strip out all the horror and magic and wonder? we live in an incredible age full of incredible things like nuclear bombs and robots on mars and computers in our pockets and zombie survival schools and renaissance faires and it’s so, so easy from here to look in any direction of the past or future or sideways and get jazzed as hell about what could happen. or could have happened. and we’re somehow, if we want to be taken seriously by assholes, not supposed to care about any of that. 

but like! wow! fuck that! fuck it. write cowboys on the moon. cyberpunk technocrat court intrigue. vampires falling out on different sides of the american civil war. a murder mystery with dragons. naiad vs dryad gang wars in tokyo. a torrid romance among dinosaurs on the moon. if anyone ever sat me down and told me to cut it out with the genre shlock, i’d headbutt ‘em. 

anyone who is against speculative fiction is against speculation. the vast majority of so-called litfic is simply daytime television tearjerkers with the happy ending removed, dressed up in semi-unreadable Beautiful Prose. it is masturbatory and dull. the exceptions are, for the most part, about or concurrent with great events or changes in society – wars, technological upheavals, and the like. places in history where people didn’t quite grok what the hell was happening, and told stories about it to try to figure it out. in other words, speculative fiction – about the real world.

i mean, have you actually read ‘moby dick’? it’s half high seas soap opera and half nerding out about whales, and ishmael/queequeg is canon. it’s weird and bloody and gross and sublime and shit does not stop from happening ever. the greatest fiction is great because it tells a hell of a story.

if you take the storytelling out of writing, you’re left with an academic exercise that will bore readers even more than it bored the writer.

I know there are people out there who enjoy litfic, but it’s so hard for me to wrap my head around that fact. Litfic is such utter sewage.

There’s a difference between “litfic” and “fiction that takes place in the real world as we understand it.”

Or, more accurately, “litfic” is a confusing word because sometimes it means “takes place in real world” and sometimes it means “the genre of prestigious, ‘artistic’ but not ‘too weird’ fiction that forms a certain contemporary publishing niche and is what you’ll find winning prizes like the Man Booker, generally.”

The latter kind of litfic usually takes place in “the real world,” but this is not mandatory.  Life After Life and A Tale For The Time Being are two recent (2013) examples of highly lauded litfic with fantastic elements.  There’s still a lot of ways to tell these books apart from “genre fiction” – some of it is branding and social networks, but it’s also frequently observed that many litfic authors haven’t read much genre and tend to reinvent tired tropes when they dip into the fantastic, etc.

IMO (and to my taste) the really important thing here is not “real world vs. speculation” but the fact that contemporary litfic is just a boring genre.  Even when it’s not about “ordinary people” or “ordinary things” (it often isn’t), its stylistic conventions tend to make it boring.  It’s important not to over-generalize from this minor historical fact and conclude that all fiction that resembles litfic is also boring.

@roachpatrol’s “horror and magic and wonder,” and @jumpingjacktrash’s “weird and bloody and gross and sublime and shit does not stop from happening ever” – these are things that good writers can do, and they choose subject matter they can do it with, and if that subject matter happens to be “real-world”?  If the writer’s good, it won’t matter.

The Haunting of Hill House has supernatural elements and We Have Always Lived In The Castle doesn’t, but they’re both Shirley Jackson on full blast, and the latter is actually more powerful for me than the former.  And not because it’s more “serious” (if anything, it’s less “serious,” and better).

No matter what you think of Lolita: is there no horror or wonder there?  Or in Pale Fire?  (Or only if Pale Fire actually has supernatural elements after all, like Brian Boyd thinks?)

Is Hamlet only good because of the ghost?  Macbeth because of the witches?

(The three authors I mention here wrote stuff both with and without supernatural elements, which is appropriate for the point: they just wrote good stuff.)

(via lovecrafts-iranon)

ozymandias271:

nostalgebraist:

@ozymandias271​ (continuing thread from here)

Yeah, but you’ve got to adjust it based on uncertainty. Like, if you think there’s a 50% chance (just making up numbers) that Roodman is right, then it’s like $70/QALY, I think? which is still super cheap. but probably most people put a higher than 50% credence on Roodman being right. I think AMF still works out as cheaper than having kids if you put a less than 99% probability on him being right. (I realize you’re not a Bayesian, but ‘this person might be super-wrong’ has to go in your EV calculations somewhere)

(anyway, GiveDirectly is like ~$700/QALY– based on it being about twenty times worse than AMF, which is the high end of GiveWell’s cost effectiveness estimates– and doesn’t have weird population ethics considerations, so having kids definitely doesn’t beat out cash transfers for total utilitarians)

(I probably should have used cash transfers instead of AMF in the original post)
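As an aside, the uncertainty adjustment in the quoted post is just scaling cost per QALY by the credence that the intervention still delivers value. A minimal sketch, assuming a hypothetical baseline of $35/QALY (chosen only because it reproduces the quoted ~$70 figure at 50% credence) and, pessimistically, zero net QALYs if Roodman is right:

```python
def cost_per_expected_qaly(base_cost, p_roodman_right):
    """Cost per expected QALY when the intervention nets zero QALYs with
    probability p_roodman_right and its full value otherwise."""
    # Expected QALYs per dollar shrink by (1 - p), so cost scales by 1/(1 - p).
    return base_cost / (1 - p_roodman_right)

cost_per_expected_qaly(35, 0.5)  # -> 70.0, matching the quoted figure
```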

Ah, yeah, that is true.  Strictly speaking, Roodman himself isn’t advancing the claim that declines in mortality are matched 1:1 by declines in fertility – he’s trying to estimate how big that mortality:fertility ratio is, and he himself is very uncertain about it, and says it varies a lot per country.

So if you were doing an EV calculation, you’d have some probability distribution over possible values of the ratio, and you’d get an EV for QALYs out of that.  A subtlety is that you’d have to attach some probability to bednets removing QALYs, because some studies find that every life you save will avert 2 births, or 3, or even 5 – although Roodman is quite skeptical of these numbers.
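That distribution-over-the-ratio calculation can be sketched as a Monte Carlo estimate. The distribution and QALY numbers below are placeholders, not estimates of anything:

```python
import random

def expected_net_qalys(qalys_per_life, qalys_per_birth, draw_ratio,
                       n_draws=100_000, seed=0):
    """Monte Carlo EV of net QALYs per life saved, under uncertainty about
    the mortality:fertility ratio (births averted per life saved).

    draw_ratio: callable taking an RNG and returning one draw of the ratio.
    Draws above qalys_per_life / qalys_per_birth make the net contribution
    negative, as in the 2-to-5-births-averted studies mentioned above.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        r = draw_ratio(rng)
        total += qalys_per_life - r * qalys_per_birth
    return total / n_draws

# Placeholder numbers: 30 QALYs per life either way, ratio uniform on [0, 2];
# the EV then comes out near zero, since the mean ratio is 1.
ev = expected_net_qalys(30, 30, lambda rng: rng.uniform(0.0, 2.0))
```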

As you say, there’s no point in arguing too much about this particular issue because one could just compare to something like GiveDirectly instead.  But do we know the fertility effects of GiveDirectly?  I just did some Googling around about UCTs and birth rates, and got (not unpleasantly) lost in a many-tab Google Scholar rabbit hole like I always do, and it looks like there’s little out there about it (FWIW, UCTs reduce rates of teen marriage in Malawi, except not necessarily, blah blah, it’s all very hazy)

The bigger point here is that we generally don’t know the fertility effects, and there’s no total utilitarian version of GiveWell that has studied the issue for us in a responsible way.

Indeed, if we care about birth rates as much as a total utilitarian would, we’d have to look into all sorts of “birth rate interventions” that GiveWell would never look at because they don’t help people outside of making them have more kids.  Some of which might be ultra-”effective,” or maybe not, but again there’s no total!GiveWell that has studied the issue for us.  (Given that, say, education for women has a big negative impact on birth rate, I can imagine total!GiveWell making some, uh, unusual recommendations)

Basically, being a total utilitarian (of the sort assumed above) and an EA is impossible at this point – the EA community has had no interest in asking the effectiveness questions such a person would need answers to.  In other words, one could try to combine total u. and EA but only by re-doing all the work oneself and tuning out much of what other EAs say as irrelevant.  (Like, you can look for effectiveness estimates for any moral utility function, even paperclip maximization [if you believe paperclip maximization is the one true good], and at some point calling this “EA” rather than just “E” seems silly.)

But you have to be that sort of total utilitarian to conclude that having children is effective.  So, for actual EAs, having children is not effective.

I mean, I don’t think that Hypothetical Total!EA is any weirder than, say, Brian Tomasik, who already exists and has concerns about the effect of GiveWell top charities on insect populations (and has a completely different reason to be worried about AMF reducing fertility rates in the developing world!).

I’m inclined to round flow-through effects to zero when we really really don’t know what’s going on– if we don’t know how UCTs affect fertility, going ‘this has positive effects on some things I care about and I don’t know how it affects other things’ is a decent alternative to trying to get funding to research it yourself. (I mean, you sort of have to do that at some point anyway– what are the effects of malaria eradication on the far future? who knows?) 

Agreed that total utilitarian charity evaluator would probably be a good idea for total utilitarians, but I’m not sure what that has to do with the main point?

I don’t actually think having children is necessarily effective for total utilitarians– $5500/year is relatively cheap all things considered, but I have a hard time seeing an argument that it outperforms GiveDirectly. I do think that total utilitarians shouldn’t be apologetic about having children, although part of how I’ve changed since I wrote that post is that I no longer think it’s a good idea to have the “lo, the concessions we make to our weak monkey selves!” discourse about anything. And possibly primary caregiver is an effective career for a total utilitarian with a strong personal fit, which includes a passionate desire to have lots and lots of children and a strong reason to believe they’d be a better primary caregiver than most people. but tbh I’m not sure of that.

OK, I think we mostly agree?

I don’t actually think having children is necessarily effective for total utilitarians– $5500/year is relatively cheap all things considered, but I have a hard time seeing an argument that it outperforms GiveDirectly.

Right, but we also don’t know that GiveWell top charities are the best you can do if you’re a total utilitarian – maybe there are some very cheap ways to spur population growth that would be even more effective by total u. standards than $38/DALY, but no one talks about them because everyone else thinks population growth is neutral or bad.

It’s not that I necessarily think this is likely – the point is that “here’s having kids, here’s GiveWell top charities” is comparing apples to oranges because if you’re a total utilitarian, you have to check what the best charities for total utilitarians are, which GiveWell can’t tell you.  At most you can get the negative result that (as you say) having kids isn’t nearly as effective as some other things.  But is it “relatively cheap, all things considered”?  We haven’t done an analysis of the best total u. charities and so we don’t know what “relatively cheap” means.

I mean, I don’t think that Hypothetical Total!EA is any weirder than, say, Brian Tomasik, who already exists and has concerns about the effect of GiveWell top charities on insect populations (and has a completely different reason to be worried about AMF reducing fertility rates in the developing world!).

This is definitely true.  I apologize for stressing this point yet again, but – if you give advice for EAs, and the advice is only interesting / nontrivial if you’re Brian Tomasik, it isn’t really “advice for EAs.”

In your post, you wrote:

First: I see a lot of effective altruists who plan on having children say “well, it’s really expensive, but nobody is a perfect utilitarian.” This seems to me to be unnecessarily apologetic. If you imagine a spectrum of Not Perfectly Effective Things, where giving to Oxfam is on one end and lighting a bunch of money on fire is on the other, having children is clearly more toward the Oxfam end than the lighting money on fire end. For a total utilitarian, having a child is equivalent to paying $450/month out of pocket for a medication that will keep someone they love alive– perhaps not what a perfect utilitarian would do, but if someone calls you on it you can go “what the fuck, asshole.” (Average utilitarians may continue to be apologetic.)

But these real-life EAs you see, being apologetic, are almost certainly not total utilitarians – or if they say they are, they’re not really thinking it through carefully enough.  So this advice has no practical force.  The set of people here who “may continue to be apologetic” is the whole set of people you’re talking about, or virtually all of it.

(None of that means that they necessarily should be apologetic – the “apologetic” thing is a tough moral question that may depend on personal values.  But this isn’t the answer.)

(via bpd-dylan-hall-deactivated20190)

@ozymandias271​ (continuing thread from here)

Yeah, but you’ve got to adjust it based on uncertainty. Like, if you think there’s a 50% chance (just making up numbers) that Roodman is right, then it’s like $70/QALY, I think? which is still super cheap. but probably most people put a higher than 50% credence on Roodman being right. I think AMF still works out as cheaper than having kids if you put a less than 99% probability on him being right. (I realize you’re not a Bayesian, but ‘this person might be super-wrong’ has to go in your EV calculations somewhere)

(anyway, GiveDirectly is like ~$700/QALY– based on it being about twenty times worse than AMF, which is the high end of GiveWell’s cost effectiveness estimates– and doesn’t have weird population ethics considerations, so having kids definitely doesn’t beat out cash transfers for total utilitarians)

(I probably should have used cash transfers instead of AMF in the original post)
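The adjustment Ozy describes can be sketched in a couple of lines. A toy model: the $35/QALY baseline for AMF is just backed out from the $70 figure above, not GiveWell’s published estimate, and “Roodman is right” is modeled as saved lives being fully offset by averted births.

```python
def adjusted_cost_per_qaly(base_cost_per_qaly, p_offset):
    """Cost per *expected* QALY if, with probability p_offset, each saved
    life is fully offset by an averted birth (net QALYs ~ 0), and with
    probability 1 - p_offset the headline estimate is right."""
    return base_cost_per_qaly / (1.0 - p_offset)

print(adjusted_cost_per_qaly(35, 0.50))  # 70.0 -- still super cheap
print(adjusted_cost_per_qaly(35, 0.99))  # ~3500 -- only here do kids start to compete
```

The point of the sketch is just that the headline number degrades gracefully: even heavy skepticism about the fertility offset leaves bednets cheap until your credence in full offset gets extreme.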

Ah, yeah, that is true.  Strictly speaking, Roodman himself isn’t advancing the claim that declines in mortality are matched 1:1 by declines in fertility – he’s trying to estimate how big that mortality:fertility ratio is, and he himself is very uncertain about it, and says it varies a lot per country.

So if you were doing an EV calculation, you’d have some probability distribution over possible values of the ratio, and you’d get an EV for QALYs out of that.  A subtlety is that you’d have to attach some probability to bednets removing QALYs, because some studies find that every life you save will avert 2 births, or 3, or even 5 – although Roodman is quite skeptical of these numbers.
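That EV calculation can be sketched as a Monte Carlo over the mortality:fertility ratio. The distribution below is made up purely for illustration (Roodman’s report doesn’t pin one down), as is the 35-QALYs-per-life figure:

```python
import random

def net_qalys_per_life_saved(qalys_per_life=35.0, n=100_000, seed=0):
    """Monte Carlo sketch: draw a births-averted-per-life-saved ratio r
    from an illustrative distribution, and average the net QALY gain
    (1 - r) * qalys_per_life.  Draws with r > 1 -- the 2-to-5 range some
    studies report -- contribute *negative* net QALYs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        r = max(0.0, rng.gauss(0.7, 0.4))  # made-up uncertainty over the ratio
        total += (1.0 - r) * qalys_per_life
    return total / n

print(net_qalys_per_life_saved())  # far below the headline 35 QALYs per life
```

The expected value is what feeds the cost-per-QALY denominator; how much of the distribution sits above a ratio of 1 is what determines the probability that bednets are net-negative on this metric.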

As you say, there’s no point in arguing too much about this particular issue because one could just compare to something like GiveDirectly instead.  But do we know the fertility effects of GiveDirectly?  I just did some Googling around about UCTs and birth rates, and got (not unpleasantly) lost in a many-tab Google Scholar rabbit hole like I always do, and it looks like there’s little out there about it (FWIW, UCTs reduce rates of teen marriage in Malawi, except not necessarily, blah blah, it’s all very hazy).

The bigger point here is that we generally don’t know the fertility effects, and there’s no total utilitarian version of GiveWell that has studied the issue for us in a responsible way.

Indeed, if we care about birth rates as much as a total utilitarian would, we’d have to look into all sorts of “birth rate interventions” that GiveWell would never look at because they don’t help people outside of making them have more kids.  Some of which might be ultra-”effective,” or maybe not, but again there’s no total!GiveWell that has studied the issue for us.  (Given that, say, education for women has a big negative impact on birth rate, I can imagine total!GiveWell making some, uh, unusual recommendations)

Basically, being a total utilitarian (of the sort assumed above) and an EA is impossible at this point – the EA community has had no interest in asking the effectiveness questions such a person would need answers to.  In other words, one could try to combine total u. and EA but only by re-doing all the work oneself and tuning out much of what other EAs say as irrelevant.  (Like, you can look for effectiveness estimates for any moral utility function, even paperclip maximization [if you believe paperclip maximization is the one true good], and at some point calling this “EA” rather than just “E” seems silly.)

But you have to be that sort of total utilitarian to conclude that having children is effective.  So, for actual EAs, having children is not effective.

ozymandias271:

nostalgebraist:

ozymandias271:

sinesalvatorem:

@ascerel​: I’m sorry competing needs are happening on your post.

Me: I don’t get it. I’m the only child-having natalist rationalist I know of, but people are talking about how natalist utilitarianism is all pervasive.

Me: I feel like how I imagine poor Whites react to hearing they’ve been oppressing everyone.

@ascerel​: I was confused about this too. The only natalist person I could think of is Ozy.

Me: Is Ozy actually natalist? I don’t think I ever heard them claim fertility in general should go up. Just that theirs should be high, and other people shouldn’t worry if they want kids, too.

@ascerel​: I feel like they’ve written a blog post about this, but my memory is bad.

Me: Well, if the blog post is in fact natalist, then I’d like to welcome Ozy as my ally. Because the rest of the community doesn’t seem too favourable.

Me: Like, every time I have a conversation about kids on Tumblr, I feel a strong social pressure to not have any kids (for any of a dozen reasons), which I compensate for by pretending to be irrationally baby-crazed (when I’m just moderately baby-crazed), so that people will give up on trying to talk me out of it.

Me: But that signalling is the only thing I could think of that might be contributing to a feeling that the community is natalist, which others are talking about.

@ascerel​: I didn’t get the impression that your blog alone, or even combined with Ozy’s blog, produced a feeling that the community is predominantly pro-natalist. Maybe we’re both missing some other people, or some people are sensitive about this.

@ascerel​: We should probably ask them to clarify.

@ascerel​: I don’t think you did anything wrong, fwiw.

Me: Thanks

Me: I’m not sure how I’d go about finding those other natalists, though? Cuz, like, if I did, I’d already be hanging out with them.

Here’s the blog post ascerel was probably thinking of. tl;dr: for total utilitarians having children is good but not competitive with GiveWell top charities; for average utilitarians having children is a waste of money from an ethical perspective.

I am in favor of parenting specialization, in which people who want kids have Many and people who don’t want kids don’t have any.

The same reasoning that says having children is good for total utilitarians also says that the AMF is much worse than GiveWell thinks, because saving lives tends to lower rates of reproduction.

Other GiveWell top charities may be fine, but since GiveWell doesn’t care about population growth/decline in the same way (insofar as it cares at all, it considers population growth bad because of overpopulation), none of their analyses can be trusted without independent investigation by a person subscribing to this kind of total utilitarianism.

IIRC, that analysis wasn’t out when I wrote the post. Cost per QALY of kids and bednets are still a couple orders of magnitude apart though, so I would be extremely surprised if that caused kids to outweigh bednets. Agreed that total utilitarians probably have different top charities.

I find it somewhat strange to have that post characterized as pro-natalist, given that my conclusion is that having kids is worse than donating money for pretty much everyone.

(ftr, my position on population ethics is “AAAAAAAAAAAA”, and I’ve gotten significantly more egoist since I wrote that post, but I still think I was right for what people ought to do given particular value sets.) 

The analysis by David Roodman was linked on the GiveWell blog on 4/17/2014, your post was posted 9/30/2015.  The Michael Dickens post I linked was posted 5/17/2016.  Dickens’ post was what called my own attention to the issue, but I don’t pay very close attention to GiveWell.  (Not implying you have some responsibility to read every post on the GiveWell blog as it appears, just clearing up the record.)

Orders of magnitude are misleading here, because we’re dealing with a subtraction rather than a division.

That is, if we save the life of Person A, but as a result Person B never gets born, then we gain the QALYs of Person A’s life and lose the QALYs of person B’s life.  The net result is just (A’s QALYs) - (B’s QALYs).  This number could be arbitrarily close to zero, or even negative, depending on the specifics of the case.

By contrast, if we were talking about something like A having lower quality of life, that’d be a multiplication: (Some number less than 1) * (A’s QALYs).  Here it makes sense to think in terms of orders of magnitude: it takes something really bad to multiply someone’s lifetime QALYs by 0.1, or 0.01.
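A toy calculation makes the contrast vivid. All numbers below are invented for illustration; the point is only how fast cost per QALY blows up as B’s life approaches A’s in length:

```python
# Toy numbers showing how "orders of magnitude apart" evaporates under
# subtraction: fix the cost of saving a life, then vary how many QALYs
# the averted birth ("Person B") would have had.
cost = 3500.0    # illustrative cost of saving one life
qalys_a = 35.0   # QALYs gained by the saved person ("Person A")

for qalys_b in (0.0, 30.0, 34.0, 35.0):
    net = qalys_a - qalys_b
    cost_per_qaly = cost / net if net else float("inf")
    print(f"B's QALYs = {qalys_b:>4}: net = {net:>4}, ${cost_per_qaly:,.0f}/QALY")
```

With no offset the intervention costs $100/QALY; at a near-total offset the same dollars buy one net QALY at $3,500, and at exact cancellation the cost per QALY is infinite.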

Dickens writes:

A GiveWell-commissioned report suggests that population will hardly change as a result of AMF saving lives. 

I’m having difficulty extracting this conclusion from Roodman’s actual report, which is much more tentative and doesn’t mention AMF’s work directly.  But to make the point about subtraction vs. division, let’s assume this is true (it at least could be true).

The exact details will depend on how good the lives of the “saved people” are vs. the lives of the people they cause not to be born, and various other effects on the happiness of the parents, other social effects, etc.  But GiveWell’s rec isn’t based on that stuff; it’s based on the idea that saving a life produces lots of QALYs.

If instead it just takes QALYs from a “Person B” and gives them to an equally happy “Person A,” it’s literally having zero effect on total utility.  Zero.  Which means the cost per QALY is infinity.

If we don’t get exact cancellation, but we’re still subtracting numbers that are pretty close, we’ll be generating (or perhaps losing!) a very small number of QALYs.  The denominator in “cost per QALY” will be very small and cost per QALY will be very large.

(via bpd-dylan-hall-deactivated20190)

togglesbloggle:

plain-dealing-villain:

togglesbloggle:

nostalgebraist:

This post is about me being a n00b and I hope you can enlighten me

I currently use the Mac OS and I already use the Unix command line all the time.  It seems sensible to try to think of myself as “someone who uses a type of Unix,” rather than “someone who uses the specific products whose pleasant aesthetics derive from Steve Jobs’ personal sense of Platonic ‘correctness’ and which are produced under (as far as I can tell) inhumane conditions even by the standards of the industries involved”

The biggest problem with this plan is that installing things which are specifically made for Steve Jobs’ private cosmos is painless, while installing things which are made for Unix users is painful

There are things I want to use, and they have dependencies.  Sometimes these dependencies take a long time to install and take up a great deal of disk space.  What I actually wanted was just whatever little bit of functionality in the dependency is used by the tool I want.  But I don’t know how to get just that, and so I end up setting aside gigabytes of infrastructure because somewhere in those gigabytes is a simple function that the tool wanted to call, or some object definitions that the author of the tool found convenient.

This is a problem even with things that are not compiled from source – say, a simple python script which won’t run unless I install a gigantic set of modules.  If I am compiling from source, other problems may arise.  It may not compile with my default compiler and I may have to try another one, or hunt down the combination of flags the thing wants.  It may have expectations about my directory structure which aren’t true.  I may have to help it deal with multiple existing versions of the same thing, some or all of which were installed by other simple tools that “depended on” them.

But when I get a native Mac installer for the tool, it is always small if the tool is small.  It will basically always work with no extra help from me, and it will not want to put lots of extra stuff on my hard drive.  You would think that compiling from source would result in smoother compatibility than with binaries, making everything work with my specific configuration.  But Mac-specific binaries always work perfectly and are always small for things that seem like they should be small.

Am I doing something wrong?  At the moment, if I get a “native Mac installer” I expect to be able to use the thing within 5 minutes and for the thing to take up almost no space if it is simple.  If I get a normal sort of thing from github which doesn’t care that I have a Mac, I prepare to potentially spend an afternoon installing it and to let it have a gig.  If I declare that I will have nothing to do with Steve Jobs’ turtlenecks-and-Foxconn world anymore, I will no longer get the former.  This feels wrong.  Is it actually natural?

The general rule of thumb is that for a home Linux machine, your root directory will almost never get above 25 gigs.  There are some exceptions, mostly for people that are trying to run an elaborate home server setup or work in a large number of different environments each with their own vast network of dependencies or something, but in general you can expect a fairly lean library.  (Linux boxes can of course get *absurdly* small, but that’s for custom situations and not general use.)

I have never used a Mac machine with a console habitually, so I don’t have much basis for comparison, but the difference may be that the Mac has a poor package manager?  On Ubuntu, for example, apt-get is the standard tool for downloading and installing programs, and usually makes good choices about dependencies.  Follow-up with apt-get autoremove and apt-get clean will pare down unused dependencies automatically and reliably, although I don’t think they have the kind of laser-like focus to get rid of every single unused function (that would be pretty scary, anyway!).  If Homebrew doesn’t handle cleanup as elegantly (or uses more of a shotgun approach to dependencies) this may be the root of your problem.

That said, I don’t think the Linux installation process will ever be as clean as the Mac one; walled gardens have their advantages, and a lot of the elegance will come from Apple’s strong control over APIs and the app ecosystem.  The Linux advantage is always going to be variety, not simplicity.  So yeah, I’d say your instincts are wrong.

Fun contextual story: I just upgraded my home box from Ubuntu 14 to 16 (which should tell you about where my level of expertise is, i.e. not much).  I partitioned home and root directories separately back when I set the machine up, and didn’t leave much space for root- a little under 14 gigs.  That’s been enough, but doesn’t leave me a whole lot of breathing room, and the upgrade process wanted 3 gigs of free space that I didn’t have.  Sensible cleanup only got me two and a half.  So I deleted all of Unity (that is, the graphical desktop environment; it’s a bit bloated and much loathed in the community), navigated the upgrade process from the terminal, and then reinstalled Unity in version 16.04 of the OS.  When I think about reasons to use Linux at home, it’s more tricks like that, rather than an ‘everything just works’ sort of feeling.

Mac OS has no package manager at all, unless you download Homebrew. Which every programmer who’s worked in a Mac shop does, but @nostalgebraist doesn’t program enough that it’s a guarantee.

That would do it! @nostalgebraist, you should certainly use a package manager if you don’t already. That will change your expectations for Steve-free computing for the better.

Ah, I see.  (Well, I sort of see.)  Thanks.

I have Homebrew and pip and Luarocks, and I use them, because I’ve been told to by installation instructions.  However, I now realize that I had not really known what they were, so thank you for alerting me.

For instance, I think I need to be making use of the cleaning functions?  But I am a n00b and need to be pointed to an explanation of exactly what these do and how they compare between managers.

For instance, I just spent 5 minutes on Google trying to ask what the Homebrew equivalent of “apt-get autoremove” is.  As far as I can tell, there isn’t one.  There is “brew cleanup”, which removes outdated packages and also cleans the cache (the latter is the equivalent of “apt-get clean” I think?), but that is not the same as removing unused dependencies that were automatically installed.  I also found this thread (only 25 days old), which states that Homebrew doesn’t track which packages were automatically installed and which were requested by the user.

So now I’m in the following situation: I know that I should be using package managers.  The reason for this is that there are certain core desirable things that a package manager will do for me, which I should (presumably) expect from anything that is called a “package manager.”  I read @togglesbloggle’s helpful post and inferred “ah, the things that ‘apt-get clean’ and ‘apt-get autoremove’ do must be among these core desirable things.”  But now it seems that Homebrew can’t do what “apt-get autoremove” does.

What’s missing for me here is an explicit description of what these “core desirable things” are, and how to figure out what to do with a new package manager if presented with one.  For instance, even if I figure out Homebrew, I also want to understand package managers well enough to know, right away, what I ought to expect out of pip and LuaRocks.

P. S. this is a sidenote, but it’s amusing and illustrative of the difficulties I have when trying to make sense of all of this Unix installation complexity.  In the thread I linked above, one poster argues (I think??) that Homebrew shouldn’t have an equivalent of “apt-get autoremove” – in an almost beautifully high-context post which I think I can halfway make sense of, but which would require (at minimum) its own round of Googling to really get:

Should this be closed then? I don’t see much actual user benefit here other than possibly basically irrelevant disk space savings. The costs of inevitable reinstalls of the same build-time only deps over and over outweighs that ten-fold unless we’re adding some overwrought DSL for marking some build-time deps sufficiently “common” to merit “protection” from aggressive, needless cleaning behaviors, while others are relegated to a lesser, dispensable category. Given how broken build.with? can be, I’m also pretty sure any implementation of this is just going to break user installations unless we have the fabled declarative option system.


I’m trying to remember some of the sorts of issues I’ve had (the ones that have consumed afternoons).  Unfortunately I’d only be able to reconstruct most of the details by actually trying to install the damn things again.  But here are some things I remember:

(Cut for length and limited audience)


(via togglesbloggle)

genderfight:

nostalgebraist:

gruntledandhinged:

nostalgebraist:

togglesbloggle:

gruntledandhinged:

Okay, Facebook whyyyyyyyyyyyyyyy

http://www.britishscienceassociation.org/news/rise-of-artificial-intelligence-is-a-threat-to-humanity

“The survey found that opinions on artificial intelligence differed by age or sex. Perhaps surprisingly, only 17 per cent of women felt optimistic about their development, compared to 28 per cent of men. 13 per cent of men believe they could be friends with a robot (as opposed to 6 per cent of women…”

Harvard Psychologist Invents New, Slightly More Embarrassing Form of Typical Mind Fallacy

(Although I can’t find the original data, so it’s theoretically possible that women are also less likely to be pessimistic, with a larger ‘undecided’ faction. Or, perhaps men are more likely to believe that AI technology is realistic enough to worry about. But really.)

A similar result was found here (p. 70, 21% of men and 42% of women answered “yes” to “Do the promises of artificial intelligence scare you?”, N = 419)

yeah, I’m actually familiar with this result. My gripe was more (1) the framing of the article—’only men fear’ is really different from 42% women—and (2) the ‘it’s alpha male brain’ explanation that Pinker offered in the video

It’s not just the framing, is it?  The effect we see in the surveys is in the opposite direction (more women than men fear AI) from the supposed effect that is being “explained” here.  “Only men” is not just too extreme but also backwards.

Super-anecdotally though: the entire comment thread is the closest I’ve come to being tempted by the phrase “butthurt neckbeard manbabies.”* It is a sea of white men stretching beyond the horizon, all blusteringly explaining why yes, Virginia, FAI is a really big deal in sweaty, jargon-riddled paragraphs.

* I hasten to add: this is not a feeling I typically experience. Heck, that species of nerd-hate is one of the things that sets my teeth on edge. But for just a moment, reading that thread? I felt it.

Ah, yeah, I’m definitely willing to believe that “has very specific opinions about AI risk” rather than just “worries about AI” is a thing that skews heavily male.  Although at that point we’re talking less about “fear” and more about “having certain tech/CS opinions (and being willing to talk about them online),” and tech/CS stuff already skews heavily male (I’m not speculating as to why, this is just a much-worried-about fact).  Not sure how we would tell whether we’re seeing anything more than that base rate here.

(I’m not sure what this means, but if we’re looking at the real AI risk “hard core” – MIRI’s research staff is currently 2/8 female, and IIRC the rate has been similar as people have cycled in and out)

(via genderfight)

jack-v:

nostalgebraist:

nuclearspaceheater:

nostalgebraist:

nuclearspaceheater:

nostalgebraist:

But if your skill always increases, even if just a little bit, then as long as you keep going, you will win. There are no dead-ends for monotonic functions, my friend.

This quote is such a perfectly empty use of mathematical terminology.  There’s even a link to the Wikipedia page on monotonic functions, just so you can bone up on the smart important concepts the author knows – except the second sentence is literally just a restatement of the first using mathematical language.

In other news, computational complexity is no big deal, even log(x) is monotonic :P

(The quote is from this post by some LW guy claiming, with very little in the way of argument, that you can learn to read Latin using Anki without explicitly learning grammar, by drilling yourself on sentence / translation pairs.  Of all the languages to claim this about, Latin, really?  Good luck reading 60-word sentences with large gaps between nouns and the verbs/participles they agree with, dude)

(P. S. I got some welcome catharsis from a wonderfully snarky comment on the first installment, and only then noticed that it was by @slatestarscratchpad :) )

It’s not even true. f(x) = 0 is monotonic.

Oh, right, he should have “strictly”

lmao

Not even then. f(x) = - (x^-1) where x > 0 is strictly increasing, and never exceeds 0.

Wait, shit, you’re right

The concept of a monotone function is not relevant here, what he wants is “something that has no upper bound as x grows large,” and there are functions that keep increasing toward a finite asymptote

So the first sentence isn’t even true, which might not have been obvious, but then he cues the reader to think about it in math terms, which if done right will demonstrate that it isn’t true
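The counterexample nuclearspaceheater gives is easy to check numerically, as a quick sketch:

```python
def f(x):
    """nuclearspaceheater's counterexample: strictly increasing on
    x > 0, yet bounded above by 0 -- it improves forever and never "wins"."""
    return -1.0 / x

vals = [f(x) for x in (1, 10, 100, 1000, 10**6)]
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing
assert all(v < 0 for v in vals)                    # but bounded above by 0
print(vals)  # [-1.0, -0.1, -0.01, -0.001, -1e-06]
```

Each step really is an improvement, yet the gap to zero only shrinks; “monotonic” buys you nothing without unboundedness.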

I hate it when people randomly state things in maths terminology as if it made them make more sense (*cough* william lane craig *cough*).

But even though the details are wrong, I thought this guy’s point was fairly clear. And not completely true, but often useful, in encouraging people to keep going when they’re getting somewhere but not as fast as they’d like.

After all, if you’re talking about integers not real numbers, the analogy would work. And I think even mathematicians think in terms of “not ALL monotonically increasing functions are strictly monotonic, and the exceptions are important to have a clear definition for, but that’s sort of what you expect. Ditto for unbounded. ”

In this case, I think the mathematical problem is directly relevant to the problems with his approach to Latin.  His approach involves one specific type of training, and you’ll keep getting better at the thing it trains, while stagnating in other areas.

His approach teaches individual word definitions, so if you use it, you will indeed get a bigger and bigger vocabulary, and since no one’s ever going to know Latin vocab perfectly, this can indeed go on forever, more or less.  But if you can’t learn grammar from his approach, then you’ll never become very good at reading Latin overall.  You’ll keep getting better forever, but never be very good, just like one of the functions we’re talking about.

(via jack-v)