fnord888:

nostalgebraist:

fnord888:

nostalgebraist:

plain-dealing-villain:

wirehead-wannabe:

fatpinocchio:

It’s surprisingly difficult to come up with a satisfactory theory of population ethics.

Man, I have the complete opposite impression. Simple total utilitarianism seems obviously correct to me, at least on a system-2 level.

Simple average utilitarianism seems correct. All you get as downside is the ‘sadistic conclusion’, which is obviously true anyway and ought to fall out of any well-functioning ethical system.

I don’t think the sadistic conclusion (defined in this paper) is obviously true, although it strikes me as much less bad than the repugnant conclusion (it’s the difference between “repugnant” and “kinda weird”).

But I think none of these theories is complete without a clear sense of what positive and negative utility mean (in the real world) and where the zero point is supposed to be located.  Much of the confusion over these “conclusions” arises from the fact that people have no clear picture of what a “life (not) worth living” actually looks like.  But a lot of the content of the ethical theory lies in how you define this, since in practice, it determines whether you think a person’s existence is a good thing or not.  (In total utilitarianism this is always the case, while in average utilitarianism this is only approximately true, for large populations, but in practice we are faced with a large global population.)

I talked a lot more about this here.  Specifying a zero point for utility, in human terms, seems really hard, and I’m not satisfied with any utilitarian theory that hasn’t done that work.

You don’t have to specify a zero point for utility for anything except total utilitarianism (although I agree that having to do so is a serious problem for total utilitarianism). 

In average utilitarianism, whether a person’s existence is a good thing is dependent on the average of everyone else’s utility. You can call that the zero point if you like, but it’s more a case of being able to pick any zero point you like, because choosing a zero point is just applying a constant offset to everyone’s utility (which won’t change any utility comparisons, and hence won’t affect any decisions). 
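To make the arithmetic concrete, here’s a toy sketch (in Python, with made-up utility numbers) of the two claims above: a constant offset never changes which option average utilitarianism prefers, and a new person raises the average exactly when their utility exceeds it.

```python
# Toy illustration of two claims about average utilitarianism:
# (1) shifting everyone's utility by a constant doesn't change comparisons;
# (2) adding a person raises the average iff their utility exceeds it.

def avg(utilities):
    return sum(utilities) / len(utilities)

pop = [2.0, 4.0, 9.0]           # made-up utilities; avg is 5.0

# (1) A constant offset shifts every average by the same amount, so any
# comparison between two candidate populations comes out the same way.
pop_a = pop + [6.0]             # create a person with utility 6
pop_b = pop + [4.0]             # create a person with utility 4
offset = 100.0
assert (avg(pop_a) > avg(pop_b)) == (
    avg([u + offset for u in pop_a]) > avg([u + offset for u in pop_b])
)

# (2) Adding someone above the current average (5.0) raises it; adding
# someone below it lowers it.
assert avg(pop_a) > avg(pop)
assert avg(pop_b) < avg(pop)
```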

Of course, the so-called “sadistic conclusion” becomes much less striking of a problem once you acknowledge that a “life worth living” isn’t a coherent concept*, and you can just do the math on whether a given population change increases or decreases average utility. That’s a point in favor of average utilitarianism, not against it!**

Likewise, the various versions of person-affecting ethics (of which I am a fan***) don’t require you to calculate a zero point. You don’t even have to do an absolute interpersonal utility comparison (ie, this person has higher utility than that person): you’re only ever concerned with changes in utility (though of course you still have to be able to do interpersonal comparisons on the changes).


*Or, if you define “a life just barely worth living” as “a life with exactly average utility”, the sadistic conclusion doesn’t follow from average utilitarianism, because you’ll never want to create a below average life instead of any number of above average lives.

**The weird part of average utilitarian population ethics is the part where whether a human on Earth having a baby is good or bad is decided by how happy the aliens in the Andromeda Galaxy are.

***Because they avoid the Andromeda Galaxy conclusion.

No, you’re right – I wasn’t thinking correctly about average utilitarianism there.

My qualms with A.U. are basically that there are cases where my intuitions seem more “total,” just like my qualms with T.U. are that there are cases where my intuitions seem more “average.”  The former are cases where certain lives really do seem intrinsically bad, such as people in agony who would commit suicide if they could, but can’t.  (Cf. Parfit’s two/three hells.)

Overall my objections to AU are far less worrying than my objections to TU, but for the contingent, practical reason that my problems with AU come up much less often in the cases I happen to be faced with.  AU has problems with people who want to commit suicide but can’t; TU has problems with everyone, in that it needs to specify a zero point, and that the usual low placement of the zero point (“life is worth living”) demands that we place an absurdly high value on the creation of children.

But I don’t feel that AU is the correct theory; if I lived in one of Parfit’s hells then I would find the implications of AU abhorrent.  It just so happens that I don’t.

(ETA: on top of all of this, we have the problem of putting hard numbers on utility differences IRL.  Tradeoff questions only get you so far)

I almost want to shout “have you heard the good news about the person-affecting principle?”, but it’s not clear to me if your only problem is that A.U. makes the creation of people in the lesser-but-still-worse-than-death Hell B actively good (which person-affecting morality does away with, because creating a new person in Hell B isn’t good for anyone in Hell A, even if it raises the average utility of the A+B system), or your intuition is that creation of people in Hell B is actively evil (which person-affecting morality doesn’t provide, at least not straightforwardly, though it allows the ongoing existence of those people to be considered morally worse than allowing them to commit suicide).

I’ve heard the good news, but I do think that creating people in Hell B is actively evil.  The basic feeling behind person-affecting views is appealing to me, but it seems vulnerable to this basic problem of “callousness to persons who will exist”: we ignore wishes we know people will have in the future simply because they don’t exist now.

The idea I want to capture is “nonexistent people can’t have preferences about whether to be created or not, but it’s wrong to create a person if you know they’ll want to be un-created, but won’t be able to.”

I have a feeling that there is some simple and elegant answer here involving asymmetries between creation and destruction.  A nonexistent person keeps not existing until someone chooses to create them; an existing person keeps existing until someone or something destroys them.  I am getting that sort of tingling, tip-of-your-tongue feeling I get when I feel like I am glimpsing the answer to a math problem intuitively, but don’t know how to formalize it yet.  (I thought about it for 15 minutes last night without getting anywhere, so I’m going to shelve it for now lest it become a new obsession.)

(via fnord888)


oligopsonoia:

anosognosicredux:

anosognosicredux:

nostalgebraist replied to your post:

collapsedsquid: nostalgebraist: I find Leah…

my issue is getting past “From there, the leap to believing in God and the Catholic Church isn’t that big.” why not just be an ordinary platonist? it’s true that there has been a great deal of /interest/ in this kind of issue in catholicism (the giant scholastic war over the problem of universals), but even there everyone disagreed. realism on the problem of universals isn’t a core part of catholic belief – e.g. it’s not what the core prayers are affirming.

Yeah, I get it. But the point of entry need not be the core beliefs. Once you build a Thomistic/Aristotelian intellectual superstructure, I imagine a lot of things about Catholicism do just click (just as I once was the kind of atheist who’d throw up my hands in frustration that Catholic arguments made no sense and, through a lot of contact with Catholic writing, can now make sense of how intelligent people can derive certain conclusions given their basic assumptions, so I guess I can also see how someone might go further down that path). And I recall part of her process was sort of role-playing the worldview and finding that it fit.

I think what’s also important is that Leah talked about taking an apophatic approach in the process of her conversion, which makes sense to me of a gradual acceptance of the particular supernatural claims of Catholicism. Of course, apophatic theology is explicitly irrational, but at the same time I find it one of the most interesting and sound forms of irrationality.

Also, I understand the impulse to avoid being too specific, because the more you assert, the more you’re open to being wrong–the phlogiston thing, the conjunction fallacy, etc.

OTOH, that requires a superseding commitment to rationality above all else. Whereas I think Leah is making a different sort of commitment here, to virtue-ethical and mental flourishing–a choice that we might bathetically call a kind of mind-hacking. A rational choice, but perhaps not a choice of rationality.

I think Traditions (in the sense of, like, a body of intellectuals with common assumptions, “research problems,” and language) are Extremely Useful, and indirect evidence on some tradition getting things right that others don’t can be tremendously important in making the whole package preferable.

That makes sense, and I think I do this to some extent.  (Certainly I do a similar thing with individual people – I start paying close attention to a person who makes a lot of claims when I notice that they tend to be correct/insightful when they touch on subjects I’ve thought a lot about myself)

However, I worry about the possible bait-and-switch here where “interest in an intellectual tradition” and “conversion to a religious tradition” get conflated.  “Being a Catholic” doesn’t consist in looking at the history of Catholic thought and saying “yeah, this is good shit”; it involves signing on simultaneously to a whole lot of belief-pieces which Catholic thought has argued for, laboriously, in an individual case-by-case way, and is also a social commitment to a certain group, the Catholic Church, which is not identical with the tradition of Catholic thought, and has its own distinct tradition of epistemological practice.

(Abelard was a brilliant theologian who had a profound influence on Catholic thought; he was also officially condemned for heresy twice.  In the first case he was ordered to burn the offending treatise and put under house arrest.  In the second a papal bull excommunicated him and all his followers and condemned him to perpetual silence, although thanks to his popularity he got away with not complying.  I learned this the other day from the book The Mediaeval Universities by Nathan Schachner which is in the public domain and extremely entertaining)

 If you really want to think your way into the confession booth you’re going to have to take a lot of distinct steps.  And if you didn’t think your way there, don’t say you did.

I really oughtn’t talk any more about this, because there isn’t much to be gained by picking at this mental scab any more – I don’t get Libresco, this frustrates me, but her public output doesn’t help me get her, so there’s not much further to go.

(via oligopsonoia-deactivated2017053)

slatestarscratchpad:

nostalgebraist:

Following up on my more responsible earlier post about antipsychotics, time to put on my Amateur Bullshit Neuroscientist hat for a more speculative one

(N.B. I have no formal training in neuroscience or psychology or anything relevant here.  I do have something like a special interest in psychopharmacology, I guess?  I only realized this like last week, when I was reading this paper with lunch because I’d gotten curious whether Trazodone and Clonazepam affected one another’s clearance rates, and all of a sudden I realized that while Google Scholar searches including the terms “pharmacokinetics” and “pharmacodynamics” had been a regular fixture of my life for years, this was probably not true of everyone)


Anyway, the basic thing antipsychotics tend to do – the thing that makes them work – is to act as antagonists at the D(2) receptor, a particular type of dopamine receptor.  For a broad approximate understanding, just read “dopamine antagonist” as “less dopamine activity.”  The stuff dopamine does?  Less of that.  (Well, specifically less of the stuff dopamine does via D(2) receptors, whatever that means.)

(Sidenote: sometimes a given antipsychotic is an inverse agonist rather than an antagonist at some receptor, which is a bit different, but still, less of the thing.)

Some of the differences between different antipsychotics come from differences in how they affect the D(2) receptor at clinical doses.

The other big difference between antipsychotics has to do with the 5-HT(2A) receptor, a particular type of serotonin receptor.  (“5-HT” means serotonin and the serotonin receptors have names starting with 5-HT.)  The first generation of antipsychotics didn’t really affect this receptor, while the second generation – so-called “atypical antipsychotics,” although many of the ones you’ve heard of are in this class – are antagonists or inverse agonists at 5-HT(2A).

5-HT(2A) is famous as being the receptor solely responsible for psychedelic states – you can stop a psychedelic drug from being psychedelic by preventing it from affecting 5-HT(2A).  The psychedelics are agonists at 5-HT(2A), meaning they make the brain do more of the 5-HT(2A) thing.  It’s obviously tempting to say “ah ha! atypical antipsychotics are like ‘the opposite of psychedelics,’ so that must be how they work!”  But that doesn’t seem to be true; drugs that are only 5-HT(2A) antagonists don’t really work as antipsychotics, and the real benefit of the 5-HT(2A) effect seems to be in reducing some of the side effects caused by the D(2) effect, and in reducing negative symptoms of schizophrenia.


OK, so it seems like dopamine antagonism is the big deal here.  There is a very well-established theory that dopamine activity in certain midbrain areas encodes a “reward prediction error” signal used in reinforcement learning – roughly, your mental model of the world makes predictions of when the world will give you “rewards” (good stuff) and how big they’ll be, and then the model is updated when the predictions are wrong.  The “reward prediction error” is the difference between the actual reward received and the predicted reward.
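The reward-prediction-error idea can be sketched in a few lines of Python, as a toy Rescorla-Wagner-style update (the reward values and the learning rate are made up for illustration; the real midbrain story is much messier):

```python
# Toy reward-prediction-error learning (Rescorla-Wagner style).
# The agent keeps a predicted reward V; on each trial it observes an
# actual reward r, computes the error (r - V), and nudges V toward r.

def learn(rewards, alpha=0.3):
    """Return the history of predictions as the agent sees each reward."""
    v = 0.0
    history = []
    for r in rewards:
        rpe = r - v        # reward prediction error: actual minus predicted
        v += alpha * rpe   # update the prediction by a fraction of the error
        history.append(v)
    return history

# With a constant reward, predictions converge and the error shrinks toward
# zero.  Dampening the error signal itself (as, on this speculative reading,
# a strong D(2) antagonist might) would be like setting alpha near zero:
# the model of the world simply stops updating.
preds = learn([1.0] * 20)
```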

Speculation time: my description of my Risperdal experience sounds a whole lot like “I became unable to sense reward prediction errors”:

The world acted upon me for mysterious reasons.  I did not draw correlations between present and past events, didn’t formulate ideas about the workings of things.  The present was simply given; I wasn’t frustrated when it refused to honor my theories.  “Reading is hard” was a datum, and was unpleasant, but I was not really surprised by it, or frustrated in the “this wasn’t supposed to happen!” way of abstract-reasoning-creatures.  It was a given datum and all I did was hope that given data would be pleasant and not unpleasant.

(Admittedly I had already had this reward prediction / antipsychotics idea before writing that, so I may have been unconsciously slanting the description.  But that is how I remember it.)

Moreover, I suspect that some antipsychotics are worse about this thing than others because they hit D(2) harder.

@kerapace​ mentioned that Seroquel didn’t do this to them while Risperdal did, and indeed Seroquel has relatively low D(2) activity as antipsychotics go (for a direct comparison of Seroquel vs. Risperdal at clinical doses, compare this to this; wider comparisons here).

@trueculprit mentioned that Zyprexa also had the zombifying effect, and indeed Zyprexa and Risperdal have similar levels of D(2) occupancy at clinical doses.  A few of those studies mention that Clozapine (Clozaril) has a lower D(2) occupancy, closer to Seroquel, so I’d predict that it is less zombifying.  (Anyone have experiences with it?)

Thanks, this is interesting. I had already thought that antipsychotic reward mechanism blocking was related to “zombification”, but thinking about it as “reward prediction errors” in particular makes things a little clearer.

I’m embarrassed to reveal my ignorance on this, but is there a simple relationship between binding affinity and receptor occupancy? I had always thought of this kind of thing as related to binding affinity, but maybe these are the same thing, or related to each other through some simple function? Depending on whether this is true or not I might have further thoughts.

Clozapine seems pretty non-zombifying to me, but I can’t say for sure how it compares to Seroquel, especially given the very different populations it tends to get used on.

Also: pimavanserin may be the elusive 5HT-2A only antipsychotic, though I’m still not clear how well it works.

As I understand it, there’s a one-to-one relationship between binding affinity and receptor occupancy.  (Or rather, to the receptor occupancy for any given concentration of the drug.)

The idea is that there’s some chemical equilibrium state where the ligand is binding to receptors at the same rate it’s leaving them.  Binding affinity constants measure the ratio of “rate the thing leaves” to “rate the thing binds” (which is why a lower number means more affinity).  If more is binding than leaving, then more receptors get occupied and this means fewer sites to bind to and more to leave, and vice versa for more leaving than binding, with the equilibrium in the middle.  The fraction of sites bound in equilibrium is determined by the affinity (and the drug concentration), and vice versa.  (This is in fact how binding affinities tend to be measured in practice, I think?)
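The equilibrium relationship described above is the standard Hill-Langmuir equation: occupancy = [L] / ([L] + Kd), where Kd (the dissociation constant) is that ratio of leaving rate to binding rate. A minimal sketch, with illustrative numbers rather than real clinical values:

```python
# Equilibrium receptor occupancy from binding affinity (Hill-Langmuir).
# Kd is the dissociation constant: the ratio of the unbinding ("leaves")
# rate to the binding rate.  Lower Kd = higher affinity.  At equilibrium,
# the fraction of receptors occupied at ligand concentration L is:
#     occupancy = L / (L + Kd)

def occupancy(conc_nM, kd_nM):
    return conc_nM / (conc_nM + kd_nM)

# When the concentration equals Kd, exactly half the receptors are bound;
# a drug with 10x higher affinity (Kd/10) reaches much higher occupancy
# at the same concentration.
half = occupancy(10.0, 10.0)      # 0.5
tighter = occupancy(10.0, 1.0)    # ~0.91
```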

(via slatestarscratchpad)

mitoticcephalopod asked: I'd love to hear more about what you dislike about effective altruism, if you can manage to find the vocabulary to do so.

theaudientvoid:

groatgroutgrotto:

genderlich:

mitoticcephalopod:

Yeah, but in the meantime, while we work on fixing those problems, isn’t it better for poor people to suffer less?

And also, I’d say malaria is a pretty systemic problem in parts of the world, and getting rid of it would free up a lot of other resources. And getting rid of malaria is basically one of the major effective altruism Things To Do.

I mean, yeah, probably, but the archetypal effective altruist that was described to me (and given a real person as an example) is someone who went to school for finance, became an investment banker, and lives on 40k a year while donating the rest to charity. That seems pretty inefficient to me given that charities can only do so much and are often sorta corrupt - providing mosquito nets and antibiotics is great, but at some point you gotta address the root economic issues of those countries which effective altruism just… Doesn’t really do at all.

I can see it being a component of whatever word you want to use for “the process of saving the world” but not the main, primary strategy.

actually, mosquito nets and antibiotics do improve the economies of those countries! people dying is bad for the economy (and so are people bedridden - they consume without producing), and malaria and worms are extremely tractable - once those problems are solved, they will stay solved, requiring no additional expenditure of resources. This will provide a permanent influx of labor.

but maybe by “root economic issues” you mean like warlords and all. You see, the problem is, I don’t know how to get rid of those! Most people do not! In fact, the people best suited to solving complicated economic problems in, for example, South Africa, are in fact *not* Americans or Europeans, and instead are South Africans! Many attempts by Americans at solving problems in developing countries made things *much* worse! (see also: African textile industries)

genderlich-deactivated20211204:

Tbh when I posted that I was hoping someone would help me figure out what it is I dislike about it. I guess one thing is that devoting your life to raising as much money as possible to give to the poor (or to buy resources to give to the poor) is… Fine? But doesn’t address the systemic roots of oppression that create the poor in the first place.

I can see it being a component of whatever word you want to use for “the process of saving the world” but not the main, primary strategy.

TBH I don’t really understand why this is a problem for you. Are you under the impression Effective Altruists are stopping you from doing some soul-searching to find the Primary Strategy? Because I promise you, they are not.

Oh and as for corruption look into Givewell. They evaluate charities and all, it’s very handy.

Angus Deaton argues that most foreign aid to impoverished countries gets co-opted by warlords, and therefore actively makes things worse in those countries. To my knowledge, giving bed nets to kids has yet to be compromised, but the more money that gets funneled into it, the more incentive the warlords have to do so. Deaton also doesn’t seem to have a very high opinion of Givewell.

I’m confused here – “foreign aid gets co-opted by warlords” is an argument I’m used to hearing, but almost always with the understanding that this involves the aid being received by governments which then mismanage it.  (“Giving money to governments” is almost always what “aid” means, AFAIK.)

The sorts of charities that GiveWell recommends are not generally interfacing with local governments – AMF works with local charities which distribute nets directly, GiveDirectly interacts directly with their recipients, etc.  This is the kind of activity one would promote precisely if one were worried about the co-option that happens to foreign aid.

Googling Deaton, I find this article, which is largely directed at aid to governments, and this one, which says that GiveDirectly is likely to do good things in the short run, but doesn’t solve the underlying problem of poor governance.  That’s sensible but doesn’t add up to anything like a critique of EA.  Is there something in particular by Deaton I should read?

Planning to Go on a Diet? One Word of Advice: Don’t.

barrydeutsch:

szhmidty:

barrydeutsch:

chloekittymtfposting:

barrydeutsch:

chloekittymtfposting:

ok2befat:

“This isn’t breaking news; doctors know the holy trinity of obesity treatments—diet, exercise, and medication—don’t work. They know yo-yo dieting is linked to heart disease, insulin resistance, higher blood pressure, inflammation, and, ironically, long-term weight gain. Still, they push the same ineffective treatments, insisting they’ll make you not just thinner but healthier.

In reality, 97 percent of dieters regain everything they lost and then some within three years. Obesity research fails to reflect this truth because it rarely follows people for more than 18 months. This makes most weight-loss studies disingenuous at best and downright deceptive at worst.”

Diets do not work because they are temporary. Permanent lifestyle changes have a 100% success rate, however.

There is no “lifestyle change” method of weight loss which has been shown to reliably make obese people non-obese over the long term. And by “shown,” I mean someone documented it in a paper published in a peer-reviewed journal.

If I’m mistaken, then please show me my mistake with a citation to a peer-reviewed paper.

You don’t need a citation to adhere to the laws of physics, unless you are going to imply that fat bodies can produce fat from nothing?

That would require a citation and an update to textbooks globally.

It’s not about the laws of physics; it’s about whether any “lifestyle change” has been shown, in a peer-reviewed study, to reliably make obese people non-obese over the long term.

You’re correct, of course, that if I were to eat literally nothing, I would lose weight. But that approach is not viable or healthy in the long term.

Too many diet advocates still believe in the myth that weight is a simple matter of input and output. But real human bodies are far more complex systems.

If you and I eat the identical calories, and then do the identical exercise routine, it doesn’t follow that we’re going to retain exactly identical calories at the end of the process. It’s possible that my body will store some of the calories your body will decide to burn (or vice versa), for example.

From an editorial in the New England Journal of Medicine (emphasis added):

   Many people cannot lose much weight no matter how hard they try, and promptly regain whatever they do lose…

   Why is it that people cannot seem to lose weight, despite the social pressures, the urging of their doctors, and the investment of staggering amounts of time, energy, and money? The old view that body weight is a function of only two variables – the intake of calories and the expenditure of energy – has given way to a much more complex formulation involving a fairly stable set point for a person’s weight that is resistant over short periods to either gain or loss, but that may move with age. …Of course, the set point can be overridden and large losses can be induced by severe caloric restriction in conjunction with vigorous, sustained exercise, but when these extreme measures are discontinued, body weight generally returns to its preexisting level.

So I think either you have to conclude that the doctor editing the weight issue of one of the most prestigious medical journals in the world is an idiot who has no clue about how bodies regulate weight (and ditto for the many medical researchers who’d agree with his statement); or you have to admit that your view of how weight loss works is not, in fact, the only view that a reasonable, educated person could hold.

Question tangential to the topic: is there research on lifestyle changes that don’t necessarily reduce weight long term, but do have a measurable impact on more objective measures of health (things like lifespan, etc)?

It seems obvious to me that weight is at best a measure of health that’s a step or two removed from the health of the patient, but I feel like everyone is focused on weight to the exclusion of the actually directly important measurable variables.

There’s definitely research showing that exercise is linked to a longer lifespan. Someone who’s very sedentary will gain (on average) about 4 years of life if they start walking or biking 20 minutes a day. However, there are diminishing returns on adding more exercise, so if someone is already exercising five hours a week, they might not gain that much by increasing that to ten.

I haven’t looked up the research on diet and lifespan, so can’t speak to that.

A couple of refs:

Does Physical Activity Increase Life Expectancy? A Review of the Literature.

Leisure Time Physical Activity of Moderate to Vigorous Intensity and Mortality: A Large Pooled Cohort Analysis

I recently made some posts about “non-exercise activity” (stuff like pacing around and standing as opposed to sitting), which has been linked directly to a bunch of health outcomes besides weight, and may also shed some light on the mysteries of weight loss.  Here’s my main post and there are follow-up posts in this tag.

(via barrydeutsch)

oligopsonoia:

nostalgebraist:

After learning that the pictures from @moehistory​ were from a real free-to-play cellphone game, I of course got curious and downloaded it.  The structure of the game felt really weird to me – sort of “overcomplicated yet empty.”  I had the vague impression that Cookie Clicker was similar, so I googled it and apparently this is a recognized genre called an “idle game.”

In Sid Story, for example, there is no basic underlying mechanic besides clicking on things.  There are characters with stats and abilities and EXP like in an ordinary RPG, but the battles are non-interactive – you just click on an enemy and watch the characters automatically choose moves.  There is a sequence of “missions” to progress through, but all you have to do is click on the next mission and then go do other things while a timer (30 mins - 2 hrs) counts down to completion.

The closest thing to an actual mechanic is a minigame with a sliding bar like the attack bar in Undertale.  (This makes some sense as a representation of striking a target, as in Undertale.  In Sid Story it represents … uh … writing romantic “confession letters” to new characters in order to increase their “affection level” so they join your party.  Naturally.)

But there is a very complicated structure of systems built on top of this non-gameplay.  For instance, writing a confession letter requires “Magic Paper.”  To buy Magic Paper, you need “crystals.”  You can get crystals by watching ads, paying real money, or joining a “circle” with other players; members of a circle get paid crystals on Sundays, and they get more crystals if the circle collectively gets more “circle points” that week.  You get circle points from various game actions, but they have their own mechanics (easier enemies will give you less EXP yet typically give you more circle points).  In addition to crystals, there are at least two other currencies, “tickets” and “gold.”  There’s a crafting system.  There is a system for “voting on the next student council president,” which I didn’t even touch, although presumably it does something.  And so on.

Despite how empty the game was, I figured it might be a fun way to waste time.  I finally uninstalled it when, as I was clicking off battles, the game told me I couldn’t fight anymore because I “didn’t have enough MP.”  I hadn’t even realized I had MP.  I was about to look up how to get more MP on the Wikia and then said “no, no more, this is the last straw”

In the circle I joined, there was an internal message board, and the circle’s leader (whatever that means) was saying he would have to step down, because he’d gotten a promotion at work and he just couldn’t put in the time anymore.

Apparently there are other “idle games” with very similar structures, but Sid Story feels especially weird to me because it has elaborate and lavish graphics and sound.  (It even has voice acting!)  Presumably this is just to draw more people to the game, and may well have been a profitable investment.  Still, I’ve never seen such a large gap in quality between the sensory presentation of a game world and the mechanical simulation of that world.

This reminds me very much of HabitRPG, though of course with that you provide you own preconditions for making the clicks.

Yeah, formally they’re very similar.  Although in HabitRPG, I find that the extra structure helps the game achieve its intended purpose, since it creates nontrivial game goals (“prevent my character from dying until I get a health refill from leveling up”) that are psychologically distant from the underlying life goal (“make myself do this thing more”).  A mere habit-tracking app would always have me thinking about the goal of “being more responsible,” which is then subject to all the usual deflections (“I’ve gotten something done today, I can afford to relax now”), whereas in HabitRPG I care about points and quests, and if I have to do a lot of things in one day to survive the boss’ next onslaught then well so be it.

Pokemon Go is much less complicated, but there’s something similar there – even if you’re just using it as an incitement to walk outside more, it’s more motivating than a mere pedometer because it provides you with goals that aren’t just Do Something Virtuous.  I find myself staying out for longer because I’m no longer asking myself “have I done enough of this yet?”

I guess this is what “gamification” is supposed to be about, although I hadn’t realized until now how important it is to have in-game goals that are distinct from, and abstracted from, the self-improvement goal.  (Whatever “points” you get from the self-improvement goal need to feed into higher-level mechanics where the in-game incentives live.)  Of course there is a tradeoff here because adding more of this distance will increase the likelihood of perverse incentives.  (HabitRPG sometimes makes me want to do things at the end of the day when I’m tired and should really go to bed, and Pokemon Go can make me stay out even when I’m thirsty and should be home hydrating.)

(wb, btw!)

(via oligopsonoia-deactivated2017053)

greencerenkov:

nostalgebraist:

mercurialmalcontent replied to your post “The “sitting is bad for you” research is part of a larger area…”

I have a lot of eyebrow raising at the idea that standing is ~so healthy~ after having worked retail. 5 days a week of standing for 7.5 hours a day really takes the shine off the idea.

*nod*  Most of the “standing is healthy” evidence takes the form of correlations within large populations, which (1) doesn’t mean standing is healthy for every individual, and (2) doesn’t capture causality.  Like, we can check correlations and see that people currently working jobs that make them stand tend to be healthier*, but then, people are less likely to currently have standing jobs if standing jobs are noticeably bad for their bodies.  There is some evidence that making people stand more at work makes them healthier, but again that’s all in aggregate, and is totally compatible with there being a bunch of people for whom standing at work is health-neutral or actively unhealthy, so long as it’s healthy enough for the other people to produce an average positive effect.

(I really wish the researchers on this kind of thing would provide more detailed statistics for their data.  Sometimes you hear about “responders” and “non-responders” but often you get as little information as they can possibly give.  I was complaining about this kind of thing a few days ago)


*technically, this tends to be done indirectly: people with more NEAT are healthier on average, and difference in occupation accounts for a large fraction of the population difference in NEAT (and just standing in one place involves more NEAT than sitting), so presumably (?) a lot of the healthier, higher-NEAT people have higher NEAT because of their jobs.  That’s not an airtight inference, since it’s conceivable that the health effects might go away if you control for non-occupational NEAT, but I would be surprised if the researchers haven’t looked into that question (I am not an expert, almost everything I know about this comes from Google Scholar searches over the last few days)
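The aggregation point above can be made concrete with a toy calculation.  (All of these numbers are invented purely for illustration – they don’t come from any of the studies mentioned.)  An intervention can look beneficial on average even when a sizable subgroup is actively harmed by it:

```python
# Hypothetical numbers: 80% of people gain a bit of health from
# standing more at work, while 20% (bad knees, bad backs) lose more
# per person than the responders gain.
harmed_fraction = 0.2   # assumed share of people harmed by standing
benefit = 0.5           # assumed per-person gain for responders
harm = -1.0             # assumed per-person loss for non-responders

# Population-average effect: a weighted mean over the two subgroups.
average_effect = (1 - harmed_fraction) * benefit + harmed_fraction * harm

print(average_effect)   # 0.2 -- positive "in aggregate"
print(harm)             # -1.0 -- yet one person in five is worse off
```

This is why per-subgroup (“responder” vs. “non-responder”) statistics matter: the single aggregate number is compatible with wildly different individual outcomes.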

This might be overly simplistic, but speaking personally: standing in place is a lot more difficult for me than *walking*. I haven’t looked at any science about this but I can easily backpack 10-15 miles in a day, and I’ll be tired and maybe sore, but not in pain. On the other hand, when I was a part time cashier, 4 hours of standing at a time was hell on my knees and lower back. So perhaps the benefits of standing are in fact the benefits of walking around?

I haven’t looked for research on this question specifically, but this makes sense, and James Levine himself prefers a treadmill desk to a standing desk:

“My computer is stationed over the treadmill,” he said. “I work at 0.7 miles an hour.”

A stand-up desk might seem simpler, but he prefers the treadmill.

“Standing still is quite difficult,” he said. “You have a natural tendency to want to move your legs. Zero point seven is the key. You don’t get sweaty, you can’t jiggle too much. It’s about one step a second. It’s very comfortable. Most people seem to like it around 0.7.”

(via moralitybog)

earnest-peer:

jadagul:

nostalgebraist:

thirqual:

jadagul:

Ezra Klein has a really interesting article on Hillary Clinton, and why people who work with her in person like her so much more than everyone else does.

The answers startled me in their consistency. Every single person brought up, in some way or another, the exact same quality they feel leads Clinton to excel in governance and struggle in campaigns. On the one hand, that makes my job as a reporter easy. There actually is an answer to the question. On the other hand, it makes my job as a writer harder: It isn’t a very satisfying answer to the question, at least not when you first hear it.

Hillary Clinton, they said over and over again, listens.

I found that answer satisfying, because it fits in with how I understand people. People like being listened to. Hillary is good at listening; Bill is good at projecting the appearance of listening in superficial interactions.

Sorry, but why is the conclusion not that the people who work with her have been coached to give the same answer to those questions? That would make perfect sense; it would be step #0 for image management.

Many of the people interviewed aren’t current staffers, and some of them only worked with her quite a long time ago (like Sara Rosenbaum in 1993-4).  It’s possible that they’re all following the script in order to curry favor with Clinton in anticipation of her probable presidency, but that’d be a startling level of obeisance compared to (as an opposite extreme) what fellow Republicans have said about Ted Cruz.

That said, the tone of Klein’s article does feel quite … hagiographic, in a way that makes me skeptical.  It makes me think of Steve Jobs’ “reality distortion field.”  And of course Klein may have his own political reasons.

Huh, I didn’t find it that hagiographic. It did a very nice job of stopping about ¾ of the way through to remind me of all the bad things and screwups she’s been involved in (especially Iraq).

It probably does matter that I’m fundamentally disposed to like Clinton–she’s a detail-oriented centrist technocrat wonkish pro-status-quo establishmentarian, which is about what I look for in an executive.

The reality distortion field stuff is interesting, because Hitchens described the Bill Clinton White House as “cultish”, specifically pointing out that former staff often behaved like ex cult members, saying things like “I don’t know why I believed, but I believed”.

This is of course very different from Hillary, if people still praise her ages later.

Also, I recall an article in which Secret Service members sang a different tune about her: for them, she was an awful boss.

(via earnest-peer)


veronicastraszh:

nostalgebraist:

zjemptv:

nostalgebraist:

baroquespiral:

nostalgebraist:

Thanks, Zinnia, I am sure it will come as a shock to her that there are treatments for OCD, given that mysteriously OCD somehow continues to affect her life despite being treatable, I mean lol that’s weird right? also I bet she has not tried yoga, have you suggested that

holy fucking SHIT whatever else anyone thinks of the original article, how privileged do you even have to be to look at a price tag ranging from $20,000 to $50,000 (from what I’ve looked at) and a) go “There is that, but” b) then start talking about life-year bullshit

the only other place I ever hear this “adjusted life-year” stuff is around Effective Altruism, where it’s used in relation to the fact that poor people lose QALYs over all kinds of stuff that could be resolved if they had more money, which is why you give money to them.  and we live in a culture where any kind of cost can be abstractly converted to money, where economic analysts convert the costs of climate change and war and genocide to price tags so yeah you could argue with anything that… poor people are wasting money by being poor, if they really didn’t want to be poor they could just stop wasting all that money! but jesus fuck what kind of Dickens villain would you have to be to do that.

I mean people these days usually go to college specifically because it will allow them to make way more money down the line, and go into incredible debt expecting to be able to pay it back with that money in the future, and there are still people who can’t afford college.  you could probably do something like this with home ownership, not that I’d know, I’ve never really sat down to calculate that in terms of fucking QALYs because all I know is for the foreseeable future, I can’t afford a damn house

Wow, yeah

For easy reference, here are the tweets @baroquespiral is talking about.  The first is a riff on the line from the original piece “[…] there are social and financial repercussions to transitioning that I cannot afford emotionally or financially”

[image: screenshots of the tweets]

I don’t understand why you think suggesting pursuing potentially effective OCD treatments is the equivalent of suggesting someone treat their OCD with yoga. There are a number of established treatments that can be effective and are not related to yoga. They have an evidence base that’s stronger than what you’re apparently trying to connote when you compare this to yoga.

Also, disability-adjusted life years are a metric used by the WHO and other public health organizations. The original poster made reference to the potential quantifiable repercussions of transitioning; I made reference to the potential quantifiable repercussions of not transitioning. This doesn’t really have anything to do with telling poor people to stop being poor or telling people to go to college and it’s pointlessly dismissive to characterize an established metric in this way.

My point was that someone with diagnosed OCD is likely to be aware of the standard treatments.  As with many psychiatric conditions, the treatments are ineffective for some people, are only partially effective for others, and may not mitigate some symptoms as much as others.  The fact that someone’s life is impacted nontrivially by OCD symptoms provides, in itself, very little evidence that they haven’t pursued treatment.

I’ve often seen people use “have you tried yoga?” as a stock example of unhelpful mental illness advice, and the yoga line was just a reference to that.  I didn’t mean to suggest that standard OCD treatments lacked an evidence base, just that “there are treatments for OCD” is unhelpful mental illness advice.

I wrote more about the relevance of DALYs to this particular case here.  I didn’t take @baroquespiral​ to be responding to the invocation of DALYs in itself – I think their point was that it looked like you were doing a cost-benefit analysis (DALYs are commonly used in calculating cost-effectiveness), and telling someone that the benefits are worth the costs is unhelpful when they simply can’t afford the costs to begin with.  (One can be mistaken about the magnitude of the social costs, but “I can’t afford it financially” is a pretty solid barrier, no matter how good the other side looks.)

But wait, where did numbers like 20,000 - 50,000 come from? What are those for, in relation to what Zinnia was talking about? Even if self-medicating, HRT costs nothing like that.

Plus you are making your own assumptions. The original author did not explain her precise financial situation. She never said, “I just cannot afford HRT.” She said there are “financial repercussions.” Which, yep, there are. That’s not the same thing.

Plus, you know, other people are reading this. If one person gets to publish their “transition is too hard” narrative – well maybe it is for them. I ain’t gonna kick down their door. But maybe they are trapped in a psychological self-defeat cycle of their own making.

I dunno. Neither do you. Neither do THEY, not really. (Sometimes people are wrong.)

Others deserve to hear our side also.

I spent hard decades LITERALLY UNAWARE that I had real options.

You know who first explained my options to me – no lie (I mentioned this before) – it was Zinnia Jones.

So yeah, you can take HRT. You can fix your shit. Sometimes. She told me that. She’s saying those same things now.

If this woman really-really cannot, then so it goes. Sucks for her.

HRT works. If you’re trans, you can probably make it work. No, really you can. It seems so hard, and then you do it, and then it wasn’t so hard at all.

This is a really fucking important message.

if you’re trans, HRT works better than you probably think it will. This is true for both body and mind. So many of us wait so long, carving our bodies and longing for the guts to eat a gun. Cuz we have zero hope.

I didn’t think I could. I was planning my fucking suicide. I reached out to Zinnia and she explained a bit about HRT, some real options I could try. I tried them. They worked.

Rob, you know me – I am speaking hard-learned truth.

Trust me, there are reasons we are saying these things.

I respect that.  And I understand, I think, what it was about the original piece that made people angry (not just critical).  Several people said it sounded like it was a celebration of a miserable place they used to be in, and that they’re immensely thankful they’ve left.  Esther said it might be like the way she feels when someone converts to Catholicism.

Much of the reason the original article interested me was that it was a perspective I had never seen before.  I didn’t take it as some sort of “transition: not as great as you’ve been told” thing.  I just had never heard anyone describe being in that position.  (I’m not sure what other people made of it; it came up on my dash a lot and all I remember seeing was a number of tags and comments like “I relate to this.”)

The thing that got me worked up about some of the angry reactions I saw was that they seemed … angry not at the effect the article was going to produce, and not even at the author for the way she chose to write, but angry at the author as a person.  Like, rushing in to pry apart the article and show that somehow her experience didn’t hold up, that it was inauthentic or incoherent, or that she “ought to know” not to have the reactions she was having.  (I didn’t think this about your, Veronica’s, responses, just Zinnia’s and especially collaterlysisters’.)  It would be bad if this were the only story out there, but the rush to – take it down, debunk it, almost – like there couldn’t even be one such story as opposed to zero – that rubbed me wrong.

I don’t post about everything I see online that I don’t like.  In this case I did get quite worked up about it.  Looking in my drafts I have all of 3 unfinished attempts to explain why (although one is only a few sentences).  I guess it gets back to the stuff in the Sandifer empathy post and the worry that I have to be insincere to be taken as authentic, and that if I try to loosen up and be sincere I will be taken as inauthentic.  It’s the kind of ironic bind that a brain like mine will get just stuck on and stay there.

(via starlightvero)