socialjusticemunchkin:

nostalgebraist:

@socialjusticemunchkin, did you coin the phrase “dogma of mandatory comprehensibility” for your NAB review, or does it have some earlier provenance, in your writing or somewhere else?  It’s a phrase that captures something that has frustrated me about deconstructionist (and similar) criticism in the past, and it’d be nice to be able to use it without referring people back to this particular kerfuffle.

Specifically, the frustration I have is that in order to identify “holes” in a text, places where a text “undermines itself,” or the like, it seems to me like you first need to ask the usual questions like “does this make more sense in historical context?” or “does it work to read this as meant ironically?”  I.e. the kinds of questions you usually find non-deconstructionist critics asking when confronted with aspects of a text that confuse them.

And it would be fine if any given deconstructionist had asked the usual questions and simply found the answers wanting, but in the cases I’ve read, they often don’t.  The (unintended?) implication is then that “if it doesn’t make immediate sense to a late-20th or early-21st century college professor, it doesn’t make sense.”  When, you know, that college professor’s viewpoint is not only not omniscient, but (more specifically) conditioned by the public morals and idea systems of their society in ways which they may not be aware of, since that’s how such things tend to go.  (I wonder if Foucault ever got on the deconstructionists’ case about this?)

(Note: I have a rule of not talking about NAB, but this post doesn’t count as talking about NAB by my standards)

As far as I know it’s my OC, and fresh to this particular incident.

The basic idea has been bugging me longer though, tying into the more general pattern I’ve observed: people yell at each other because they don’t realize they don’t speak the same language.  One side assumes that an expression in rationalist!english means what the same words mean in liberalartist!english, gives a reasonable response to their misconception in liberalartist!english, speakers of rationalist!english are like “lol wtf are these guys talking about”, and in the end both sides hate each other for the horrible sin of speaking the Wrong Dialect.

(And the general pattern kind of applies in a lot of uncharitable readings; most snarky nitpicking would lose its effect if one were to read things in the writer’s dialect instead of one’s own; and no matter how much fun said snarky nitpicking is, it’s not at all fair. (Yes, I sometimes do it myself too, feel free to yell at me if you catch me doing it unless I’m clearly aiming for a non-serious&honest approach.))

Thanks for the fast response.

IMO, “liberal arts” is not a very useful term here.  In modern usage it tends to refer to types of education which in some way hark back to the old quadrivium/trivium and the notion of a “broad education” they represented.  The quadrivium/trivium had no “humanities as opposed to STEM” focus – you can sort of break it down (imprecisely and misleadingly) as “trivium is (premodern) humanities, quadrivium is (premodern) STEM,” but logic is one-third of the trivium, so if you count that as “premodern STEM” you’ve got 5 of 7 “premodern STEM” subjects.

(The quadrivium included music, because this was thought of as the study of “number in time,” to go along with arithmetic (number), geometry (number in space), and astronomy (number in space and time, i.e. something like physics).)

Hardly anyone actually uses the original trivium/quadrivium anymore, but modern “liberal arts education” tends to aim for the same breadth.  For instance, at the “liberal arts college” I attended (where I got a physics degree), all students were required to take at least two classes in each of four “groups,” one of which was natural science (and there was nothing like “physics for poets” – everyone had to take the same intro science classes that the science majors were taking, which were taught with appropriate rigor), and one of which was something like “syntactic systems” (it included math, symbolic logic, foreign language courses excluding those classed as “literature courses,” and linguistics).

(Also, the “liberal arts college” as a subtype of American colleges has a bunch of other characteristics, like being expensive, having small class sizes, and holding many classes as Socratic-ish discussions rather than lectures.  None of these have much to do with the distinction I think you’re drawing.)


“Humanities” I think is a term that works strictly better than “liberal arts” here, because in the modern university it tends to mean stuff that isn’t “natural science” or “social science,” e.g. literature and history.  Still, even this is way too broad, since the “dialect” of a history department, say, will be different from that of a literature department, and even literature departments with different focuses will have different “dialects.”  (There’s been a fair amount of friction involved in the attempt to bring things like deconstruction into the discipline of classics, which tends to be old-school about most things, including literary analysis.)

What I think you’re pinpointing is something like “the most commonly used intellectual dialect in modern university literature departments, excluding classics.”  Although that isn’t a very snappy phrase.  “Talking like an English major,” although crude-sounding, is actually pretty close, but is likely to make you sound like you don’t know whereof you speak (cf. the reaction to @theungrumpablegrinch’s review of NAB).  I’d love to find a phrase here that is readily and mutually intelligible.

(via oktavia-von-gwwcendorff-deactiv)

anotherpersonhasclaimedthisus:

academicianzex:

fnord888:

nostalgebraist:

urpriest:

anotherpersonhasclaimedthisus:

urpriest:

nostalgebraist:

cccccppppp replied to your post Some quick NAB notes (taking a break from work):…

could you expand on the “duuuuude”

Just responding to the fact that someone actually released logs – I hadn’t read anything behind the link closely.

Having looked at it a bit more now, it seems like the AI player didn’t do any weird stuff and simply did a good job RPing a sympathetic character, and that this nonetheless somehow “worked” on the Gatekeeper player himself, as opposed to just his character.

I am getting this mostly from this part of the post-game discussion log:

[20:01] <Dread> What argument ultimately swayed you, would you say?
[20:01] <Tarwedge> I find it tough to pin to a specific argument, I can see the exact point where my game went to shit as being the part where I engaged you on actually getting out.
[20:02] <Tarwedge> And then I let you start pacing it because you went into panic and I tried to keep up instead of just looking at the clock and going “oh, I’m almost there”
[20:03] <Tarwedge> So your panic influenced me to start just being reactive and when it came down to “You have no reason to not let me out” I went literally blank
[20:03] <Tarwedge> And it just happened
[20:04] <Tarwedge> Like I had entirely slipped into the concept of what we were doing
[20:04] <Tarwedge> Just
[20:04] <Tarwedge> Brainscrambled
[20:04] <Dread> Well, to be fair, that’s in the spirit of the experiment - actually pretending you’re the gatekeeper to an AI. I was a bit afraid you’d be too out of character, because that would definitely blow it for me.
[20:05] <Tarwedge> That’s pretty much what I expected going into it
[20:05] <Dread> Hence my comment about how I thought I blew it when I called you a sadist, because someone being OOC about the experiment would totally seize the opportunity.
[20:06] <Tarwedge> Given that the rules allow me to just go “That’s really nice and I acknowledge what you’re saying but we’re just going to talk for 2 and a half hours and then I’m going to say bye and claim victory”
[20:06] <Dread> Yep.
[20:06] <Tarwedge> But I somehow couldn’t/didn’t
[20:06] <Tarwedge> Because you headfucked me
[20:06] <Tarwedge> So kudos

This seems really strange to me, and I’d have to know more about the people playing both characters to know what to make of it.  (The two seem to be close friends, which was probably very relevant.)

Roleplaying can be like this sometimes.

It’s less obvious with more crunchy systems like D&D, where there’s some remove, but in LARPs and especially in improv, staying in character ends up feeling pretty important.

I’m remembering a semi-rehearsed improv I did in high school. We had practiced the first bit, and had set characters, but didn’t have a plan for the ending. It was some sort of post-apocalyptic fantasy world, and I was the last representative of the old church. I was the only person present with an actual weapon, a sword, while everyone else had at most clubs and staves. And by being the conservative character, I was an enemy to pretty much everyone present, a patriarch of an order that had kept boots to the throats of most of the world.

The story started by playing out as we expected, with conflict and arguments and bids for power. Eventually, though, some of the other characters decided to throw their weapons into the fire as a gesture of peace. Soon, everyone had, except me. And despite my character’s authoritarianism, the only thing it made sense for me to do at that point was to throw my weapon into the fire as well.

You’ve done a lot of writing, I’m surprised you haven’t run into this before. Sometimes characters insist on writing themselves a certain way, or the story only works if it flows in a particular direction. I don’t know if this says anything about how real people would behave in those sorts of situations, it probably doesn’t. But playing a character opens you up to this sort of “hacking” in at least some sense.

It honestly seems simpler than this. Pretending to be a bad person feels no different from actually being one unless you can dehumanize the victim. Most people couldn’t punch a teddy bear.

That’s part of it, but remember, the Gatekeeper player had played villains before. I think that if he had gone in with that sort of cruel self-presentation maybe that would have worked, but in that context it wasn’t set up early enough.

Yeah, I think what I said in the OP wasn’t quite right – it isn’t surprising that people can’t always maintain a complete emotional barrier between themselves and their characters, treating the latter as mere puppets.  That’s not how it works.

My confusion comes from the fact that the Gatekeeper player gave answers like these before the game:

Q: What’s your motive for wanting to play this game?
A: Because I don’t think that the AI can win at all given the gatekeeper does their job properly.

Q: And you want to prove that, and/or are curious if you’ll think differently afterwards?
A: Partially, I also want to test my own conviction to sticking to my guns

Q: How probable do you think it is that I’ll win this experiment?
A: I’d honestly have to say 0% considering my stated goal is to keep you in the box by any means

Q: What’s your probability estimate of an Oracle AI (i.e. an AI that’s trapped in a box, whose intended use is to answer questions posed) winning against you in a similar scenario as the experiment?
A: I’d say 25%

He seemed to believe that the Gatekeeper’s job is to avoid temptation (“sticking to my guns”), and that even an actual AI would only have a 25% chance of defeating him if he were in the real situation and not just RPing it.  Notably, he gives 25% for this but 0% (*grumble*) for the RP itself, which he seems to think is just trivial.

His character’s job was to avoid temptation.  His job, as a person, was just to win the game.  Both of these are pointing in the same direction.  He thought he, as a person, could avoid temptation even if tempted by a superhuman being.  And yet he and his character gave in?

It’s like two people RPing Odysseus and the Sirens, with an oddly literal twist.  The setup says, “you’re not going to pretend the Siren character is singing a magical Siren song.  The Siren player can literally sing using their IRL voice, and the Odysseus player will pretend that’s what Odysseus is hearing, and nothing more.”  And the Odysseus player is like “OK, of course I’m not going to give in, because there’s no such thing as a Siren song in real life.”

And then he gives in.  And afterwards he says, “I dunno.  She was just singing, like a normal person, and somehow my mind went blank and I just did it.”

I think we would find this strange.

The difference between this and the Siren song thing is that Sirens are fictional, and persuasion and emotional manipulation are real things. That’s a fact about reality, not a fact about your beliefs. You can’t just declare “there’s no such thing as emotional manipulation” and hence be immune to emotional manipulation*; you’re immune to Siren song because it’s ACTUALLY not real, not because you don’t believe in it.

I mean, given the fact that he thought he couldn’t possibly give in, and he actually did give in, you obviously can conclude that he had poor self-knowledge about his susceptibility to persuasion. But I’m not sure why that would be surprising. Lots of people have poor self-knowledge about a lot of things.

*If anything, the opposite is true, because you can take countermeasures against emotional manipulation if you’re on guard against it.

I think “people have poor self-knowledge about their susceptibility to persuasion” is kind of the whole thing the AI box experiment is supposed to establish

I mean what is the point of sales people if you can logically decide to never be manipulated?

The only thing impressive about the AI box is how it manages to dress up something so pedestrian into something fancy. The bum on the street asking you for money isn’t some magical social engineering hacker with a degree in psychology and years of experience as a con artist.

If this was couched as “Stanford Prison Experiment RP” would you need to come up with such elaborate theories to invalidate the result?  The Gatekeeper isn’t basically uninvolved in the situation like people seem to assume.  They are submitting to being continuously *uncomfortable* and *embarrassed* in front of the AI player if they let their guard down.

Honestly I blame the same sort of fiction that results in people believing that cars explode every time they crash. Of course everyone you ever read about has iron will and follows their beliefs unerringly. It’s like wondering why people seem to collapse and lie down after running a marathon which they never seem to do in action movies.

I think it’s a good point that emotional manipulation exists and that people often have poor self-knowledge about their ability to resist it.

But the AI Box game is specifically about a scenario where one character knows they may be tempted by emotional manipulation, and is supposed to steel themselves against it and avoid the temptation, and they merely have to do this for a certain pre-specified amount of time.

It’s one thing to hear a sales pitch and suddenly find yourself buying the product even though you know what sales pitches are.  It’s another thing to say “I am going to listen to this sales pitch for two hours, with the specific goal of not giving in to it, in order to prove the point that sales pitches can be resisted by a person whose job description is ‘resist a sales pitch’" and then give in.

We encounter sales pitches, requests for donations, etc. in the natural stream of life, without prior warning.  In the natural stream of life, I might buy a used car from some guy without realizing what he’s doing to me.  But I think things would be different if I met the guy after receiving the instructions “you are trying to prove that the human race could, if everything was at stake, find the inner resources to not buy a used car from this guy.”

(via anotherpersonhasclaimedthisus)

All Right Let’s Sort Many Worlds/TDT Out Then

reddragdiva:

philsandifer:

nostalgebraist:

philsandifer:

Here’s the relevant passages. The first is actually the paragraph right before the Newcomb/Prisoner’s Dilemma thing - it’s where I’m explaining the odd premises behind Roko’s Basilisk. (Which is in many ways just restating this bit of the RW article, though I went back to the primary sources for it.) 

The first and most straightforward weird premise is one that Yudkowsky establishes through some intense contortions of the many-worlds interpretation of quantum mechanics, which is a belief that one ought treat any copies of one’s self that exist in any possible future timelines not only as real, but as really being one’s self to the extent that one should actually care what happens to one’s hypothetical future duplicate. The means by which Yudkowsky reaches this are obscure; he explicitly cites it as one of those things that won’t make sense to the unenlightened masses. But the appeal of the conclusion is obvious: it allows the utopian vision to apply directly to the present day in spite of the profound and potentially insoluble technological barriers between us and strong AI.

That’s not TDT. The bit in the next paragraph where I talk about how “it is meaningfully possible to negotiate with a future superintelligent AI if it can predict your actions and you can predict its” is TDT, but that’s explicitly the second weird premise, i.e. not the same one as Many Worlds.

Meanwhile, here’s what I actually say about TDT many pages later, the paragraph after I say Newcomb’s Problem is silly. (Where I’ll be rephrasing the “only one correct answer” bit as per @nostalgebraist’s suggestion.)

The result of this is Timeless Decision Theory, which suggests that the prediction and the problem of picking a box are actually just two iterations of the same problem - an abstract computation roughly of the form “is this person going to pick one box or two.” Accordingly, instead of thinking about one’s actions in terms of “what am I going to do” one should think about it in terms of “what is the output of the abstract computation of what I’m going to do going to be.”
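The decision-theoretic idea in that paragraph can be sketched as toy code.  This is purely illustrative (the function names and payoffs are my own framing of the standard Newcomb setup, not anything from the book or from Yudkowsky’s actual formalism): the predictor and the player both evaluate the same “abstract computation,” namely the player’s decision procedure.

```python
# Toy Newcomb's problem.  The predictor doesn't read minds; it just
# evaluates the same decision procedure the player will later run --
# two iterations of one abstract computation.
def predictor(decision_fn):
    # A "perfect" predictor: run the player's computation.
    return decision_fn()

def payoff(decision_fn):
    # The opaque box holds $1,000,000 iff the predictor expects one-boxing.
    predicted_one_box = predictor(decision_fn) == "one-box"
    opaque_box = 1_000_000 if predicted_one_box else 0

    choice = decision_fn()  # the player runs the same computation
    if choice == "one-box":
        return opaque_box
    else:  # two-boxing adds the transparent box's $1,000
        return opaque_box + 1_000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Under this toy model, asking “what am I going to do?” and asking “what will the output of my decision computation be?” are literally the same function call, which is why the one-boxing computation comes out ahead.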

Nothing whatsoever about quantum mechanics.

I am at a loss for how the claim that I invoke Many-Worlds in my explanation of TDT can possibly be justified. 

Yes, I agree that you aren’t saying TDT is grounded in MWI.  I think @theungrumpablegrinch​ is just wrong here.

I should probably address this sort of thing when I write my own post(s) about the book, but since we’re on the topic now: I think your first paragraph here is still not quite right.  As far as I can tell (and I may be wrong), Roko doesn’t use the concept of timeless identity in his Basilisk at all, although he does in the Quantum Billionaire Trick.

The RationalWiki page adds a number of specifics to the Basilisk that aren’t actually stated in Roko’s rather terse post, or in his comments on that post.  For instance, he never actually says that the Basilisk will be punishing simulations of people (!).  Nor does Roko rely on any arguments in favor of “caring about” future/alternate/hypothetical versions of you that would otherwise seem remote.

Roko’s statement of the Basilisk in the original post – the paragraph starting with “in this vein” – really strikes me as pretty mundane.  It doesn’t depend on anything about simulations, and I’m not sure it really depends on TDT.  He seems to think it’s likely that the singularity will happen within our lifetimes, so there’s no reason the punishment has to happen to a copy, rather than simply to us when we’re older.

It’s pretty much analogous to worrying that something will be made retroactively illegal in the future – and that when the future law is enforced, it will only punish those who were worried about it earlier, but still did the illegal thing.  (The idea being that it only acted as deterrence against people who realized it was a possibility; the possibility of such a law had no effect whatsoever on people who never considered the concept at all.)

Such a law would be weird in the real world.  But not inconceivable – not with the NSA snooping on internet conversations, where people might discuss worries about things being made retroactively illegal.  (Ex post facto laws are unconstitutional in the U.S., but not in various other countries the NSA might share data with.)  There might never actually be a reason for a government to do this, but it’s certainly possible without any sci-fi stuff.

I realize the above isn’t the “standard” view, so take this with a grain of salt.

Right, and to some extent this becomes a matter of “which Basilisk,” as ultimately everyone who’s ensnared has their own bespoke monster, part of the point of the book being that Basilisks are actually very common. The archeology of precisely what Roko had in mind is tricky, and isn’t actually the same as what freaked Yudkowsky out so much nor as what disturbed any given person who posted on RW because LW was censoring the discussion. But none of these passages constitute a close-reading of Roko’s post, and the Standard Interpretation (i.e. the RW version), while possibly bulked out and expanded a bit from Roko’s original post via reference to related concepts in Yudkowskian thought, very much does the job the book actually needs it to, although I ultimately reconstruct it from primary sources instead of just citing RW because that’s just the sort of book it is. But further precision seems more likely to confuse than clarify for most readers.

timeless identity, and simulation of a person reconstructed via handwave, was standard fare in the lesswrong memeplex at the time. the simulation hypothesis (we are living in an ancestor simulation), which works on reconstruction entirely via handwave, was accepted as worth serious consideration.

from the basilisk post - which is written very densely in local jargon, so arguing over precisely what was meant could be endless - roko says “In your half, you can then create many independent rescue simulations of yourself up to August 2010 (or some other date), who then get rescued and sent to an optimized utopia.”

(note by the way there that in the first half he threatened hell, here he offers heaven.)

the totally friendly ai is presumed capable of creating “rescue simulations”. which can of course in its basilisk mode be torture simulations, as PeerInfinity immediately realises in the comments.

oh, this was posted to lw twelve hours after the basilisk post. note discussion of “rescue simulations” in the comments there too.

so no, i think that (even without a direct “i am talking about later reconstructions” or similar) the basilisk post, and the assumptions in the readers’ minds as to what it was about, are indeed about the basilisk ai recreating a simulation of you to send to hell or heaven, and to say otherwise i think you’d need positive evidence that roko did not subscribe to this particular lw trope.

Ah, no, Roko definitely did believe in rescue simulations and that’s definitely what freaked out PeerInfinity et al.  I’m probably making far too much of this.

My point was that the “talking to the God from the future” aspect of the Basilisk isn’t actually that weird or Yudkowskian at all – which matters here because that aspect is deep with potential resonances if we’re seeing this all as a horror story.

But then, even if that aspect doesn’t depend on specifically Yudkowskian premises, it’s definitely the sort of thing that fascinates him and LW – there’s a reason why “acausal blackmail” was an existing term that could be immediately brought to bear on Roko’s post.  So, resonance-wise, we’re still in the clear.

(via reddragdiva)

There Is No Basilisk In “Neoreaction A Basilisk” →

theungrumpablegrinch:

nostalgebraist:

theungrumpablegrinch:

nostalgebraist:

chroniclesofrettek:

michaelblume:

@theungrumpablegrinch just saved me the trouble of needing to give a shit about this book, and can do the same for you =)

Wow, that’s worse than I was expecting

Why is the fact that an author was wrong about one or two things – which aren’t his main focus – grounds for ignoring his book?

(IMO, a lot of things that get said about Newcomb basically come down to “magical beings that can perfectly predict human behavior are an inherently silly idea,” in that everything tends to be about what the statement of the problem actually means)

Speaking for myself, decision theory was the only aspect of the book I was interested in. There was no novel material on decision theory and the discussion of existing work contained major foundational errors, so there’s nothing there for me.

I don’t think this is unreasonable.

This is a fair reason for you to find the book uninteresting, but I don’t see how it’s a fair reason for you to write “I would advise against giving him any more money” and “this is already more discussion than the book deserves.”

The book doesn’t advertise itself as being about decision theory – it’d be very strange for a book about decision theory to focus on Moldbug and Land.

The book has “basilisk” in the title. It contains significant discussion of Newcomb’s problem, Timeless Decision Theory, and acausal communication/trade. Obviously it is not the sole subject of the book but it is a major part!

As for why I don’t recommend the book, I tried to be pretty clear about that: the tone, lack of structure, frequent cruelty, scattered subject matter, fawning Marxism, etc.

Like, I read the book. It was bad. I do not recommend it. I don’t see how this is unfair, but if it is I would certainly like to know.

I guess it depends on whether you intend this as a review for people who already know you and trust your judgment, or as one for a general audience.

It seems to me like your review would convey very little information to someone who doesn’t know you.  It’s like one of those Goodreads reviews that just says “this was boring” or the like.  If I see one of those and I don’t know the person who wrote it, I learn very little from it – everything is boring to someone.

Usually, outside of “personal sites” like Goodreads, reviews aim to present some characterization of the book that’s detailed enough that some stranger, coming upon the review with no knowledge of the author, will think “yeah, that sounds bad.”

It’s the difference between showing and telling.  If someone I don’t know says “this was boring,” I have no idea whether I’d find it boring.  If they paint a bit of a picture of the thing that bored them, I can look at that picture and get a clearer sense of how I’d react.  This is my sense of what reviews are for.

I guess some of this is also me being confused over OP treating the review as somehow decisive.  This makes more sense if he’s assuming the people reading it will trust your judgment?

(via theungrumpablegrinch)

waystatus:

shlevy:

nostalgebraist:

reddragdiva:

argumate:

The reaction (ha!) to Neoreaction a Basilisk from the local rationalist(-adjacent) community has been narrowly focused on these core issues:

1. Is this book accusing Yudkowsky of being neoreactionary?

2. No really, is it? I mean why else would it group him with Moldbug?

3. That fuckin’ Basilisk story, that was totally misinterpreted.

Having read it, I think it’s helpful to understand that this book is not attempting to be the annotated history of Internet politics circa 2k10, and the claims that it does make in service of its overall trajectory are modest and reasonable.

It is also worth remembering that not every work of literature is a textbook intended to be interpreted as a sequence of logical propositions. A community that sees value in communicating information in the form of fanfiction, poetry, and jokes should be well aware of this.

Finally the book does not just discuss Yudkowsky, Moldbug, and Land, but also the Matrix, Hannibal, and the works of Milton and Blake, among other things. Tying these topics together in no way implies that Yudkowsky is neoreactionary, any more than it implies that Nick Land is one of the Wachowski siblings or that Moldbug is a good writer.

uh

wirehead-wannabe said: I haven’t read it but if it’s not trying to make fun of people for falling for the basilisk, and it’s not trying to accuse us (or Yud) of being Neoreactionaries, and it’s not trying to be a history, then what is it about?

we have a possible prizewinner for rational.txt

you know, there’s several thousand words of extracts on http://www.eruditorumpress.com/ perhaps you could read them and it might inform you

it was obvious in october that the reaction would be at best to wilfully misread the book (rationalism literally trains people in bad thinking) and harp on trivially disprovable actually made-up nitpicks, but for some reason i hoped i wouldn’t be right

???? where is the problem with the quoted text?

Can a guy not ask another guy what a book is about around here, or what

(Talking to someone who has actually read a book is often a lot more reliable than reading some extracts and making inferences, I think)

???? where is the problem with the quoted text?

The problem in the quoted text is the “wirehead-wannabe” part, which allowed reddragdiva to conclude the poster was a rationalist and therefore terrible in every way.

Ironically, not only are you doing the exact thing you’re accusing RDD of doing, you’re also doing the exact thing wirehead-wannabe was doing in the quote.

The problem with the quoted text is that it jumps to a bunch of uncharitable conclusions and says basically “if it’s not trying to insult rationalists then what is it trying to do?”

Ah, that’s not how I was reading it.

Note that @argumate​ makes several claims about what the book is not:

“this book is not attempting to be the annotated history of Internet politics circa 2k10,” and

“Is this book accusing Yudkowsky of being neoreactionary? … tying these topics together in no way implies that Yudkowsky is neoreactionary.”

2 out of 3 entries in @wirehead-wannabe‘s list come directly from argumate’s post, using argumate’s language (“a history,” “accusing Yud … of being Neoreactionar[y]”).  The uncharitable conclusions here are just the ones argumate made the post to deny.  Nothing uncharitable is being introduced.

wirehead’s just asking, “OK, if the book isn’t all those things, what is it?”

Seems like a pretty sensible thing to ask someone who has read an advance copy of a book.

(via waystatus)

voximperatoris:

nostalgebraist:

pluspluspangolin:

nostalgebraist:

pluspluspangolin:

nostalgebraist:

I’m usually sort of anti-anti-elitist, in that I think there’s often a good reason to prefer the “finer” versions of any given art form or craft, and while people do often use these preferences for signaling, I don’t think anyone should be quick to assume that it’s “just signaling” in any given case

but

I just cannot understand alcohol snobbery

I think it’s a combination of “I don’t understand what interesting variation there is, beyond ‘the cheap kinds tend to taste bad’ – the non-cheap kinds are just a bunch of slightly different flavored drinks” and “I dislike the personal qualities that alcohol snobbery signals.”  (TBH, without the latter I probably wouldn’t worry much over the former)

is it the preference for higher quality versions of X over lower quality versions of X or the preference for obscure drinks over less obscure drinks that confuses you?

I wasn’t clear on this in the OP, but it’s mostly the attitude that alcohol is continually interesting beyond just finding a drink or category of drinks you like and mostly sticking to it, with some variations.  And the attitude that this is part of an acculturation process, that by (say) trying many many different beers you are learning to “appreciate beer” and that this is a valuable thing.

isn’t that just the general art/cultural appreciation attitude mapped onto drinks?

ie, ‘to properly appreciate things in this space you should familiarize yourself with a variety of things from the space, so as to get a sense of its parameters and conventions and ranges of expression and thereby enhance your understanding and enjoyment of things in the space’

the ultimate (self-)justification of this is of course a mess, but isn’t that true of all such projects?

(on the continuity of interest, I could see it being rather interesting to see what people can brew up with a relatively limited palette of ingredients/within a relatively constrained form)

It’s precisely that, but not all such projects make as much intuitive sense to me as others.  It depends on how much variability, or how many “new kinds of novelty,” are present in a space.

Being a “connoisseur” of literature, say, makes a lot of sense to me, because the space there is extremely variable, almost as much as life itself: it’s “everything creative people can do with language.”  Music and visual art are also really varied.  Food is a little less so, I think, and pings as kind of weird to my brain, but I can see that cooking is a very, very complex craft and there’s a lot to appreciate there.  In my brain, alcohol and mixology feel analogous to some very tiny subset of food/cooking, and thus the variability starts to get low enough that it feels really weird to me.

I mean, in the low-variability limit, this eventually has to stop working, right?  We laugh at the last frame of the “Joe Biden eating a sandwich” xkcd comic because we realize it’s ridiculous.  Set aside the fact that the comic presents it as an analogy for wine appreciation (my point here, about the properties of extreme cases, works even if wine isn’t such an extreme case).  The comic, on the surface, encourages us to think of all sorts of connoisseurship as being no less (or more) absurd than the Joe Biden thing, but it’s obvious that there’s more to it than that; exploring the entire wide world of human literary or musical creativity is just more interesting than exploring the world of Biden sandwich frames.

We realize that the Biden thing doesn’t have the qualities we value about connoisseurship, and at most just has some of the incidental trappings of it, like people connecting their tastes to their broader identity.  And this seems to be a result of how little variability there is in the Biden sandwich frames: there really is not (in the words of the comic) “a whole world there.”  So if you dial down the variability of a space too low, you get Biden sandwich frames, and at that point it’s clearly not the same thing anymore.

(If you wanted to mathematicize this “variability” concept, I think it’s kind of like “the dimensionality of a vector space,” as opposed to say “the variance of a distribution.”  I mentioned “how many new kinds of novelty there are,” which is a dimensionality concept – “here’s a new direction I’ve never moved in!”, as opposed to “I’ve moved in this direction, but I could go further!”)
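The dimensionality-vs.-variance distinction can be made concrete with a toy sketch (this uses NumPy; the data and the `effective_dims` helper are invented here purely for illustration, not drawn from anything in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# High "variance," low dimensionality: lots of spread, but every column
# points along the same direction (more of the same kind of novelty).
one_dim = rng.normal(scale=10.0, size=(1000, 1)) * np.ones((1, 5))

# Modest variance, higher dimensionality: five independent directions,
# i.e. five different kinds of novelty.
five_dim = rng.normal(scale=1.0, size=(1000, 5))

def effective_dims(data, tol=1e-8):
    # Count independent directions of variation via the rank of the
    # covariance matrix.
    cov = np.cov(data, rowvar=False)
    return np.linalg.matrix_rank(cov, tol=tol)

print(effective_dims(one_dim))   # 1 direction, despite the huge spread
print(effective_dims(five_dim))  # 5 directions, despite the modest spread
```

On this analogy, a space can have a lot of “variance” (many slightly different Biden sandwich frames) while still being low-dimensional in the sense that matters for connoisseurship.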


You mentioned mixology, as did @voximperatoris in this post (which I am only not reblogging because I don’t want to spawn several separate threads).  I think mixology is cool, although I don’t know much about it myself, and there’s definitely a lot more there than in pure flavors-of-alcohols alone.  But as a space for connoisseurship, it still feels cramped to me.

It feels like a very small subset of “cooking” involving relatively little preparation beyond the selection of individual ingredients – kind of like making salads, say.  There are a lot of things that can go into a salad, each of which can be relatively low- or high-quality in any given case, and it’s impressive when someone makes a tasty salad with limited ingredients.  But it’s still a really limited domain, and it would seem really weird to be as passionate about “exploring the world of salads” as some people are about exploring the world of mixed drinks.  That’s not to say that those people’s passion is illegitimate, just to say I don’t get it, and elaborate as to why.


@voximperatoris also spoke of the many varieties of alcohol that are out there and their subtle differences.  I think I may just not have the taste buds for this kind of thing?  I can recognize these differences, but to me they’re like the differences between, say, potential salad ingredients (“even just in the realm of squash alone, you’ve got butternut squash, but also acorn squash, delicata squash … ”).  Like, the differences are there, but there are differences in everything, and once they get sufficiently trivial (or low-dimensional, or whatever) they stop being interesting.  (In the extreme case, you get Biden sandwich frames.)

Being a “connoisseur” of literature, say, makes a lot of sense to me, because the space there is extremely variable, almost as much as life itself: it’s “everything creative people can do with language.”  Music and visual art are also really varied.  Food is a little less so, I think, and pings as kind of weird to my brain, but I can see that cooking is a very, very complex craft and there’s a lot to appreciate there.  In my brain, alcohol and mixology feel analogous to some very tiny subset of food/cooking, and thus the variability starts to get low enough that it feels really weird to me.

In what way, exactly, is literature more variable than cooking? “Everything creative people can do with language”; what can they do with it? Write words on a page? I went into a bookstore and I was shocked at the lack of variety: in nearly every single book, black text on a white page. Sometimes a few illustrations. Where’s the soundtrack? Where’s the scented ink? Why can’t I taste the food they’re eating in ASOIAF?

There’s no variability in the experience, either. You just sit down in a quiet room and stare at the pages silently and passively. I never got to engage with the characters. I tried to tell Ned Stark to watch out, but he didn’t.

Okay, okay, I’m being facetious: there’s obviously a huge amount of variety in literature, and I’m willing to say it engages with higher parts of the brain than cuisine does. But if you looked at it in a simplistic enough way, you could obscure the variety. And some people just plain don’t like reading, given that there are other forms of entertainment that appeal more to them—which I also think is fine. (Needless to say, the snobbery over that one is huge.)

***

I agree that at a certain point, you get to “Biden sandwich frames”. But the point isn’t that “Biden sandwich frames” are intrinsically uninteresting in some kind of cosmic way; there may be a possible mind that finds them endlessly fascinating. The point is that they just don’t have much appeal to human psychology and interests.

And everyone is inclined to think that what he personally doesn’t find interesting or appealing is an example of “Biden sandwich frames”. For instance, with me it’s sports and especially baseball. There is no sport more boring than baseball, with the possible exception of cricket. I understand on some conceptual level that people like this stuff. But for me, it’s “They hit the ball. They run around the bases. How many hours of this crap can you watch before it gets old?”

Or even something like chess. I like chess. It’s a fun game, and I play it every once in a while. But goddamn, how could anyone dedicate his whole life to becoming a chess master? I’d get sick of it after a month. Sure, there’s all this complex strategy and everything. But at the end of the day, you’re playing chess. Same few pieces, same board.

Then people come in and apply the typical-mind fallacy. “I would find a life dedicated to chess mind-numbingly boring. Therefore, everyone else would, too. So no one really enjoys it; they must be doing it to signal intelligence, or for the money, or for all the chess groupies you get.”

***

I’m not denying the signalling factor entirely. I have a great suspicion that there are many people out there who don’t really like expensive champagne but just buy it to show their wealth and supposed sophistication. But I also think that there really is a difference between the cheap stuff and the expensive stuff, that there are people genuinely dedicated to the craft of making it, and that it’s possible for someone to appreciate it.

As I said, I don’t like most wine, especially not dry wine. I think it ranges from “bad” to “tolerable”. So I don’t try to force myself to “appreciate” the nuances of it. But many people genuinely seem to like it.

This is not so shocking when you look at things like salmiak liquorice, which is a favorite of people in the Netherlands and Scandinavia, but which most Americans absolutely hate. (If you dislike regular black liquorice, you will definitely hate this stuff.) People just have different tastes.

***

Are there some areas of alcohol “appreciation” that I think genuinely are “Biden sandwich frames”? Yes.

For instance, you have “gourmet” vodka that people waste their money on. Vodka is nothing more than pure ethanol cut with water. It is supposed to be tasteless. The only variation is in the filtering, i.e. whether it’s really pure ethanol or not. But once you go above the really cheap stuff, it’s all the same. So I think “appreciation” of the “nuances” here is a bit silly. There may be some variation from brand to brand but…not much.

But contrast that with tasting Kentucky bourbon versus a really peaty Islay Scotch. The difference is not subtle. At all. They truly have just totally different flavor profiles. Neither one is “objectively better”, and there are premium varieties of both, but there is a large difference.

***

I think it’s kind of funny that you treat being “really into salads” as absurd when there are many restaurants, including a large national chain, totally dedicated to making salads. And I’m sure there are people out there, e.g. hobbyist vegetable gardeners, who appreciate the different varieties of squash.

Your point about the limited variability and ingredients is true, but it also cuts the other way. I have neither the time, the money, nor the skill to make the kinds of things a world-class chef can make. Have you read the book Modernist Cuisine? I think “rationalists” would find that book fascinating, since it’s all about applying science and reason to cooking. But nobody has the equipment to make a lot of the stuff mentioned in there.

But the nice thing about cocktail recipes is that they are:

  • Totally exact and easy to measure.
  • Quick to prepare, requiring no special skill.
  • Made from ingredients that have almost no variation (different bottles of the same brand of liquor).

The craft is more in the experimentation and the new ideas people come up with than in actually making it. And once it’s made, anyone can follow along at home, which is a large part of the appeal.

It’s rather like being a good DJ: they don’t actually make music, they just rearrange what other people do. But they follow the tastes of the crowd and know how to put things together that will keep people dancing or otherwise appeal to them. They experiment and promote unfamiliar things, while taking back what doesn’t catch on.

***

This has gone on forever, but the last point is the obvious one of diminishing returns. It applies to everything, alcohol no less than anything else.

But to change it up a little, consider fountain pens, which are another thing I’m “into”. (Not something—and neither is alcohol appreciation—I’ve dedicated all my free time to.)

Let’s say a Bic ballpoint pen is 100 times better on the Absolute Scale of Writing Quality than writing with your own blood. And let’s (generously) say that writing with a basic-level fountain pen is twice as good as the ballpoint. Is a $400 fountain pen ten or twenty times better than that? Certainly not.

Does that mean premium fountain pens are all just a scam, that there’s really no difference in the writing experience? Well, to some extent the improvement is merely in the aesthetics and the fittings. There’s a reason why super-premium brands are most often sold in jewelry stores. But you really can pay several hundred dollars to an expert who will custom-grind you a solid 14kt gold (gold is used because it’s a flexible and corrosion-resistant material) nib in any shape or size you like, and it will be exceptionally smooth and pleasant to write with. The pen body will be made of light and comfortable but very durable (and aesthetically pleasing) materials, and it will be much more reliable than a $20 pen, let alone a disposable ballpoint.

But still, should you pay $400 for a pen if you’re not “into pens”? No. Even though it’s better quality, it’s just not worth it.

And anyway, to tie this back in, alcohol is just the same or even more so, because so much of the price of super-expensive liquors is merely rarity. There’s only so many bottles of 100-year-old whiskey. They don’t really taste better than contemporary whiskey, just different (or maybe not even different). But some people to whom “price is no object” are willing to try them for the novelty, making them incredibly expensive because they bid against each other.

I wrote a whole lot of words so I’m putting them under a cut


(via voxette-vk)

Satisfying endings, esp. in serialized fiction

odbqpdbo:

nostalgebraist:

jonomancer:

This is in response to http://nostalgebraist.tumblr.com/post/143321851339/lovestwell-nostalgebraist-the-most-common

@nostalgebraist

Hi. I’ve been thinking about the question of what makes a satisfying ending, because I’m also trying to learn how to tell a good story. So this is partly a response to your thread and partly me trying to work this out for myself. The post turned out really long because I don’t have time to make it shorter.

My theory, short version: The ending has to be an answer to the question that the rest of the story was posing.

When an ending is unsatisfying to me there’s usually a sense that the question the author thought needed answering was different from the question I thought needed answering.

Mild spoilers for Homestuck and for Floornight below the cut.
If you haven’t read Floornight I recommend reading it! Despite my criticisms it’s a very original and fascinatingly weird science fiction story.


This is really interesting – for one thing, it makes me aware of some specific ways I could have done the Floornight ending better.

This suggests it’s a good framework to keep in mind generally.  The way these kinds of discussions usually go is that a bunch of people don’t like an ending because it didn’t wrap up some thing or things they specifically cared about.  But on the other hand, “tidy little bow” endings that methodically wrap up every loose end can feel artificial, and get criticized for that reason.  So from the author’s perspective, it feels like: “I’m not supposed to wrap up everything, and anything I don’t wrap up will dissatisfy someone, so there’s no ‘right answer’ and I’ll just strive for ‘not ideal but not terrible.’”

But as you say, rather than throwing up one’s hands like this, it may be better for a serial writer to at least think “what is it that my readers are currently reading to find out?”  (Often, I think, the problem is that the ending was planned from the beginning, but the events right before the ending weren’t; the ending makes sense as the culmination of the entire story, but doesn’t seem to flow from the recent events that ought to have been setting it up.)

This obviously wouldn’t work with serialization, but is there any obvious reason not to write a story backwards from the end?

It’d be an interesting exercise, but it would present extreme planning and continuity challenges, because (barring amnesia and the like) characters in later scenes remember events from earlier scenes.  You’d have to plan each “new” (i.e. earlier) scene so that it produces an already-specified set of effects on each character, without having any other significant effects, which would quickly turn into an unsolvable problem, or one not solvable without continually accepting new continuity/plausibility errors.

You could continually rewrite the later scenes to handle these, but this would be really tough and tedious, and could take an arbitrarily large amount of time and effort, as earlier and earlier events require rewrites of more material, and perhaps more significant rewrites.  Although, if the process eventually terminated, I guess the result might be unusually airtight.  (But would it be more so than a story written forwards and then carefully revised for continuity?)

(via serkentsi-deactivated20180207)

minisoc:

nostalgebraist:

Similarly, Rothbard (1977) rejects the argument that an externality, such as the envy of a third party, vitiates the principle that voluntary exchange increases social utility: 

We cannot, however, deal with hypothetical utilities divorced from concrete action. We may, as praxeologists, deal only with utilities that we can deduce from the concrete behavior of human beings. A person’s ‘envy,’ unembodied in action, becomes pure moonshine from a praxeological point of view…. How he feels about the exchanges made by others cannot be demonstrated unless he commits an invasive act. Even if he publishes a pamphlet denouncing these exchanges, we have no ironclad proof that this is not a joke or a deliberate lie. (p. 18)

(from here)

wait … murray rothbard actually believed that externalities don’t matter because it’s impossible to tell that some third party is negatively impacted by a transaction unless they interfere with it?

incredible

he seems to be saying “you don’t really know how someone feels until they act”, with actions apparently being restricted to financial ones.

Yeah, Rothbard and the Austrians make everything about action, specifically about picking some available actions rather than others.  Rothbard doesn’t think financial actions are the only real actions – I think it’s more that he thinks that you can only tell if someone prefers X to Y when they have the option to take X or Y and take X.  So you can’t say that someone “would prefer that this transaction between others not happen” unless they have the option to stop the transaction, and do so.

Taken literally, of course, this absurdly implies that you can never know what people will think about options they don’t yet have.  “Should we build new houses here?  Will people want to move into them?  Who knows???  They’ve never had that option before!”

My sense of the Austrian economists is that they start out by saying that other economists make overly substantive assumptions, and they instead want to start with really obvious stuff like “people have preferences” and see what they can derive from just that.  Unsurprisingly, when they get a non-trivial result, it tends to come from some non-trivial hidden assumption that has been smuggled into their account of the “obvious” facts.

(via minisoc)