
ogingat:

nostalgebraist:

ogingat:

jadagul:

[snip]

[snip]

Okay, something basic about the background of Field’s program is really confusing me.

The motivation seems to come from Quine’s statement, in “On What There Is,” that 

The variables of quantification, “something”, “nothing”, “everything”, range over our whole ontology, whatever it may be; and we are convicted of a particular ontological presupposition if, and only if, the alleged presuppositum has to be reckoned among the entities over which our variables range in order to render one of our affirmations true.

Which makes sense.  If you say “every x has property y” it would be weird if you didn’t think “an x” was a thing.  (Well, it seems weird at first glance, anyway.)

But how on earth is this a problem for science?  The statements of Newtonian physics (say) aren’t assertions about things that are true for (say) all numbers.  They’re about physical variables (lengths and durations), or – if you must – predictions, or whatever.  But if I expect a theory of physics to make assertions to me about numbers, I am looking for something very strange out of it.

To be specific, I’m expecting these theories to ultimately assert things like

“for every distance d (where d could be measured numerically, if you swing that way), it is true that […]”

rather than

“for any real number r, consider a distance of r units; it is true that […]”

That is, if it forces me to believe in anything, it should force me to believe in things like distances, not in numbers.  (What would it be like for a Newtonian physicist to believe in numbers but not distances?  I envision myself sitting forever beside the inert Real Line, possessing a set of physical laws which cannot be applied because there is no space or time or mass.  “Well, this theory is a set of statements about real numbers, ultimately.”  Really?  Could have fooled me!)

The failure of Field’s program, if you think it failed, should be taken to show that “if you swing that way” is not really appropriate here; i.e., there’s not really any other way to swing.

I think you have an overly restrictive view of which statements have ontological commitments - or at least a view that would be heterodox in some circles. Remember my natural history museum example? Say I’m a small child tugging at my mother’s shirtsleeves and whispering excitedly, “Those bones used to belong to a dinosaur in Brazil!” Here are some things to whose existence I seem to have committed:

  • bones
  • dinosaur
  • Brazil

To see why, see how ridiculous these sentences sound:

  • There’s no such thing as bones, and those bones used to belong to a dinosaur in Brazil!
  • There’s no such thing as dinosaurs, and those bones used to belong to a dinosaur in Brazil!
  • There’s no such thing as Brazil, and those bones used to belong to a dinosaur in Brazil!

But, unless a hard-road nominalization process is viable (and I gave some reasons in the previous post that people think it isn’t), you seem to want to put scientists in the position of saying “There’s no such thing as functions and real numbers, and the state of the world at time t2 is related to the state of the world at time t1 by [some mechanism using functions and real numbers].” Embarrassingly, I don’t know enough about how scientists would talk about these things to give a good example, and it’s not fleshed out much here.

The statement in the last paragraph doesn’t seem bad to me?  Or, it’s exactly as bad as “there’s no such thing as the number 3, and here are 3 apples.”  Which sounds strange when you put it that way, but there are people – roughly, non-Platonists of various kinds – who would defend the idea that “there is no such thing as the number 3” (because numbers are not objects, say) even though we can count things.  The details of the phrasing matter: there certainly is a number 3 in the sense that we mean something when we say “3,” but that isn’t sufficient for there to be “such a thing as” the number 3.

(Admittedly, the statement with the 3 apples isn’t a quantification.  But we could turn it into one, like “there is no such thing as a natural number, but for any natural number n, if I have n apples, then [something],” which sounds awkward at first glance, but substantively doesn’t seem any worse than the 3 apples statement.)

ETA: in other words, it seems like the physicist’s statement is only awkward/ridiculous if Platonism is true.  But if we can just act like Platonism is uncontroversially true, then, well, goodbye Philosophy of Mathematics!

(via ogingat-deactivated20150801)

ogingat:

jadagul:

ogingat:

jadagul:

ogingat:

light-rook:

Most mathematicians I know hold both, and equivocate between them as convenient. But viewing math as fundamentally Formalist, while utilizing Platonism instrumentally, is almost exactly the physical law/metaphysics distinction I was trying to level at ogingat earlier. (Not that I necessarily believe it as far as physics goes, but I’d be willing to “put money” on Formalism+instrumental Platonism as “the correct interpretation of mathematics”, if that wasn’t a type error.)

The problem is making this consistent - i.e., making sure you haven’t gotten yourself into trouble with your putatively “instrumental” Platonism along the way. After all, Hartry Field’s project failed.

Can you elaborate on this a bit more, please? I was about to comment that I do kind of “use Platonism instrumentally,” by which I mean (and I assume light-rook also meant) that I say things like “There has to be a better argument” and “this is a proof, but it can’t be the right proof” despite the fact that those sentences are pretty crazy under the interpretations of what mathematics means that I believe and profess. Thinking that way seems generative anyway.

I’m actually not sure I can. It may have been bullshit. In retrospect I think the Field program failed for a different reason. I can explain that if you like.

That would be lovely if it’s not too much trouble. (This is an area of my personal philosophy about which I’m a bit confused/angsty. The mental space I use to do my work is one I explicitly think is silly. :P )

(Disclaimer: There’s a lot of Mark Colyvan in this post. He seems like a smart guy from everything I’ve read, but other sources may construe the debate and its history in different ways.)

Okay, so. In the 80s, Hartry Field developed much of the fictionalist program in the philosophy of mathematics. This began with Science Without Numbers (partial PDF) in 1980. Provocative title, right? Let’s be clear that the fictionalist program is different from the formalist program, as I noted to light-rook​ somewhere else just now. Formalism makes arguably stronger claims: about whether mathematical statements are meaningful rather than about whether they’re true. (This may remind you of logical positivism!) However, I think it’s prima facie quite plausible that the failure of Field’s effort should affect our view of both stances. 

Field’s idea is that, in principle, we can excise all the mathematics from our physical theories. Field is attacking a common argument, made most famously by Quine and Putnam, to the effect that mathematics is indispensable to empirical science (see here too), and therefore our ontological commitment to mathematical entities piggybacks off our commitments of varying sorts to our best scientific theories. Field thinks that we can remain committed to the truths of these theories - they’re necessary to explain all sorts of phenomena we’ve observed - but state them without any math. If that’s possible, we can think of math as just a tool we use in constructing physical theory, and we can throw away the toolbox once we’ve got the theory and translated the math out of it.

Field’s way to fictionalism is thought of as the “hard road” through “nominalization” (as opposed to the probably-doomed “easy road” for “weasels”). How would this hard road work? Well, Field takes the example of Newtonian gravity, develops it in a vocabulary that doesn’t involve “nominalistically unacceptable mathematical machinery (functions and real numbers)”, then “proves a representation theorem that shows that in the meta-theory one can recover all the relevant numerical claims” (I’m quoting from Colyvan’s account). In other words, we start talking about it without the things we think of as problematically mathematical, then find a way of showing that our new way of talking can do all the things our old way of talking could. Field has a fascinating explanation for why, despite the fact that all mathematics is strictly false, mathematics is still not only unproblematic but useful as a tool for constructing scientific theories: he says it’s “conservative” in a specific sense. I won’t cover that here.

Why did Field’s idea fail? Well, a few reasons. One is that the new statement of Newtonian gravity doesn’t seem to be an improvement on the statement of Newtonian gravity to which math was seemingly indispensable. In fact, Field had to be a “substantivalist” about individual “points” of space-time in order to find a new way of constructing the theory. This means that in getting rid of math, we’ve picked something else up. So we’re no better off; in fact, we’re probably worse off, since we gave away something we liked and picked up something we don’t. (I mean this in a fairly strict sense that I don’t feel like explicating. Let’s just say that it has to do with the roles these things might play in other theories.) Another is that it’s not clear that Field’s nominalization can work with more contemporary theories, especially quantum-mechanical ones. You will be better equipped to understand that debate than I am (it has something to do with “the central role infinite-dimensional Hilbert spaces play in the theory”).

Personally I think Field has to be commended for his boldness. The idea of reformulating the entirety of empirical science without math seems ludicrous, but on hearing the indispensability argument, Field gave it a shot. I don’t think it’s plausible that any effort like that could succeed, but you might have a different take.

Okay, something basic about the background of Field’s program is really confusing me.

The motivation seems to come from Quine’s statement, in “On What There Is,” that 

The variables of quantification, “something”, “nothing”, “everything”, range over our whole ontology, whatever it may be; and we are convicted of a particular ontological presupposition if, and only if, the alleged presuppositum has to be reckoned among the entities over which our variables range in order to render one of our affirmations true.

Which makes sense.  If you say “every x has property y” it would be weird if you didn’t think “an x” was a thing.  (Well, it seems weird at first glance, anyway.)

But how on earth is this a problem for science?  The statements of Newtonian physics (say) aren’t assertions about things that are true for (say) all numbers.  They’re about physical variables (lengths and durations), or – if you must – predictions, or whatever.  But if I expect a theory of physics to make assertions to me about numbers, I am looking for something very strange out of it.

To be specific, I’m expecting these theories to ultimately assert things like

“for every distance d (where d could be measured numerically, if you swing that way), it is true that […]”

rather than

“for any real number r, consider a distance of r units; it is true that […]”

That is, if it forces me to believe in anything, it should force me to believe in things like distances, not in numbers.  (What would it be like for a Newtonian physicist to believe in numbers but not distances?  I envision myself sitting forever beside the inert Real Line, possessing a set of physical laws which cannot be applied because there is no space or time or mass.  “Well, this theory is a set of statements about real numbers, ultimately.”  Really?  Could have fooled me!)

(via ogingat-deactivated20150801)

ogingat:

nostalgebraist:

ogingat:

nostalgebraist:

ogingat:

nostalgebraist:

Keep reading

I’m afraid I’m still not really finding this comprehensible. Maybe I will come back later and try again. As far as I can tell you may be confusing identity claims with supervenience claims, which of course makes it look like multiple realizability is a problem (i.e., to me this looks isomorphic to the classic example of supervenience from philosophy of mind). It’s not completely clear to me why you are trying to perform reductions in the first place. And giving examples of places where talk of existence is arguably not helpful does not justify tabooing all talk of existence. For example, you would need a further argument to show that another of your examples, whether counterfactuals exist (you mean whether possibilia exist, right? counterfactuals are not the sorts of things that can exist), has similar problems. In particular the “multiple equivalent competing theories” idea does not seem to hold water with modality. (It might not hold with anything else, either.)

It is possible you simply reject the idea that talking about something implies that it exists. I guess this isn’t completely new.

Yeah, feel free to come back to this later.  I am being quick and persistent with responses because I’m sleep-deprived and thinking about abstract things when I’m sleep-deprived helps me keep my mood from going downhill.

I think what I’m trying to say here is that ontological questions like “which entities exist?” may be more or less appropriate for any given world, and for any given level of reduction.  This sort of “here’s a thing, here’s another thing” language is appropriate for, say, talking about you looking at the bones, but it’s not as appropriate for the rules of Go or the rules of quantum field theory, which are systems that don’t naturally carve out roles for particular objects.

So when we come in and try to describe something, we have to look at how it works first, and then decide whether the ontological way of speaking is appropriate.

This may be a difference in our personal senses of what the word “existence” means.  For me it is tied up in objects and my objections deal with systems that don’t break down naturally into objects.  I do think you can talk about being without talking about objects: there are facts about being in Go world, but there aren’t objects (at least not on the most fundamental level).

So I’m still not sure about most of this but I want to engage you where you’re strong because it’s where I’ll be tested the most and will have the most opportunity to learn. Plus, I want to keep the conversation going to keep your mood from going downhill.

Thank you!

Keep reading

So I’m not going to respond at too much length but I wanted to note that normally when philosophers talk about theoretical virtues we are including both low ontology (reducing kinds of entities) and what Quine calls low ideology (reducing primitives or laws). These are generally taken to be of equivalent value - if one theory has a basic kind where another has a primitive relation, neither has the advantage (and this is very often how it works out - often enough, in fact, that it makes us wonder if metaphysics is pathologically in that situation). But note that reducing the number of entities is not the same as reducing the number of kinds. This is why Lewis is able to make an argument for modal realism actually based on parsimony. So Yudkowsky’s example is simply not on point here; it’s not comparing ideology to ontology, but to something much less pivotal than ontology.

Yeah, it occurred to me right after writing that post that I was playing fast and loose with “number of entities” vs. “number of kinds.”  In fact, pretty much all of my examples are the former.  I’m not sure this is totally uninteresting philosophically, though – see the SEP section on “quantitative parsimony.”

I guess I would say that “number of kinds” is pretty slippery in mathematical subjects – it is not trivial to decide how many kinds a theory contains.

This is because a number of qualitatively different behaviors can result from the same simple mathematical rule, or set of rules.  A simple and trivial example is a string fixed at both ends (like a guitar string) – all of its possible motions are described by the same simple equation, but there’s a countable infinity of such motions, one fundamental vibration and infinitely many higher harmonics (see the image here).  Do we have infinitely many kinds (types of motion), or just one (a string)?  Well, it depends on how you want to parse things out ontologically.
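
For concreteness, here is a minimal sketch of that standing-wave picture – the string length, wave speed, and mode weights below are made-up illustrative values, not anything from the discussion above – showing how a single boundary-value rule yields a countable family of qualitatively distinct motions:

```python
import numpy as np

# Toy standing-wave picture (all numbers invented for illustration): a string of
# length L fixed at both ends, with wave speed c.  Every allowed motion is a
# superposition of modes y_n(x, t) = sin(n*pi*x/L) * cos(2*pi*f_n*t), one mode
# for each positive integer n -- countably many qualitatively different motions,
# all falling out of a single rule.
L = 0.65    # string length in metres (illustrative)
c = 200.0   # wave speed in m/s (illustrative)

def mode_shape(n, x):
    """Spatial profile of the n-th standing-wave mode."""
    return np.sin(n * np.pi * x / L)

def mode_frequency(n):
    """Frequency of the n-th mode: f_n = n * c / (2 * L)."""
    return n * c / (2 * L)

# The fundamental and the first few harmonics: the n-th mode has n - 1
# motionless points in the interior of the string.
for n in range(1, 6):
    print(f"mode {n}: frequency {mode_frequency(n):6.1f} Hz, {n - 1} interior node(s)")

# Any actual motion is a weighted sum of these modes, e.g. mostly-fundamental
# with a little 3rd harmonic mixed in, evaluated here at t = 0:
x = np.linspace(0.0, L, 200)
y = mode_shape(1, x) + 0.2 * mode_shape(3, x)
```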

Now this may sound kind of silly and irrelevant.  But the situation just described is very similar (according to my pop science-level understanding) to how string theory accounts for the fundamental particles.  In string theory, the strings can vibrate in an infinite number of ways, just like a guitar string with its harmonics.  Each of these corresponds to a kind of elementary particle.  So if we are thinking of elementary particles as kinds – which is naively natural – then string theory has infinitely many kinds, where the Standard Model only has 61.

Is this a bad thing for string theory?  The conventional thinking is that it’s the opposite: string theory purports to explain every single observed particle as a particular case of a single kind (a string).

More generally, we typically find that most physical theories in some sense have just one kind, because they consist of equations with solutions, and the solutions are all the same sort of object.  If you have an equation whose solutions are functions, then the solutions are all “one sort of thing.”  If your equation only allows for (say) very smooth functions, this does not make it “simpler” than an equation that allows for very smooth functions and for very jagged ones.

Often what look like the “kinds” in a physical theory are really like “smooth functions” vs. “jagged functions” – you didn’t have to postulate anything extra to get both, they just came “for free,” like the harmonics of the guitar string.  (Indeed, often you have to add more postulates to get fewer kinds – in fluid mechanics, say, if you are not interested in certain sorts of waves you have to make extra assumptions which rule them out.)

I don’t really know where I’m going with this – I guess my point is that when you think about physical theories a lot, you start to get cautious about ontological interpretations, because the terms of those interpretations don’t feel like the natural ones.  (How many kinds does my theory of the guitar string contain?  Weird question!)

(via ogingat-deactivated20150801)

Also, since I sniped at Moldbug’s climate change denialism earlier, I feel like I should say something more direct about that issue, especially since I do scientific work tangentially related to climate science (though far away from the sort of stuff that tends to be controversial).

(Attention conservation notice: long, rambling post, heavy on vague judgments and light on specific facts.)

Basically, I have the most boring possible view of the climate change controversy: there are more and less sophisticated versions of both the “pro” and “anti” views, there is something to be said for the sophisticated versions of both, it is an issue that suffers when it is politicized but it’s hard to motivate policy on the basis of de-politicized versions because they sound boring, etc.

1.


“Unsophisticated denialism” consists of a bunch of irremediably bad, dead-in-the-water arguments like “if weather prediction for next week isn’t very good, wouldn’t forecasts decades in advance be even worse?” (I wrote about why this is wrong here), and various versions of “some recent year was fairly cold (or whatever) so climate change has stopped” (at short timescales there is variability and no one disagrees about this; you need to look at longer, averaged trends).
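
To make the “longer, averaged trends” point concrete, here is a toy sketch with entirely synthetic numbers (an invented warming trend plus invented year-to-year noise): individual years are routinely cooler than the year before, while the decade-scale averages still show the trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic numbers: a steady 0.02 degC/year warming trend buried in
# +/- 0.15 degC of year-to-year variability (both values invented here).
years = np.arange(1980, 2021)
trend = 0.02 * (years - years[0])
noise = rng.normal(0.0, 0.15, size=years.size)
anomaly = trend + noise

# Single-year comparisons are dominated by the noise ...
cooler_than_previous = int(np.sum(np.diff(anomaly) < 0))
print(f"{cooler_than_previous} of {years.size - 1} years were cooler than the year before")

# ... but decade averages recover the underlying trend.
for start in (1980, 1990, 2000, 2010):
    mask = (years >= start) & (years < start + 10)
    print(f"{start}s mean anomaly: {anomaly[mask].mean():+.2f} degC")
```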

In particular, almost everyone with any clue what they’re talking about acknowledges that climate change is 1) happening.  The smarter “denialist” blogs all acknowledge this.  If someone denies this, they’re probably clueless.  Many “denialists” even agree that climate change is 2) anthropogenic, though there are some non-crackpots who call this into question (e.g. Richard Lindzen, notorious for being an in-field expert who is also a denialist).

2.


“Sophisticated denialism” tends to consist of one or another kind of skepticism about the general endeavour of climate prediction – not because “it’s impossible in principle” (which is what the unsophisticated denialists think), but because it is a very complex problem, current climate models are inaccurate in some important ways (everyone agrees about this), and it’s worth asking how much money and effort we should pour into responses to the output of such imperfect tools.  Some people worth listening to have raised these concerns (e.g. Lindzen – a domain expert – and Freeman Dyson, not a domain expert but an extremely smart polymath who has worked in related fields).

3.


As I see it, “sophisticated denialism” really isn’t very far from “sophisticated credulism.”  The sophisticated denialists raise serious scientific questions about inadequacies in computer models; meanwhile, the field of climate science – which one could say is populated by “sophisticated credulists” – is racing to improve the models as much as possible.

This is a very important point, and is the main thing I want you to take away from this post.  A staple of denialist writing, both sophisticated and un-, is that climate science is made up of lazy thinkers who trust their models uncritically and do not reflect on the distance between model and reality.  In my experience in the field, nothing could be further from the truth.  Climate modelers – in their natural territory, away from the press, talking amongst themselves – are intensely skeptical, critical scientists, constantly asking “but what about this?  but what if this goes wrong?  where does this technique fail?  how could we make this better?  are there data that cast doubt on this hypothesis?”  They are working on an extremely complex problem, but they don’t shy away from that complexity; the field is a riotous buzz of arguments, takedowns, new clever hacks, and observations about inadequacies and how they might be fixed.  If you don’t like people who are too credulous of computer models, don’t get mad at the average working climate scientist; they are on your side.  (And they are working on the problem.)

This means that the whole branch of denialism focused around trying to show that scientists are engaged in “dirty tricks” – like the ridiculous “Climategate” – can be ignored.  I’m not saying that there is never dishonesty in the field, because you can’t rule that out in any field.  But your average climate scientist is nothing like a lawyer trying to make the case come out “right.”  I am not aware of anyone who portrays the field in this light who has actually spent time within it, which is telling.

4.


What about the fourth corner of my 2x2 square – “unsophisticated credulists”?  They exist.  Scientists have realized (correctly) that there is a case to be made for taking political action to avoid dangerous climate change, and a message full of caveats and admissions of uncertainty isn’t very rhetorically effective.  So you get all of this insistent certainty: “climate change will be horrible if we don’t do something and this is what the whole scientific community thinks and anyone who disagrees is a crackpot.”  This isn’t quite right, because there’s a lot more uncertainty than this message lets on, but I can understand why people frame things this way.  Politics doesn’t deal well with uncertainties, especially when you’re trying to get people to do something, well, inconvenient.  Any caveat will be taken as an escape clause that lets people maintain the status quo.

5.


An aspect of the situation that is especially frustrating – and interesting – is that some of our uncertainty may be impossible to remove even in principle.  One of my favorite papers ever is “Why is Climate Sensitivity So Unpredictable?” (PDF, see also these slides), which shows in just a few pages that the probability distributions of possible climate sensitivities have skewed tails because of simple facts about how feedback works.  In other words (interpreting a bit), no matter how much we improve the models, there’ll always be some nontrivial probability attached to unlikely, extreme climate change events. 

This is hard to pack into a political soundbite, but it’s the kind of idea that should appeal to people who like thinking about expected utility: we will never have good enough models to avoid having a probability distribution with a long tail extending into extreme cases.  What we have is not a certain prophecy of doom, but a probabilistic prediction that puts some mass on extreme disasters, and expected utility maximizers should respond accordingly.
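
Here is a toy Monte Carlo version of that mechanism, with invented numbers rather than anything taken from the paper: treat sensitivity as S = S0 / (1 − f), give the feedback factor f a roughly symmetric uncertainty, and the resulting distribution over S comes out skewed, with a long tail toward high sensitivities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of the feedback argument (all numbers illustrative, not from the paper):
# a reference sensitivity S0 amplified by feedbacks, S = S0 / (1 - f).
S0 = 1.2          # reference sensitivity, degC (illustrative)
f_mean = 0.65     # mean feedback factor (illustrative)
f_sd = 0.13       # roughly symmetric uncertainty in f (illustrative)

f = rng.normal(f_mean, f_sd, size=1_000_000)
f = f[f < 0.99]                 # keep the system below runaway feedback
S = S0 / (1.0 - f)

# A symmetric spread in f becomes a skewed spread in S: the median is modest,
# but the upper tail stretches far out -- which is what dominates an
# expected-utility calculation if damages grow quickly with warming.
print(f"median sensitivity: {np.median(S):.1f} degC")
print(f"95th percentile:    {np.percentile(S, 95):.1f} degC")
print(f"99th percentile:    {np.percentile(S, 99):.1f} degC")
```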

(One more thing that needs to be mentioned here is that, as the authors of that paper note, the right way to respond to this uncertainty may be to adapt our response as the data comes in.  High climate sensitivities mean big climate changes, but those take a long time to develop; if we wait long enough, collect new data, and become more confident that things are going in that direction, we can adjust our behavior accordingly.)

6.


There’s a final thing I feel I need to talk about here, since it was a core point in that Moldbug post.  Moldbug says that climate prediction isn’t really science because it isn’t testable.  At this point we have no way of telling a correct prediction about 2050 from an incorrect one.  We do test our climate models by asking them to “predict” the past and seeing if they get it right.  But they’re never going to get it exactly right – chaotic system and all – only right in some statistical sense, and thus we have to make a judgment call about how good a match counts as “right,” and this criterion is open to the charge that it is too lax.  Whatever this process is, it is different from the sorts of things we think of as “classic” physical science, where you can make a zillion very precise predictions and test them all in the lab.
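
As a rough sketch of what “right in some statistical sense” can look like – everything below is invented for illustration, and real model evaluation uses far more careful metrics than this – you compute a summary statistic comparing hindcast to observations, and then the tolerance you demand is the judgment call in question:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-ins for an observed temperature-anomaly record and a model
# hindcast of the same period (real evaluations use many variables and metrics).
years = np.arange(1950, 2011)
observed = 0.015 * (years - 1950) + rng.normal(0.0, 0.12, size=years.size)
hindcast = 0.014 * (years - 1950) + rng.normal(0.0, 0.12, size=years.size)

rmse = np.sqrt(np.mean((hindcast - observed) ** 2))
corr = np.corrcoef(hindcast, observed)[0, 1]

# The numbers are objective; the threshold for calling the match "good enough"
# is the judgment call the post is talking about.
TOLERANCE = 0.20   # degC RMSE -- a choice, not a law of nature
print(f"RMSE = {rmse:.2f} degC, correlation = {corr:.2f}")
print("hindcast judged adequate" if rmse < TOLERANCE else "hindcast judged inadequate")
```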

This is all true as far as it goes, and it’s a reasonable criticism of “unsophisticated credulists.”  I do kind of cringe when people talking about climate change get high and mighty about “the science,” with an undertone that these results are as well established as our prototype cases of physical science.  They aren’t.  Climate change denial is not absurd in the way that “quantum mechanics denial” would be.

But, that being said, we are doing the best we can.  If we could do real experiments in the lab about climate-in-2050, we would.  But we can’t, and so we have to use the best information we have, and make decisions on the basis of it.  It is extremely rare for important social or political decisions to have an unambiguous answer that is a direct consequence of physical science.  At least with climate change, we have models that are based on well-established physics, and which can produce pretty damn good quantitative results in the cases where they can be tested.  Can you say that for any of the other political issues you care about?  It sure would be nice if, say, the Federal Reserve had economic models as good as climate GCMs!

In fact, I think that is probably the best comparison: you should be no more skeptical about climate science than you are about macroeconomics, and probably less so.  Macroeconomics, from what I can tell as a non-expert, is way more full of confusion than climate science is.  The experts can hardly even agree about what, if anything, the field has concluded.  There is no consensus on whether the newer models are even better in any way than the older ones (DSGEs vs. earlier models, say).

But the government has to make some economic choices (even “doing nothing” is one such choice).  We could throw up our hands and say “The Science has not yet spoken on macroeconomics, so we can know nothing,” but – then what?  Monetary and fiscal policy decisions are made, and we may have opinions about them, but very few of us are “denialists” who say that these entire enterprises are pointless because they are not Sciencey enough.  If you are not a macroeconomics denialist, you should not be a climate change denialist, either.  You can be sophisticated and say that we are uncertain about many things, but when aren’t we?  And yet we still take action in the world.

I said I was gonna stay off tumblr, but I felt like writing a post about zombies and I figure it can’t hurt if I don’t look at the dash otherwise.

None of the following is new, I’m sure, but I just want to mention it because I think it might be a better framework than talking about “epiphenomena” and the like.

What I want to say is something like: maybe it’s just inherently misguided to imagine a world where people talk the way we do about consciousness, but in which there is actually “no one home,” because “there being someone home” is simply what it is like to be a physical system which talks about consciousness in the way we do.

The purely physical picture has a nice coherence to it.  It sure looks like all of the stuff we say and do could, in principle, be explained as the result of the way our brains and bodies are set up.  One example: when we talk about the “ineffability” or “indescribability” of consciousness, we tend to home in on certain experiences in particular, usually “primitive” sensory experiences that can’t be decomposed into smaller parts.  “Seeing a color” is the classic example – Mary the Color Scientist and so forth.  I have this feeling that there is a very definite character to each of the colors I see – I recognize them instantly, as though they were friends.  I have this recognition experience like “oh! there’s red!” where “red” is this definite, clearly defined thing.  Then I see green and think “oh! there’s green! you’re very different from red, and I know exactly how!”

But these “definite personalities” and “differences” can’t be described.  If I wanted to say how I tell one friend from another – Bob from John, say – I might describe a difference in their facial shapes.  If I want to say how I tell red from green, I’m at a loss for words.  (I can refer to things that are red, like saying “red is the color of spilled blood,” but that’s cheating – it’s like saying “I can recognize Bob because he looks like the guy who did [a thing Bob has done].”)

If you are thinking in purely physicalist terms, it’s easy to guess why this might be true.  We’re used to being able to describe our perceptions by breaking them down somehow.  I look at my floor, see that it has a pattern of squares on it, and say “pattern of squares” – a concept that can be spelled out mathematically.  But at a certain point, we get down to a level where the input we’re getting has no structure – it isn’t a pattern of units, it’s just a unit.  We can’t recognize any internal structure inside “red.”  It’s just the “you’re seeing red” fibers lighting up, or whatever.  So it makes sense that this particular sort of experience would seem weird when we really think about it; that we’d point to it as an instance of how conscious perception is different from all other sorts of knowledge; that we’d talk about how Mary may know everything about red scientifically but will learn something new when her red fibers light up (or whatever).

The problem with this cheerful physicalist picture is, of course, that it seems to allow a world where our qualia “aren’t really there” – it explains why we talk about them without actually including them.  This is about as absurd as you can get, because I’m as sure of my conscious experience as I am of anything.

But, as youzicha said a little while ago, sometimes the best way to fool someone into believing something is to make it true.  In this case: maybe “phenomenal experience” arises precisely in those cases when a system is set up to talk about this weird, ineffable thing called “phenomenal experience,” in precisely the way we do talk about it.  A rule of thumb might be: if you can build a mind that talks in this way, and does it as an emergent consequence of its overall structure rather than as some ad hoc additional “script,” then it’s probably conscious.

This might not be as absurd as it sounds.  Compare to the case of free will.  People seem more willing to bite the physicalist bullet on free will than on consciousness, probably because there can’t be an “epiphenomenal” version of free will (if your “free will” has no effects, it isn’t really free will).  People say: maybe everything really is deterministic, but we still experience “free” deliberation over actions and so forth, and all of that is real, and there’s no reason we should change the way we do that just because of some bottom-level fact about physics.  This seems similar to what is described above: there’s a self-consistent physical picture where a system sees itself as “having free will” because of how it’s built, but the underlying determinism doesn’t mean it “doesn’t really have free will” – it’s just that a system that deliberates (and so forth) in a sophisticated enough way to “think it has free will” is precisely what we really mean by “something that has free will.”  We could say: it’s impossible to be deluded about having free will, because if you have the kind of mind that lends itself to belief in free will the way ours does, you’re as free as anything can be.  Similarly, we could say: it’s impossible to be deluded about whether you are conscious, because if a physical system is inclined toward our kind of consciousness talk – “whoa, I perceive red in this, like, ineffable way, and I could imagine a being that acts the same way but doesn’t” – that means it’s conscious.

In other words: consciousness is what “being a physical system that can imagine a zombie version of itself” feels like from the inside.

slatestarscratchpad:

nostalgebraist:

As I said in an earlier post, I think the rationalists have a tendency to get stuck in rabbit holes as a consequence of the fact that people not believing in something won’t shut off the community spiral around that thing.

It’s kind of a weird side effect of “I take all ideas seriously,” where people file away certain ideas as worthy of taking seriously even if they don’t believe them, while not treating every idea in the same way (because that would be impossible).

In practice, there are certain ideas that really do cause you to take people less seriously.  This is what I am trying to get across (perhaps badly) with the flat earth examples – there are some things that you not only disbelieve but do not consider worth time worrying over, and in fact you judge people for believing them.  You can’t not do this, unless you have some sort of strange view where someone reliably coming to incorrect conclusions contains no evidence about the quality of their thinking.

Since you can’t really take every idea seriously, certain ideas become “protected,” so that no matter how little credence you give to them, you pretend they don’t convey any evidence about their believers.  Friendly AI was my example of such an idea in an earlier post: the practice among the rationalists is to treat it as a protected position even if they think it’s utterly absurd, while they (or most of them) would never give the same credence to various religious or political positions which have many more intelligent adherents than FAI.

As I said in the earlier post, it’s easy for me to imagine a world in which some blogger with the right Grey Tribe credentials had gotten convinced that there really was something in phrenology after all, and scores of mindless contrarians had latched onto this, and it had migrated into the rationalist world, and “phrenology” had become a protected position.  Although many people would nonetheless think there was nothing at all in phrenology, they would assure you that they did so in an informed way, tempered by the arguments of prominent internet phrenologists.  And arguments about phrenology would pop up again and again, often just for the sake of argument, since few would actually believe in it – it would just be one of those topics, like radical Bayesianism, or FAI, or Mencius Moldbug.

(A fondness for these things can be tolerated, although of course you don’t believe in them; a fondness for those without the right Grey Tribe credentials just indicates mindless conformity.  Of course.)

(And here is where everyone says “you’re saying things I like are as silly as phrenology!”  No, I’m trying to point out that it is possible to spiral around things in a way that has nothing to do with their truth value; the fact that you find phrenology silly is what makes it a good example of this.  Can you really not imagine that a few worlds over there is a rationalist community in which phrenology is a protected position even though almost everyone thinks it’s wrong?  Where tolerance of phrenologist views is used to signal one’s intellectual openness?  Where you aren’t a phrenologist, but some of your best friends are, and that’s OK, and fuck anyone who says otherwise?  But if this is OK, why isn’t it true here and now?)

I think what you’re calling “treat as a protected position” is what I call “don’t immediately dismiss a position without thinking about it, then spend the rest of your time mocking people who believe in it.”

I wrote in Cowpox Of Doubt that:

The more we talk about homeopathy, and moon hoaxes, and creationism – the more people who have never felt any temptation towards these beliefs go through the motions of “debunk”-ing them a hundred times to one another for fun – the more we are driving home the message that these are a representative sample of the kinds of problems we face.

Phrenology is obviously being used the same way here as homeopathy in my Cowpox example. As a stand-in for “Correct beliefs are always obvious, therefore if I hear anything I disagree with it can be dismissed without thought and its proponents mocked soundly.”

If you demand we dismiss a certain class of beliefs as equivalent to phrenology, and therefore too stupid to deserve serious analysis, then you’re going to have to back that up with a good heuristic for determining that a belief is in this class which is quicker than studying it and debating it honestly.

The only heuristic I’ve ever learned that even approaches this level is “unpopular beliefs are stupid and should be dismissed without thought”. This is a good heuristic for most people, but if you force everyone to adopt it, then progress halts, since all popular beliefs have to start as unpopular beliefs.

My own experience has always been that all attempts to come up with better heuristics end in flaming disaster. For example, we know from studies (one of which I replicated myself with the LW survey data) that the ratio of the lengths of your fingers is correlated with how feminist you are. How in the world are we supposed to know that it’s OBVIOUSLY idiotic to try to infer your personality characteristics by bumps on your head, but it’s correct to try to infer your political beliefs by the length of your fingers?

The only reason phrenology is supposed to be so obviously idiotic that it’s a reductio ad absurdum for anything, is that it was once popular and is now discredited. If you want to use that as a good analogy for neoreaction, fine. But Friendly AI doesn’t follow that pattern. “Radical Bayesianism,” as if that’s a thing, doesn’t follow that pattern. And sometimes that pattern is wrong - cognitive psychology was considered discredited during the Behaviorist years, but then everyone admitted that actually Behaviorism had been wrong and the cognitive perspective was right all along.

If very many smart people whom I otherwise trusted started saying phrenology was correct, and they were able to point to studies and experts who supported them, then I would take it seriously as something worth thinking about (even if there were other studies and experts who didn’t). I’m not sure how else you expect people to think, unless you want everyone to stick to their first prejudice and never change.

Again, I think talking about “protected positions” is a serious misinterpretation of the issue. The reason that rationalists tolerate discussion of Friendly AI, and not discussion of phrenology, is that nobody tries to argue for phrenology, and if they did they wouldn’t be able to make a good case for it. If they made a great case for it and had a bunch of studies supporting it and some people whom we really trust like SarahC and gwern said they’d looked into it and found it extremely convincing, we’d talk about that too. The fact that this hasn’t happened probably says a lot more about phrenology than it does about the norms of the rationalist community.

Also, it’s not like we’re patronizingly “tolerating” Friendly AI. I think Eliezer’s mostly correct about that. So do probably at least half of other rationalists. How are we supposed to reject a position we actually believe?

It seems like you’re pushing back against “if it seems obviously wrong, it’s wrong,” where I’m trying to push back against “if it’s wrong, that doesn’t mean we should stop talking about it endlessly.”  These are two completely distinct issues.

In the latter phrase, I wrote “wrong” rather than “obviously wrong.”  This is not because I think it’s possible to have perfect personal access to whether an idea is right or wrong, but because what I’m saying is not about quick heuristics, it’s about considered positions.  My question is, if you are pretty damn sure something is wrong (to the standard “about as sure as you are of anything in the relevant category” – cf. my earlier statement that if you have to reject any one ideology, fascism is a good choice), and people around you keep talking about it, what should you do?  What if they keep arguing about it but the arguments don’t seem to change anyone’s mind?

What if a large proportion of a given community considers the idea respectable, almost no one outside that community does, you think it’s almost certainly wrong, a lot of people in the community actually agree with you, but none of this will stop the endless debate cycle about this one idea?  At what point do you start to worry that the community is making some kind of error?

I’m finding that it’s very difficult to make analogies about this issue that will be interpreted the way I want them to be.  One desirable quality for such an analogy is that it selects some idea that most people reading it will have a considered objection to.  But that means choosing an idea that is widely disapproved of, like homeopathy, and those ideas now scan as “obviously wrong,” so when I mention them, people assume I’m talking about quick heuristics and not considered positions.

It’s possible there is a bigger problem here – I am implying that there is a category between “obviously wrong” and “respectable (deserving of what I called ‘protection’ earlier),” and maybe you don’t think there is.  I do, though, because the space of possible ideas is vast, and the space of possible ideas for which a good argument has been made is much smaller.  There are ideas that are just boringly wrong, that is, ideas which don’t ping the “but that’s absurd!” sensor in one’s mind, but which nonetheless the evidence is against.  (Most discredited academic ideas are like this – someone thought they were worth looking into, someone looked into them, the end.  The exception is the ones many people have heard about, like phrenology or the plum pudding model, which now ping us as “obviously wrong” for that purely historical reason.)

My frustration here is with feeling unable to say “no, you are just boringly wrong, in addition to being wrong according to the quick heuristic.”  Friendly AI, the digit ratio thing, and a lot of Moldbug’s ideas sound like crackpot stuff to many people; they might also actually be wrong, for boring empirical reasons.  But ideas that have become protected do not go away no matter how much evidence stacks up against them; people solemnly nod their heads in response to the latest bit of evidence, and then go on respecting the position in principle.

This happens in other communities with other protected ideas – including those that might strike you or me as “obviously wrong” – which is part of why I think the “protected position” idea has validity.  When I was younger I spent a lot of time around alternative medicine people because my father is into alternative medicine, and a lot of these people (including my father) are not stupid.  I was even prescribed homeopathic remedies, which my father encouraged me to keep an open mind about.  These people don’t just ignore negative evidence or physical plausibility; what they do is to point to the vast uncertainty of medical knowledge, to the distorting effects of vested interests, to the occasional study that contradicts the trend, to complicated treatises by their favorite mavericks which you will never have time to read, to statistical nuances, and so forth.  And yet the feeling remains: they are always performing these gestures in favor of the positions they happen to like.  The gestures always go in the direction of “keep [homeopathy / biofeedback / vitamin megadoses / this month’s preferred obscure supplement] respectable,” and never in the direction of any of the other myriad ideas in idea-space.  It feels very similar to talking to LWers about FAI or the like: you can make all the arguments you like, but the fixation on seeing the position as worthy-of-interest is never going to go away.

(via slatestarscratchpad)

The Sequences and Our Community

queenshulamit:

nostalgebraist:

dataandphilosophy:

If you’re following this in real time, you’ve probably seen “Maybe the real rationalism was the friends we made along the way.” This is going to be completely incomprehensible to someone not reading in real time, probably, so I’ll just write a “The Conclusions” post for it all later.

Thesis: Even though we don’t explicitly reference them often, Tumblr LW is still informed by and draws on the material on LW, as defined here and here. These are the ones the community accepts as the big important sequences for people to read. 

Rationality Materials

The most obvious comes from Kelsey’s blog title: The Unit Of Caring, which is the name of a Yudkowsky post, Money: The Unit of Caring.  Other Sequences is ~ Sequences by Others, and I detail below how that is mostly our community. The Core Sequences listed I discuss below, as they are sequences.

The Standalone posts that are recommended talk about the importance of signalling, scholarship, dust specks, predictions and calibration (which we do a little but probably less than average of), akrasia (which we all fret about), charity (which we frequently discuss), statistics (see my blog name), AI, and decision theory (these two I will grant as being distinct.)

The Sequences

“The most important technique that Less Wrong can offer you is How To Actually Change Your Mind.” We post about changing our mind, argue with each other to change our minds, introspect a lot to learn what we should change our minds about, my impression of Tumblr LW is that this is an important thing to us. We talk about the dangers of politics more than I would prefer, we talk about cached thoughts, reversed stupidity, the genetic fallacy, and motivated reasoning. Overly convenient excuses are implicitly swept away.

We don’t explicitly talk about the 37 ways words can be wrong. But it’s just so obvious to me that some people have it in the back of their heads, always poking at definitions and wondering what they mean, what they imply, what the connotations are, and trying to get the right language for things. I think that this is a common feature of this community that we don’t see as frequently outside. I think that Mysterious Answers to Mysterious Questions is similar. We’re used to tabooing language and coming up with alternate ways to explain our concepts. Reductionism, again, we have breakdowns of concepts and internal mental states here that I just don’t see elsewhere. 

Of the 11 sequences mentioned as Sequences by Others, four are by Scott (Yvain), one is by Alicorn, one is unfinished, and one is about intelligence tests and what they miss. So half the posts are either things we’ve thought about carefully or are written by community members.

In conclusion, using what seems to me to be a good measure of relatively consensus LW opinion, Tumblr LW is a reasonably close fit. We are frequently familiar with the core sequences (and whatever you hear or think about the rest, 37 ways words can be wrong is astoundingly useful for everything and particularly philosophy and politics), and if we don’t contribute many useful insights back into the main group (that I have heard of), we’re still no worse than critical theorists in IR, a fitting thing for a group on tumblr. 

I think a variable that needs to be included in this analysis – although a hard one to assess – is how widespread a given idea is outside of people who have read the Sequences.

Here’s a silly thought experiment showing why this is important: imagine a world in which the Sequences were the same as they are now, except with the addition of some quantity of “trivially true” posts or sequences expressing or explaining ideas held by almost everyone.  These might comprise anything from lists of true facts (“Tanzania is in Africa”) to correct, unexceptional expositions of things like arithmetic or high school chemistry.  (We could even imagine a great number of “trivially true” posts dedicated to more relevant topics, like a sequence covering the material you’d see in an introductory logic textbook, an uncontentious summary of the views held by various famous philosophers, or an explanation of measure theory and mathematical probability.)

We could imagine dialing up the number of such additional posts so that, say, 50% of the Sequences were trivially true, or 90%, or 99%, or whatever.

Now, one could do the sort of thing you are doing, except one would get false confirmations like “tumblr LW people sometimes use arithmetic.”  For suitably many trivially true posts, the number of these pointless comparisons might dwarf the number of meaningful comparisons.

Of course you know all this; when you say things like “We post about changing our mind, argue with each other to change our minds, introspect a lot to learn what we should change our minds about,” you mean that LW tumblr does this more than average people do.  Which is probably true.  But if you look at the set of people who do these things, what fraction of them are associated with LW?

If the Sequences included a “trivially true” sequence explaining measure theory, then rationalists would be more likely to know measure theory than the average person, because most people don’t know measure theory.  However, the number of people who know measure theory (mostly from college classes) would still dwarf the number of rationalists.  And saying “you know measure theory?  You should call yourself a rationalist!” would seem very strange.

One of the reasons I don’t call myself a rationalist is that all of the ideas in the Sequences which I agree with seem to me to be widely held by people who have never heard of LW (if perhaps not widely held enough), while all of the ideas distinctive to the Sequences are ones I disagree with.  If you are to judge a rationalist by whether they broadly agree with the Sequences, then I guess I’m a rationalist – but that feels a bit to me like asking whether I “broadly agree” with a list of 999 trivially true statements like “Tanzania is in Africa,” followed by the statement “God is a 22-headed serpent who lives on the moon.”

The reason I am being so much of a stick-in-the-mud about this is that what the general public hears when they read about rationality is the distinctive stuff, not the widely believed stuff.  See for instance this Harper’s article (PDF link).  It’s an annoying article written in a sort of Encyclopedia Dramatica / Portal of Evil mindset, clearly trying to make its subjects sound as silly as possible.  But there is a reason it focuses on outlandish claims and practices: they’re the interesting ones!  An article about how Eliezer Yudkowsky thinks we should all reflect more on the nuances of language would put the reader to sleep, or else make them think “I agree with him, why does this guy get this kind of coverage for saying something so banal and obvious?”

The kind of stuff you are talking about is not what the average person thinks of when they hear the word “rationalist,” if they think of anything at all.  From the outside, it’s a confusing label.  If there were people who were fans of the 1000-proposition list I described earlier, we would not view them as “nice sensible people who believe that Tanzania is in Africa and so forth.”

I feel like the Sequences are not Shocking Incredible Insights, but they’re also not “Tanzania is in Africa” level obvious?
Compare LWers with Christians. You have trivial stuff like “murder is bad.” You have weird (and I’m using weird in a non-pejorative sense here) stuff like “God became a human being and died and then rose from the dead” or “bad people will be literally tortured for eternity; good people will literally experience eternal happiness.” Then you have stuff which is less weird than human!God, but less obvious than murder, like “choosing to remain rich instead of giving your money to the poor is morally wrong” or “always forgive people, whether or not they ‘deserve’ it” or “abstain from sex outside of marriage.” Some of the Sequences is weird shit like the death and resurrection of Christ (a lot of Fun Theory), some of it is like “murder is bad,” but most of it (the metaethics Sequence, the stuff about making beliefs pay rent, explanations of heuristics and biases) is between the two – like the Christian stuff about giving most/all your money to the poor* and always forgiving people: unusual enough to be revelatory for people not already exposed to those ideas but not unique.

*LW is actually the same as Christianity on this, except LW prefers “most” to “all” and is generally more sensible about things, at least in my personal experience.

So it’s not quite motte-and-bailey, it’s motte-and-bailey-and-some-third-thing-between-the-two.

Also the Human’s Guide To Words isn’t just “thinking about nuances of language” – it’s a very, very useful tool written unusually accessibly to allow people to do so. I won’t pretend it’s unique but I will say that it’s very helpful and that it brought this stuff to probably a large contingent of people who would not have found those useful tools in other places.

I found a lot of useful things through LW which exist elsewhere (LW did not invent utilitarianism, I am pretty sure I would eventually have heard of beeminder** and habitrpg from somewhere other than LWers talking about akrasia, I’d already left Christianity and my theism was hanging by a thread when I read How To Actually Change Your Mind and I’m pretty certain I would have eventually become an atheist anyway, I’m sure there are similar framings of Dissolve The Question out there although I haven’t come across them. I would possibly have done research into charities and found the GWWC pledge exists at some point without LW.) But this is where I found them. And the people are nice. And having a specific community dedicated to becoming more rational is a helpful thing to have, and making a community of “everyone in the world who wants to be more rational” is not going to work. (I mean, LW is fragmented by necessity already.)

**ETA: beeminder was built by LWers so that’s a specific thing we owe to LW and the akrasia talk.

Similarly I will grudgingly acknowledge that there are people who found “try to be nice to people” through Christianity, and although “try to be nice to people” is not uniquely Christian, some people got exposed to it because of Christianity. I would personally find it deeply annoying if Christianity referred to itself as the Trying To Be Nice To People Community, but only mildly annoying if Christianity referred to itself as One Of Many Trying To Be Nice To People Communities With Some Added Weird Beliefs About Human Gods And Hating Sex and Money and Food and Fun* Being  Extremely Cautious About Problems Caused By Sexual And Material Greed (and that annoyance would mainly come from my own irrational issues with Christianity.) I don’t think LessWrong is a uniquely rational community, I think it’s a specific community for trying to be more rational which is suited to a specific subset of people who are trying to be more rational.

*NB: I know this is a strawman; I just have a Tragic Backstory Involving Jesus

I mean, I think you’re completely right about how it’s not just obvious vs. outlandish.  There is some stuff in the middle that is nontrivial and important and not widely available, or widely available but not usually phrased in a way that will work for certain people.

(In particular, the stuff about being actually open to changing your mind is really startlingly rare, even in academic communities.  Perhaps even more important, a lot of people compartmentalize and only apply some of these ideas to certain parts of their lives, which I think LW is somewhat unique in opposing.)

But if there are important, nontrivial ideas there, that makes it all the more important for them to be disentangled from the really outlandish ones and disseminated more widely.

My psychological motivation for making points like this is very close to "it’s like if Christianity referred to itself as the Trying To Be Nice To People Community.”  Except with the addition that there are a bunch of people who call themselves “Christians” but don’t believe in God, or in any Christian doctrine besides some of the moral values, but still call themselves Christian because of the shared bank of cultural references and shared interest in Trying To Be Nice To People.

If I thought these values were good, then I would want them to spread, but for that reason I would wish that the non-believing “Christians” would appreciate that lots of other people share their values, or would get on board with their values if it weren’t for the associations of “Christianity” with God and hell and so forth.  Yes, non-Christians aren’t as likely to catch your references to Second Thessalonians, but that really isn’t the most important thing here.

And meanwhile this world would just be confusing: I’d get people eagerly recommending “Christian” blogs to me for their supposedly unique commitment to Trying To Be Nice To People, and I’d read these blogs, and they’d be pretty good, but not really much different from secular blogs, and mainly distinct in that they make more Biblical references (without believing in God or anything).  And I’d hear people talking about how great “the Christian community” is, and I would never know whether, when they said that, they were picturing a fundamentalist preacher or just … someone really into Niceness.

And meanwhile, non-Christians everywhere would side-eye Christians and say, "who are you to say you have unique access to Trying To Be Nice To People?  Seems a little pretentious to me."  And I'd say, wait, these people actually have some real insights, and they're actually very good at being nice, and then I'd get the response "oh, great insights like 'gays will burn in hell forever'?" and I'd say no, I meant the other Christians, who are just people really into Niceness, and they'd say "but if you have to say you're nice that's kind of a red flag, why not just show it through action," and I'd say … and they'd say … 

And there’s an article in Harper’s about a scary fire-and-brimstone fundamentalist pastor and people say “isn’t it kind of weird that the ‘good’ Christians don’t distance themselves from this guy?” and I’d say “well, they feel uncomfortable distancing themselves from fundamentalists because, after all, they learned a lot of important things from the Bible” and that just doesn’t quite feel sufficient, and … 

It’s mostly a psychological frustration on my part, I guess.

(via queenshulamit-deactivated201602)

The Sequences and Our Community

dataandphilosophy:

If you’re following this in real time, you’ve probably seen “Maybe the real rationalism was the friends we made along the way.” This is going to be completely incomprehensible to someone not reading in real time, probably, so I’ll just write a “The Conclusions” post for it all later.

Thesis: Even though we don’t explicitly reference them often, Tumblr LW is still informed by and draws on the material on LW, as defined here and here. These are the ones the community accepts as the big important sequences for people to read. 

Rationality Materials

The most obvious comes from Kelsey's blog title: The Unit Of Caring, which is the name of a Yudkowsky post, "Money: The Unit of Caring."  Other Sequences is ~ Sequences by Others, and I detail below how that is mostly our community. The Core Sequences listed I discuss below, as they are sequences.

The Standalone posts that are recommended talk about the importance of signalling, scholarship, dust specks, predictions and calibration (which we do a little of, but probably less than average), akrasia (which we all fret about), charity (which we frequently discuss), statistics (see my blog name), AI, and decision theory (these last two I will grant as being distinct).

The Sequences

"The most important technique that Less Wrong can offer you is How To Actually Change Your Mind." We post about changing our mind, argue with each other to change our minds, introspect a lot to learn what we should change our minds about; my impression of Tumblr LW is that this is an important thing to us. We talk about the dangers of politics more than I would prefer; we talk about cached thoughts, reversed stupidity, the genetic fallacy, and motivated reasoning. Overly convenient excuses are implicitly swept away.

We don't explicitly talk about the 37 ways words can be wrong. But it's just so obvious to me that some people have it in the back of their heads, always poking at definitions and wondering what they mean, what they imply, what the connotations are, and trying to get the right language for things. I think that this is a common feature of this community that we don't see as frequently outside. I think that Mysterious Answers to Mysterious Questions is similar. We're used to tabooing language and coming up with alternate ways to explain our concepts. Reductionism, again: we have breakdowns of concepts and internal mental states here that I just don't see elsewhere.

Of the 11 sequences mentioned as Sequences by Others, four are by Scott (Yvain), one is by Alicorn, one is unfinished, and one is about intelligence tests and what they miss. So roughly half of them are either things we've thought about carefully or are written by community members.

In conclusion, using what seems to me to be a good measure of consensus LW opinion, Tumblr LW is a reasonably close fit. We are frequently familiar with the core sequences (and whatever you hear or think about the rest, 37 ways words can be wrong is astoundingly useful for everything, and particularly for philosophy and politics), and if we don't contribute many useful insights back into the main group (that I have heard of), we're still no worse than critical theorists in IR, a fitting thing for a group on tumblr.

I think a variable that needs to be included in this analysis – although a hard one to assess – is how widespread a given idea is outside of people who have read the Sequences.

Here's a silly thought experiment showing why this is important: imagine a world in which the Sequences were the same as they are now, except with the addition of some quantity of "trivially true" posts or sequences expressing or explaining ideas held by almost everyone.  These might comprise anything from lists of true facts ("Tanzania is in Africa") to correct, unexceptional expositions of things like arithmetic or high school chemistry.  (We could even imagine a great number of "trivially true" posts dedicated to more relevant topics, like a sequence covering the material you'd see in an introductory logic textbook, an uncontentious summary of the views held by various famous philosophers, or an explanation of measure theory and mathematical probability.)

We could imagine dialing up the number of such additional posts so that, say, 50% of the Sequences were trivially true, or 90%, or 99%, or whatever.

Now, one could do the sort of thing you are doing, except one would get spurious confirmations like "tumblr LW people sometimes use arithmetic."  For suitably many trivially true posts, the number of these pointless comparisons might dwarf the number of meaningful comparisons.

Of course you know all this; when you say things like “We post about changing our mind, argue with each other to change our minds, introspect a lot to learn what we should change our minds about,” you mean that LW tumblr does this more than average people do.  Which is probably true.  But if you look at the set of people who do these things, what fraction of them are associated with LW?

If the Sequences included a “trivially true” sequence explaining measure theory, then rationalists would be more likely to know measure theory than the average person, because most people don’t know measure theory.  However, the number of people who know measure theory (mostly from college classes) would still dwarf the number of rationalists.  And saying “you know measure theory?  You should call yourself a rationalist!” would seem very strange.

One of the reasons I don’t call myself a rationalist is that all of the ideas in the Sequences which I agree with seem to me to be widely held by people who have never heard of LW (if perhaps not widely held enough), while all of the ideas distinctive to the Sequences are ones I disagree with.  If you are to judge a rationalist by whether they broadly agree with the Sequences, then I guess I’m a rationalist – but that feels a bit to me like asking whether I “broadly agree” with a list of 999 trivially true statements like “Tanzania is in Africa,” followed by the statement “God is a 22-headed serpent who lives on the moon.”

The reason I am being so much of a stick-in-the-mud about this is that what the general public hears when they read about rationality is the distinctive stuff, not the widely believed stuff.  See for instance this Harper's article (PDF link).  It's an annoying article written in a sort of Encyclopedia Dramatica / Portal of Evil mindset, clearly trying to make its subjects sound as silly as possible.  But there is a reason it focuses on outlandish claims and practices: they're the interesting ones!  An article about how Eliezer Yudkowsky thinks we should all reflect more on the nuances of language would put the reader to sleep, or else make them think "I agree with him, why does this guy get this kind of coverage for saying something so banal and obvious?"

The kind of stuff you are talking about is not what the average person thinks of when they hear the word “rationalist,” if they think of anything at all.  From the outside, it’s a confusing label.  If there were people who were fans of the 1000-proposition list I described earlier, we would not view them as “nice sensible people who believe that Tanzania is in Africa and so forth.”

fierceawakening:

nostalgebraist:

fierceawakening:

robertskmiles:

nostalgebraist:

fierceawakening:

Can someone explain to me what hipsters actually are? I feel like I have a vague idea how to use the term, but I don’t actually know for sure.

What is it that’s so annoying about them?

It’s a very nebulous and contested term and I’m not sure there’s a clear answer, but here’s a stab.

Traits commonly attributed to hipsters include:

  * "snobbery about modern popular music (particularly indie rock)"
  * "'ironic' habits of speech and behavior, a refusal to speak plainly and sincerely"
  * "various specific style choices, like wearing vintage clothing, growing beards, wearing Converse shoes, and drinking Pabst Blue Ribbon"
  * "general 'pretentiousness,' in particular: boasting about one's familiarity with writers, artists, bands, etc. that are obscure but supposedly of high artistic quality"

It's not clear if there's a common thread here – it could just be a subculture with certain traits that happened to co-occur and don't have anything to do with one another.  In fact, I think it's more than one subculture.  The standard hipster stereotype is pretty left-wing, but there's also a strain of conservatism or just nihilism in hipster culture, a strain flowing out of VICE Magazine and its co-founder, Gavin McInnes (McInnes is a conservative contrarian; VICE is often accused, I think rightly, of "hipster racism").

But if there’s sort of a unified theory of what these people are doing and why they are “annoying,” it’s something like “they fetishize authenticity, but instead of just being ‘authentic’ by telling you what they mean, they develop this whole complicated set of behaviors that have associations with ‘authenticity’.  They’re aggressively playing a role called ‘not playing a role.’  It’s hypocritical.”

It’s easy to see how an authenticity fetish could be involved in the “I like this obscure band” thing – it’s an interest in things that are “pure” because they have not yet been “corrupted” by popularity, in people who are simply expressing themselves without consciously thinking about their audience.

The connection between authenticity and irony is more counter-intuitive, since they seem almost like opposites, but I think it's there.  It is harder to accuse someone of being inauthentic if it is never clear how sincere they are being.  Speaking and behaving in obscure ways fits into a sort of "mysterious outsider hermit" archetype.  You can't accuse the hipster of pandering to anyone because it's never clear why they are doing what they are doing.

Some of the stylistic choices have a clear relation to authenticity – for instance, hipsters adopted PBR because it was (ostensibly, though I've heard this is not really true) a favorite beer of the white working class.  Trucker hats are a common hipster thing for the same reason.  It's been suggested that the appeal of vintage clothing is a yearning for a "simpler" time, when people were more straightforward.

Running through all of this is the adoption of things that signify "direct self-expression" without necessarily being direct self-expression.  The hipster drinking PBR and wearing a trucker hat isn't doing those things because he's from a culture where they're what you do, but rather because he likes the (ostensible) simplicity and direct expressiveness of the white working class culture he's taken them from.  The interest in weird obscure bands is driven by the idea that these bands are the most directly self-expressive; yet very popular bands are often popular precisely because they express simple and common emotions, and you rarely hear hipsters saying "you know, Coldplay really is fun to listen to."

To sum up, the “annoyingness” comes from the feeling that these people are putting on a mask labelled “I’m not putting on a mask.”  It’s exasperating.  One wishes these people would just actually say what they mean, rather than perform this kind of artificial stylized version of “saying what you mean.”

(Disclaimer: this was an attempt that may have failed, this is all my personal judgment, I have no special knowledge in this area, this may not be a good post)

I think that’s a lot of it, but a key component you haven’t talked about much is the sense of *superiority*. Hipsters are thought to believe that their tastes are not only more authentic than the mainstream, but also more sophisticated and discerning, in a sense that is more than just a preference for a certain aesthetic. That’s a big cause of the annoyance I think, that, on top of the things you talked about, hipsters also “think they’re better than you”.

I think that’s my question, though: do they actually, or is the term just something you call someone who dresses a certain way or has certain taste?

Some from column A, some from column B.  I’ve met people who were “hipsters” in their style of dress and music preferences but who were nice, humble people without a sense of superiority.  On the other hand, I’ve also met people who pretty much were the negative hipster cliche, including the sense of superiority.

I think it’s just one of those kinda-derogatory terms that includes a set of neutral traits and also some negative ones, even though not everyone who has the neutral traits has the negative ones.  I’m trying to think of other examples.  “Valley girl”?  “Neckbeard”?  “Stoner”?

I think the thing that’s itching my brain a bit is I keep thinking about how easy many people find it to go, like, “oh, you’re a white girl who dresses a certain way and really enjoys Starbucks coffee. You’re clearly shallow. White girls amirite?”

So I’m thinking it seems similarly easy to look at a young white guy with a beard who dresses a certain way and has certain things playing in his headphones and go, “you hipster.”

That’s more where my question is coming from – how easy is it for your label to go wide? I don’t know whether I meet the people the label catches much – I’m not sure how much PBR drinking I’ve witnessed in my life, honestly, much less from whom.

So I can’t tell if it’s pointing at an actually questionable subculture or if it’s a shorthandy way to decide people are a certain thing without knowing them.

Hence me asking.

Part of this is also probably that I’ve heard the term “hipster irony,” but I have only a vague sense of what it means. It seems to mean liking or doing something knowing it’s bad or silly, but it also seems to be a particular kind of doing that, as I think a lot of people in the world have a few “so bad it’s good” things in their lists of stuff they love.

I have only a nebulous sense of this, so I gather that whatever it is is usually deemed worthy only of withering contempt. But I’m not sure what it is.

Things like “hipster racism” I understand even less. I don’t mean it doesn’t exist or isn’t worthy of deep scorn, I just mean I’m not sure how to parse the phrase.

I said earlier that I didn’t have any special knowledge in this area, but come to think of it, I lived in Portland, OR for 5 years (supposedly one of the biggest hipster places) and now I live in Brooklyn, also a big hipster place.  I have witnessed a lot of PBR drinking in my time.  So maybe I can help.

The stuff I’m describing, about a subculture with all of these qualities, really exists.  It might not be as well defined as, I dunno, “goths” or something, but it’s there.

And I don’t really think the label “going wide” is much of a concern, at least not in the kinds of places where there are lots of “actual hipsters.”  Part of the reason for this is that the term’s ability to “go wide” is inherently limited, because part of the definition is being into obscure stuff.  Once anything becomes widely known as, say, a “hipster band,” liking that band is no longer a central hipster characteristic, because at that point the hipsters are delving deeper, looking for less mainstream things.

A guy with a beard who listens to Animal Collective (a prototypical "hipster band") might occasionally get called a hipster in a good-natured way, but if he just likes Animal Collective the way a normal person likes a band and doesn't do the whole endlessly-delving-towards-more-obscure-bands thing, no one's going to confuse him for the "genuine article."

About "hipster irony": IMO the distinguishing characteristic is that rather than explicitly saying "this is so bad it's good," the hipster deliberately makes it unclear why they enjoy things, and what percentage of that enjoyment is ironic.  Watching a kitschy sci-fi movie from the 70s or 80s would be more of a hipster thing than watching The Room, because everyone knows that people watch The Room because it's bad, whereas with the kitschy sci-fi movie things would be more ambiguous.

Part of the point here is to have (as someone put it upthread) “plausible deniability” if someone calls you mainstream.  Even if the thing you like is deemed too mainstream, you can resort to the idea that you’re not enjoying it in a mainstream way.  (I once had a friend-of-a-friend who was a white hipster who was into Tyler Perry movies; I don’t know what level he was enjoying them on, but he clearly wasn’t a typical member of the Tyler Perry viewership just by virtue of being a white hipster, so the charge “you are enjoying a popular and mainstream filmmaker” bounces off of him.)

Another part of the point is evading the lower bound on how obscure you can get.  You can find a garage band who no one but you has heard of, but you can’t find a band which negative one people have heard of, so there’s a limit.  So instead of going more obscure, which you can’t do forever, you can go less obscure, but in a way that makes clear that the way you enjoy the thing is distinct from mainstream enjoyment.

If you want an example of something that lends itself very well to hipster irony, watch Wes Anderson movies.  I like Wes Anderson, but I think he’s considered a hipster filmmaker for a reason: his movies have this continual uncertainty of tone where scenes that would normally be dramatic are very understated, or characters that would normally be “serious” are given campy comedic traits.  The result is plausible deniability: you can enjoy Wes Anderson scenes dramatically while pretending you are enjoying them comedically, or vice versa, or any number of other permutations.

(Again, this is all different from stuff like watching The Room ironically, where everyone involved knows exactly what level the movie is being appreciated on.)

“Hipster racism” is basically racism + hipster irony: statements or behavior that would be racist if taken literally, but which are presented in that ambiguous way meant to give plausible deniability.  E.g. a hipster making an openly racist statement on twitter, but on a twitter feed written in the hipster irony style, full of somewhat strange statements, some of which they clearly don’t believe.  If you Google “hipster racism” there’s a bunch of writing out there about it.

To try to get back to your original question: a lot of this stuff is both real and so specific and detailed that I don’t think there’s much danger of “hipster” being misapplied as a broad stereotype.  There’s such a specific tone, and such specific interest patterns beyond just liking particular bands and so forth, that if you’ve spent some time around “real hipsters,” you aren’t going to just automatically lump the beard-and-Animal-Collective guy in with them.

(via fierceawakening)

mtthwmorelikemttheww:

I guess the best way to learn latin would be to take actual classes but like, what’s the second best way?

This isn't helpful for starting out (I did that part in a class), but I remember that when I had to self-study to jump ahead from "doing grammar exercises" to "reading actual Latin," I found the oddly named book "Excelability In Advanced Latin" very useful.

The main hurdle in learning Latin is grammar; vocab is very much secondary.  Not because there isn’t a lot of vocab, but because you’ll never fully be “dictionary-free.”  A lot of famous Latin texts use specialized words, like military terminology, which you might not know even if you were a whiz at, say, reading love poetry.  So try to get the common words (mostly verbs) down, and be prepared to always have a dictionary at hand.

What’s special about the grammar, from an English speaker’s perspective, is that the grammatical role of a word is usually determined by the word’s ending, rather than its position in the sentence.  In English, “dog” is the subject in “dog bites man” because it comes first; if I said “man bites dog,” man would be the subject.  In Latin, I can put the word for “dog” wherever the hell I want, but if it’s the subject it’ll be “canis” and if it’s the object it’ll be “canem.”
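
If it helps to see that idea in miniature, here's a toy Python sketch (my own illustration, not from any textbook; the five-word "dictionary" and the function name are just made up for this example, and real Latin morphology is vastly richer than a lookup table). The point it demonstrates is exactly the one above: the form of the word, not its position, tells you who is doing what to whom.

```python
# Toy illustration only: a tiny lookup table of Latin forms.
# "canis" = dog as subject, "canem" = dog as object;
# "homo" = man as subject, "hominem" = man as object;
# "mordet" = "bites" (third person singular).
ROLE_BY_FORM = {
    "canis":   ("dog", "subject"),
    "canem":   ("dog", "object"),
    "homo":    ("man", "subject"),
    "hominem": ("man", "object"),
    "mordet":  ("bites", "verb"),
}

def gloss(sentence):
    """Report who bites whom, paying no attention to word order."""
    roles = {}
    for word in sentence.lower().split():
        meaning, role = ROLE_BY_FORM[word]
        roles[role] = meaning
    return f"{roles['subject']} bites {roles['object']}"

# All of these word orders mean the same thing...
print(gloss("canis hominem mordet"))  # dog bites man
print(gloss("hominem canis mordet"))  # dog bites man
print(gloss("mordet hominem canis"))  # dog bites man
# ...and only the endings flip the meaning:
print(gloss("homo canem mordet"))     # man bites dog
```

(Real Latin obviously won't yield to a five-word table, but "the ending tells you the role" is the principle all the memorization below is buying you.)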

For each noun, there is an ending corresponding to each grammatical role the noun could take.  The ending patterns fall into different categories called declensions.  To know which pattern of endings a noun uses, you have to know its declension.  There are 5 declensions, although the last two are pretty rare.

Adjectives follow something similar to the declension rules, but to know which pattern to use with a given adjective, you have to know the gender of the noun it’s describing.  So, for each noun, you need to know two things: declension and gender.

Verbs use different endings to indicate whether something is past, present, future, etc., just like in English.  As with nouns, the endings come in different patterns, called conjugations.

So a first start for learning Latin would look something like:

  1. Learn about the different roles a noun can take in a sentence (these are called “cases” and roughly correspond to things like “subject” and “object”), and memorize the declension patterns, or at least the 3 that are commonly used.
  2. Sub-problem of #1: learn a bit about this weird thing called “the ablative case” that doesn’t really correspond to anything we have in English.  Nouns have to be ablative when they’re following certain prepositions, so memorize those prepositions.  Ablative case also does a bunch of other stuff which you can learn later.
  3. Learn about the commonly used verb tenses, and the verb conjugations, and memorize the conjugation patterns.
  4. In the course of doing all this, try constructing some simple “dog bites man” type sentences (man bites dog, dog bit man, dog was biting man, etc.)

That’s a ton of memorization but you need to do it all to know basic Latin grammar.  You can find basic info about the above in various places online.
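
To make "memorize the patterns" a bit more concrete, here's roughly the shape of the thing (a toy excerpt I've written out from memory, macrons omitted, so double-check it against whatever reference you end up using; the variable names are just mine): one declension's singular endings and one conjugation's present-tense endings.

```python
# Singular case endings of the 1st declension, illustrated with "puella" (girl).
FIRST_DECLENSION_SG = {
    "nominative": "-a",   # puella  (the girl, as subject)
    "genitive":   "-ae",  # puellae (of the girl)
    "dative":     "-ae",  # puellae (to/for the girl)
    "accusative": "-am",  # puellam (the girl, as object)
    "ablative":   "-a",   # puella  (by/with/from the girl)
}

# Present active endings of the 1st conjugation, illustrated with "amo" (I love).
FIRST_CONJUGATION_PRESENT = {
    "1st sg": "-o",     # amo    (I love)
    "2nd sg": "-as",    # amas   (you love)
    "3rd sg": "-at",    # amat   (he/she loves)
    "1st pl": "-amus",  # amamus (we love)
    "2nd pl": "-atis",  # amatis (y'all love)
    "3rd pl": "-ant",   # amant  (they love)
}
```

Steps 1 and 3 above basically mean "learn several tables like these": one per declension for the nouns, and one per tense for each conjugation of the verbs.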

If you want to go more advanced you can learn about other stuff, like:

  1. The “perfect passive participle,” which is a verb form that basically means “having been X’d.”  The ancient authors LOVE this thing.  Latin texts are full of stuff about “the man, having been bitten,” etc.
  2. All the weird extra stuff the ablative case can do.
  3. Passive verbs.  They’ve got their own endings.
  4. “Deponent verbs,” which are assholes that use passive endings when they’re active, just to annoy you.
  5. The subjunctive, which is a whole other thing you can do with verbs with its own special endings (technically the subjunctive is a “mood,” and all of the verbs described above are instead in the “indicative mood”).  Like the ablative case, the subjunctive mood is used for all sorts of miscellaneous things, so you’ll have to memorize those too.
  6. Other stuff: Future active participles, gerundives, gerunds …

If you've actually managed to cram all that crap into your head, 1) well done and 2) you can start reading actual Latin now.  This is tough at first, because Latin authors aren't like textbook authors: they will just throw the nerdiest grammar technicalities at you all the time with no mercy.  Also, the sentences are often really long, and word order doesn't matter.  So every sentence you read will at first look like a big soup of disconnected words, and you have to solve a logic puzzle where you find roles for all the words in a way that accounts for all their endings.  Reading Latin consists of solving such puzzles over and over again.

If you are starting out reading real Latin texts, I recommend reading Catullus – he's relatively easy to read, and is also a lewd, hilarious guy who writes obscene poems dissing people and stuff.  (For reading poets like Catullus, you should probably also read a bit about poetic meter, "scansion," elision, and related stuff.)

… I have no idea if any of that helped, and it was longer than I meant it to be, and may just sound intimidating?  Mostly I wanted to get across the progression of “1. learn basic declension and conjugation stuff, 2. learn nerdy advanced stuff, 3. read real Latin,” and how you really do have to do 1 and 2 before 3, and how there is a lot of memorization involved.