
seirye:

moonlit-seashells:

the-awesome-quotes:

Sea Slugs That Prove Aliens Already Live On Planet Earth

seirye

SCREEEEE

(via unstampableface-blog)

amaranththallium:

thepokeduck:

michaelblume:

So I notice I have a bunch of really confused and uncertain beliefs about Omega-3s.

“Omega-3s are fats and you need them to make brains correctly. If you don’t get enough of them your brain won’t work right and maybe you’ll get depressed”

“It might not be a question of enough, though. Something about ratios? Omega-6/Omega-3? And the ratio has to be just right? Or you just need to be sure there’s more Omega-3?”

“Eat wild salmon, you’ll be fine”

“Also, grass-fed beef and dairy is good too”

“And maybe tuna and spinach and broccoli? And flax?”

“Supplements never help in studies, nobody knows why”

“Pretty sure gnomes are just sneaking in whenever they study them and swapping the supplements with the placebos”

“Farmed fish have other fats in them. Omega-6s? Less good for you”

“Same with grain-fed beef”

“Eggs have omega-3s in them if the chickens are allowed to walk around and eat insects and stuff”

“Except sometimes the chickens are fed an ‘omega-3 fortified diet’? Does that even work? Won’t this just anger the gnomes?”

“Gnomes are assholes”

“Are gnomes high in Omega-3?”

“Do we have to make sure they’re ‘pasture-raised’ gnomes?”

“Omega-3s never show up on nutrition labels. Also there’s a bunch of different *kinds* of Omega-3s and those certainly don’t show up on nutrition labels and like some of them are more or less necessary for brains? Maybe?”

“Fish oil needs to be stored away from sunlight or heat or direct eye contact. It’s important to keep your sight-line below the fish oil, so the fish oil does not perceive you as a threat.”

“So basically eat wild salmon”

Does…anyone know more about this than I do? Please?

I suspect someone else will be able to answer this with more detail and facts, but here’s the gist of it, as I understand.

Lipids are what make up our cells’ membranes. Omega-3s and omega-6s are the majority of these lipids. Our cells are happiest when their membranes are made up of about 50% omega-6 and 50% omega-3. We tend to wind up consuming a lot of vegetable oil (primarily omega-6) in the American diet, because corn is cheap, etc. etc.

So in order to make our cell membranes happy, we need to bring the ratio of omega-3s back up, and common sense suggests that you do this by consuming oils high in omega-3: fish oil, etc. etc.

Biology is complicated, studies aren’t great, people eat different amounts of vegetable oil in their diets, etc. etc., all of which makes it hard to research why this intervention may or may not be effective.

There you have it. I guess?

oooh! is someone talking about my ~special interest~ ????? :D :D :D :D

What is the difference between omega-3s and omega-6s?

Fatty acids consist of a carboxylic acid group (in orange at the left end) followed by a hydrocarbon chain. The chain might have no carbon-carbon double bonds (in which case it is saturated), one double bond (monounsaturated), or many double bonds (polyunsaturated).

One way fatty acids are classified is by noting where the first double bond sits, counting from the far (omega) end of the chain, opposite the carboxylic acid group. Omega-3 fatty acids have it at the third carbon from that end; omega-6 fatty acids have it at the sixth.

[image: diagram of a fatty acid chain, marking where the first double bond falls in omega-3 vs. omega-6 fatty acids]

But there can be any combination of single bonds and double bonds after this first one from the end, so there are lots and lots of different types of omega-3s and omega-6s! As you might suspect, it is only specific ones that are biologically relevant here. 
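(A concrete example of the naming, using the standard lipid shorthand rather than notation from the original post: alpha-linolenic acid is written 18:3 (n-3), meaning an 18-carbon chain with three double bonds, the first at the third carbon from the omega end; linoleic acid is 18:2 (n-6); EPA is 20:5 (n-3); and arachidonic acid is 20:4 (n-6). These are exactly the ones discussed below.)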

What difference do all these double bonds make?

There are two main differences that the double bonds create at a chemical level. The first is that the double bonds are much more reactive than the single bonds, which means that the more double bonds a fatty acid has, the more quickly it will spoil and go rancid. Additionally, if you cook with it, the double bonds will react with the food. Both of these are typically bad for you from a health point of view, because your body isn’t as prepared to deal with the random chemicals that might be created, and they tend to kind of gunk up the system to some extent. Atherosclerosis seems to be partially caused by this, for example.

The second difference is that the (cis) double bonds put rigid kinks in the chain, which keeps the molecules from packing together tightly. This makes unsaturated fats less compact (e.g. butter, animal fats, and coconut oil are mostly saturated fats, while vegetable oils and fish oils are mostly unsaturated fats), and more slippery (which is why fish have lots of them - to make swimming more efficient).

Generally, omega-3s are more reactive and more slippery than omega-6s, because their first double bond sits closer to the end of the chain, leaving room for more double bonds overall.

How does your body use omega-3s and omega-6s?

There are two very important fatty acids that your body uses: arachidonic acid (an omega-6) and eicosapentaenoic acid (an omega-3).

This omega-6 fatty acid here is called Arachidonic acid. It’s actually not named after spiders (arachnids), it’s named after peanuts (L. arachis), because it’s similar to a fatty acid found in peanut oil! 

[image: chemical structure of arachidonic acid]

It’s at the head of the arachidonic acid cascade! Unfortunately, while this might sound like a cool ride, it actually kinda sucks. You see, your body keeps some of this stuff tied up in phospholipids in the membrane of every cell. When something bothers the cell, like some sort of toxin or irritant, your cell cuts it loose, starting the grand cascade!

What is all this for? It’s the beginning of a very complex warning system for your cells to respond to dangers! First, it might get tagged with oxygen, or extra double bonds in certain places to communicate more specific things (the modified forms are called eicosanoids), and then it will diffuse to nearby cells, alerting them of the danger! The effects on these other cells are really complicated, because they depend on the kind of tissue the cell is part of, the specific signals getting sent, the current state of the cell, etc… etc… The overall effects are things like blisters, a bump on the head, redness, pain, fever, feeling sick and tired. These effects aren’t directly caused by whatever started the problem; they are actually part of the inflammatory response, which is a complex defense mechanism to protect your body and cells from the bad thing.

Obviously, you don’t want this response to just keep cascading and be on all the time; you want it to turn off eventually. There’s another fatty acid reserved for giving the anti-inflammatory signal - everything is fine again! The anti-inflammatory ones are based off of eicosapentaenoic acid (EPA) - an omega-3 fatty acid - instead of arachidonic acid, and as before, there are versions tagged with oxygen (also called eicosanoids, just to be confusing) that signal extra information.

[image: chemical structure of eicosapentaenoic acid (EPA)]

Keep in mind that these signalling pathways are very complex, and we are still learning how they all work, and that not everything works in the most intuitive way (and that I’m glossing over a ton of stuff here). 

So how does what I eat affect all of this?

Your body can’t actually make either of these fatty acids all the way on its own! You need some omega-6 fatty acids and omega-3 fatty acids for it to be able to make them. 

Linoleic acid is the omega-6 that your body generally creates Arachidonic acid from! It’s a major component of almost every vegetable oil, and is also found in nuts and animal fats in decent amounts. So you almost certainly get plenty of it!

Eicosapentaenoic acid (EPA), on the other hand, can be created (in your body) from the omega-3 fatty acid alpha-linolenic acid (ALA), which is also found in vegetable oils! However, most vegetable oils contain much more linoleic acid. But even worse, your body really sucks at this conversion. Just eating a ton of flax seed (which is high in ALA) isn’t going to help that much because of this.

Now this wouldn’t be so bad, except that the ratio of dietary omega-6 to dietary omega-3 affects how much arachidonic acid and EPA is around, which in turn skews your body’s overall inflammatory response. And unfortunately, since arachidonic acid is easy to make from linoleic acid (which you can easily get plenty of), while EPA is hard to make and the ALA you need to make it is less common than linoleic acid, your ratio can easily get skewed toward being more inflammatory overall.

This is really bad because being inflammatory all the time is miserable, it makes you feel sick, tired, hurt, depressed - and in the long run, it causes chronic damage like heart disease. 

Fortunately! There is a nice “hack” for getting more EPA - which is to eat it directly! Now what has EPA???? I’m sure you’ll never guess… Fish! And luckily for vegans, fish don’t actually make it themselves, but get it from algae! 

And that’s why taking fish oil (often just called omega-3, which is somewhat misleading) has so many benefits! It takes you out of this miserable inflammatory mode, and lets your body know that everything is good again!

So what’s this I hear about fish oil supplements not actually working super well??

Remember how back at the beginning, we talked about how omega-3s have more double carbon bonds? And if you look at EPA in particular, it has a whole bunch of double bonds! 

What’s the problem? Double bonds are more reactive, which means they go bad more quickly, especially if exposed to heat, light or oxygen! This means fish oil goes bad really quickly. Even fish themselves go rancid faster than most meat. In addition to having the signalling properties messed up in some random way, the oxygenated fatty acids are often toxic :( 

From what I understand, if your omega-6 to omega-3 ratio is really out of whack, it can still be good to have even somewhat rancid fish oil, because it still gives a general anti-inflammatory response. I think this is something that is not understood very well yet, though (hence all the conflicting information). 

Antioxidants to the rescue!!!! Do you know what the whole point of antioxidants is? It’s to keep things from oxidizing and going bad like this! And guess who has to live with a bunch of highly reactive EPA in their body all the time? That’s right, fish! When you eat fish, it all comes packaged with lots of antioxidants, so it’s a lot less likely to be bad. If you can’t eat fish directly, you can buy fish oil (or EPA from other sources) that is refrigerated and in an opaque bottle, which is also less likely to be bad. (I don’t really know what sort of stuff is available here for vegans, but presumably the algae that fish eat will have antioxidants around as well.)

What about brains? I came here for the brains.

The inflammatory signalling stuff is important for brains, and in particular for depression, but another omega-3 fatty acid is also v important! Docosahexaenoic acid (DHA)! 

DHA is really important for the brain: 40% of the polyunsaturated fat in the brain is DHA! When you don’t get enough of it, your brain function declines.

Like EPA, it can be made from ALA, but poorly, and like EPA, there is plenty of it in fish! Also like EPA, it goes rancid easily. 

It’s also a major component of sperm - so use that information as you will :p

What are the best sources?

Salmon seems to be the best source of both EPA and DHA! Other fish and algae are decent sources as well.

Corn is an especially bad source of omega-3s: 98% of the polyunsaturated fat in corn oil is the omega-6 linoleic acid. Cows and chickens, like us, need dietary ALA to make EPA and DHA, so corn-fed cows and chickens will have a lot less of these than grass-fed cows or insect-fed chickens (though in any case, beef, chicken, and eggs don’t seem to be particularly good sources of omega-3s).

Farmed fish are fed fishmeal, which is made from smaller, less-desirable fish. This actually means the omega-3 in farmed fish is more concentrated than in wild fish (but so are things like mercury, unfortunately). However, farmed fish are increasingly being fed other things made from vegetables, which typically means they will have less omega-3 (fish ultimately get their omega-3s from algae). Overall, this doesn’t seem to be a huge deal as far as omega-3 goes - both farmed and wild fish have decent amounts.

(via jiskblr)

culturenlifestyle:

Nature-Inspired Swirling Illustrations by James R. Eads

Los Angeles based multi-disciplinary artist and illustrator James R. Eads’s stunning illustrations are known for their unique style and technique. Echoing van Gogh’s signature brushwork, with its colorful, fast-moving strokes, Eads’s work reveals a meditative and soothing connection with nature and humanity. Both gentle and powerful, the swirling illustrations contain a surrealist and ethereal touch.

(Source: culturenlifestyle.com, via asocratesgonemad)

crossconnectmag:

Architectural Watercolors by Luca Massone

Luca Massone was born in Genoa in 1967. In 1984 he received his art diploma from the School of Art “N. Barabino” in Genoa. He later graduated in architecture from Genoa University and obtained his license to practice as an architect. In 2011 he decided to pursue a parallel artistic career that draws on his artistic and graphic abilities, depicting, in a very personal style, views and perspectives of Genoa and of the neighborhood where he lives - Pegli.

I decided to pursue my artistic career after many years in which design helped me communicate my ideas to other people. I make my ink and black ballpoint pen drawings directly on the paper, without preparatory sketches, so as to preserve the freshness and spontaneity of the stroke. The same goes for my fantasy landscapes: I usually start drawing without knowing precisely what the end result will be, which I find very risky but fun!


CrossConnectMag on Facebook - a place that is definitely worth a visit!

posted by Margaret

(via skimble-shanks-the-railway-cat)

it’s been far too long: a bayes effortpost

scientiststhesis:

antisquark:

jadagul:

su3su2u1:

scientiststhesis

That last sentence is the fundamental disagreement, I’d say. To a Bayesian, the way you measure the quality of your inferences is by seeing how close that inference is to what Bayes says it should be.

And I’m saying that this is a bad way to measure the quality of an inference.  You need to measure it based on “does this inference do what I need it to do.”  The reason people use Dempster Shafer instead of Bayes in sensor fusion problems is that it matches reality better.  

The Bayesian thesis, in this case, says that when you split the data into two sets, and draw inferences about the second based on the first, then using Bayes’ Theorem on the first will always give you the best inferences about the second, and whenever it looks like some other method is doing better, it’s because you injected information into this other method that you didn’t allow Bayes.

But you just said best inference was defined by Bayes, in which case Bayesian inference is always best by definition. 

I stipulate you don’t actually believe that, and do think you need to validate the model - in which case your choice of validation metric matters. Should you use a validation metric informed by the problem you are working on (i.e. one that accounts for how the inference will be used)?

Well, for what it’s worth, I’m using a Dynamic Bayesian Network at my lab right now to model the production of certain chemical compounds as a function of transcriptional information from a microbial ecosystem, so at the very least I put my money where my mouth is. And the problem is being solved.

(And I’m doing exactly the thing you said, about splitting the data into two sets, naturally, that’s how you validate models :P)

So that last aside seems to indicate you don’t actually trust in Bayes- you are validating your model!  What are you using as a metric?  

Let’s say someone working with you came along with a non-Bayesian model and it turned out to work much better, based on this metric.  Would you switch to the new model, or insist that there must be some information you can put into Bayes that will let you do as well?  Is it worth your time to try various Bayesian analyses until you get the same result?  

Now, imagine a situation where you don’t have any reasonable way to construct a prior- something like ‘you have N samples from an unknown distribution F.  Estimate F’  

Frequentists have a good, non-Bayesian method to estimate F based on the law of large numbers. I don’t know a clean Bayesian way to do this. If Bayes hits a wall at such a simple problem, why should I expect it to do perfectly elsewhere?
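(For concreteness - my illustration, not su3su2u1’s: the standard frequentist estimator here is presumably the empirical distribution function, which the law of large numbers / Glivenko-Cantelli theorem guarantees converges to the true F, no prior required.)

```python
import random

# Sketch of the (presumed) estimator: the empirical CDF,
# F_hat(x) = (# samples <= x) / n, which converges to the true F
# as n grows, by the law of large numbers.
def ecdf(samples):
    n = len(samples)
    def F_hat(x):
        return sum(1 for s in samples if s <= x) / n
    return F_hat

samples = [random.gauss(0, 1) for _ in range(10_000)]
F_hat = ecdf(samples)
print(F_hat(0.0))  # ~0.5 for a standard normal
```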

So I think the useless-but-philosophical point is this.

Any admissible decision rule is equivalent to a Bayesian inference with some prior. You’ve often asked the reasonable question “Wait, if you’re picking your prior to give a frequentist result, why is that Bayesian instead of frequentist?”

But from the Bayesian perspective, your prior represents the information you have about your data that isn’t your data. (See the Gelman post I mentioned recently). So if you have reason to think that your frequentist analysis (or whatever) is outperforming your Bayes-with-uniform-prior or whatever, that is “information about the problem” that should be incorporated in your prior.

In other words–everyone agrees the hard problem of Bayesian analysis is “where do I get a prior?” The validation step isn’t to validate the idea of doing Bayesian inference, it’s to validate the prior.

So ignoring computation difficulty (which I know jack all about), a philosophical Bayesian would say, oh, you shouldn’t do a Bayesian analysis and a frequentist analysis and…; you should be doing Bayesian analyses with various priors, one of which is uniform and one of which is exponential and one of which mimics frequentism etc. And when you figure out which prior works well, that tells you what model and prior you should be using for your situation.

In an ideal theoretical approach you don’t actually do Bayesian analysis with several different priors. You use one prior, which might be a convex linear combination of several models among which you are uncertain. The most radical example of this is the Solomonoff prior, which includes *all* computable models.

Yeah that’s exactly it. Bayesianism-as-a-philosophy is… pretty contentless, in practice, at least right now. We don’t have a Solomonoff prior, since it is literally uncomputable. We can try to create reasonable approximations to it, but we’ll always have P(actually it’s something I haven’t thought of) ~ 5% or so.

In practice, I don’t think there’s any disagreement between @su3su2u1 (who I can’t mention for some reason) and me. In scientific practice in the real world, we’ll probably do very similar things. Bayes just gives me the background theoretical/philosophical framework for it all.

But I agree with jadagul, the point is mostly philosophical. It’s not completely philosophical because sometimes this informs what models I’m going to try first, but well. Yeah.

(This got longer than I expected, and surely far longer than it needs to be.  Sorry about that, and sorry if it’s pedantic / belaboring points everyone knows.)

There is a subtlety here, which is both philosophical and practical (I think).  It’s that when you define an ideal you’re trying to approximate, the way you describe that ideal may produce a non-unique metric of comparison to the ideal.  Even if the ideal would be perfect if you could actually do it, you also want the ideal to have the property that “small deviations from this ideal produce small deviations from perfection.”

That’s a pretty hand-wavy statement, so here’s a concrete example.  In math, it’s possible to write down multiple infinite series that all converge to the same thing.  All of these are equally good “ideals,” so to speak – if you could actually add up all the terms, you would get the exact answer.  However, if you want to approximately compute an answer, you’re only going to be adding up a finite number of terms, so it matters how fast the series converges.

For instance, “tan(pi/4) = 1, so pi is 4 times the arctangent of 1” is a perfectly good characterization of pi.  (It’s an “ideal,” in the above language.)  In a certain sense one could say the following: “pi is 4 times the arctangent of 1, so an approximation of pi is good exactly insofar as it strives to approximate 4 times the arctangent of 1.”  But one has to be careful when interpreting that statement!  If you interpret it as “to approximate pi, I should write down a formula for 4 times the arctangent of 1, and then approximate that,” you’ll get the Leibniz formula, which converges very slowly, and isn’t useful for approximation.

Instead, if you want to approximate pi, you should use one of the known series that converges fast.  And in one sense these are perfectly consistent with the “arctangent ideal”: that is, using these series gets you closer to the number “4 times the arctangent of 1,” and one can justify them on that basis.  But compared to the Leibniz formula, they look less like an approximation of “4 times the arctangent of 1.”  If you sat down and simply thought “I want to approximate 4 times the arctangent of 1, what should I do,” your first stab would probably get you the Leibniz formula, which wouldn’t be good.

On the other hand, the series that work better are based on much less intuitive characterizations of pi.  You’d never get them by just sitting down and thinking “hmm, what’s a nice, simple, natural way to characterize the exact value of pi?”  The intuitive ideal gives you a bad practical method, and less intuitive ideals (which are equally good as ideals, i.e. they all equal pi exactly) give you better ones.
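(A quick illustration of the convergence gap being described here - my sketch, using standard formulas, not anything from the original post. Both series equal pi exactly in the limit; only the rate of convergence differs.)

```python
import math

# The "intuitive ideal": pi = 4 * arctan(1), via the Leibniz series.
# Converges painfully slowly.
def leibniz_pi(n_terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# A less intuitive but equally exact characterization:
# Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239).
def machin_pi(n_terms):
    def arctan_series(x, n):
        return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(n))
    return 16 * arctan_series(1 / 5, n_terms) - 4 * arctan_series(1 / 239, n_terms)

for n in (10, 100):
    print(f"{n:>3} terms: Leibniz error {abs(leibniz_pi(n) - math.pi):.1e}, "
          f"Machin error {abs(machin_pi(n) - math.pi):.1e}")
# At 10 terms the Leibniz error is ~1e-1; Machin is already at machine precision.
```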

The analogy here is that “Solomonoff induction” is (or could be like) the “arctangent ideal.”  It’s definitely a clean, intuitive way to characterize ideal reasoning, such that you can look at it and immediately say “yep, if I could do that exactly I certainly would,” just as anyone who knows trigonometry can sit down and say “yep, pi sure is 4 times the arctangent of 1.”  But if you try to characterize the quality of a method by looking at it as a truncation of Solomonoff induction, there’s no guarantee that you aren’t doing the same thing as someone who uses the Leibniz formula.  In other words, “Bayesian methods” (roughly, truncations of Solomonoff induction) may be worse approximations of ideal Bayesian reasoning than certain “non-Bayesian” methods, just as truncations of the intuitive formula for “4 times the arctangent of 1” aren’t very good approximations of the number “4 times the arctangent of 1.”

Are there reasons to think this might be true?  Well, you mention the issue of P(actually it’s something I haven’t thought of).  Solomonoff induction doesn’t have this problem, but “Bayesian methods” in the real world do, so we have to check how much this deviation from perfection costs us.

And what it costs us is basically: “if there’s a really good theory you haven’t thought of, its probability won’t go up when you observe all the great evidence in its favor, and likewise the probability of the theories you have thought of will be too high.”  (There is a paper which makes this claim formal, although I’m not entirely satisfied with the presentation.)

Now, you could justly object that this is an impossible problem to get around and that any statistical method (in the usual sense of the term) will have it.  If you have not thought of general relativity yet, then even if you have seen every observation in its favor, you won’t be able to say “I’m much more confident in GR than Newtonian gravity” (because you don’t know about GR).  At best, using the observations, you’ll become less confident in Newtonian gravity and more confident in “stuff I haven’t thought of.”  But with Bayesian methods this can sometimes go very wrong – the paper linked above gives toy examples where (say) you observe new evidence that supports a theory you haven’t thought of, and your probability for your old theory should go down to 10^(-3), but instead it stays at 0.99999.  If you want to use probabilities as degrees of belief – and, say, make decisions on their basis – this is pretty bad!  Even if you don’t have a good theory yet, you’d want to at least know not to bet super-confidently on the existing one.  You’d want to know how little you know.

(And these terrible bets would not be made by a perfect Solomonoff inductor, so this really is a case where we’re using something like the Leibniz formula – doing a naive truncation of the ideal, and ending up with a really bad approximation of the ideal.  Could we get better betting odds with another method?  Perhaps – but if so, it might not look like a “Bayesian method,” just as good series for pi may not look like a “series for 4 times the arctangent of 1.”  Strange business!  To be closer to the perfect Solomonoff inductor you may in fact need to ditch “Bayesian methods.”)
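(Here is a tiny concrete version of that failure mode - my own toy construction, not the example from the linked paper. The true hypothesis is missing from the sample space, and conditioning piles near-certainty onto the least-bad wrong theory.)

```python
import math

# Posterior over a *truncated* hypothesis space of coin biases.
def posterior(hypotheses, prior, data):
    logs = [math.log(pr) + sum(math.log(p if flip else 1 - p) for flip in data)
            for p, pr in zip(hypotheses, prior)]
    m = max(logs)
    weights = [math.exp(l - m) for l in logs]  # shift to avoid underflow
    total = sum(weights)
    return [w / total for w in weights]

# True coin: P(heads) = 0.9. The agent has only thought of 0.5 and 0.55.
data = [1] * 90 + [0] * 10  # 90 heads in 100 flips
print(posterior([0.5, 0.55], [0.5, 0.5], data))
# -> roughly [0.0005, 0.9995]: near-certainty in a badly wrong theory,
# because "something I haven't thought of" isn't in the sample space.
```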


A different look at why “Bayesian methods” might not be good approximations is provided in this Cosma Shalizi post (you didn’t think I was going to let you off without one of those, did you?).  The post says a lot of stuff, but the part I’m referring to here is the idea that Bayesian updating is formally identical to the (discrete-time) Replicator equation, which models natural selection in a population of fixed types (no mutation).
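(For reference, the identity in question, written in standard notation rather than Shalizi’s: Bayesian conditioning and the discrete replicator map have the same form, with prior probabilities playing the role of population shares and likelihoods playing the role of fitnesses.)

```latex
p_{t+1}(h) = \frac{p_t(h)\, L(h)}{\sum_{h'} p_t(h')\, L(h')}
\qquad \longleftrightarrow \qquad
x_i(t+1) = \frac{x_i(t)\, f_i}{\sum_j x_j(t)\, f_j}
```

where L(h) is the likelihood of the data under hypothesis h, and f_i is the fitness of type i.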

If we continue the analogy, we can think of Solomonoff as a bizarre sort of “perfect evolution” in which there is no need for mutation because every possible species already exists, and so the fittest simply take over more and more of the population.  This is indeed one possible way of characterizing “ideal evolution” (like “4 times the arctangent of 1″ is a way of characterizing pi) – it would indeed do better than actual evolution, and actual evolution could be said to be good insofar as it’s approximating it.

But now, in the analogy, “Bayesian methods” are not the natural selection we know and love, but instead a type of process that says “okay, let’s not have mutation (since it isn’t there in our characterization of the ideal), but since we can’t think of all possible organisms, let’s just list all the organisms we can think of, and then let the fittest survive among those.”  This might be an okay idea, depending on what you’re trying to do, but it isn’t what got us our endless forms most beautiful, and it is surely misleading to say that this is “the only correct way to evolve organisms” because it approximates “ideal evolution.”  At least in this context, it’s obvious that your ideal is misleading you by not including mutation, and that including mutation in your practical methods might be a good idea.

(Back on the statistics side of the analogy, this would correspond to letting your hypotheses randomly mutate – that is, a genetic algorithm, with the fitness given by the conditional likelihood. This is not generally what people have in mind when they think of “Bayesian methods,” but hey, it might actually be a better approximation of Solomonoff induction. Shalizi speculates a little about this at the end of the post, but AFAIK he hasn’t done more work on this idea, which is a shame, because it’s really interesting. I would be kind of surprised if no one has done this kind of thing, and would love a pointer to the literature.)
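(A toy sketch of that genetic-algorithm idea - all details made up for illustration, not code from Shalizi or anyone in this thread. The selection step is exactly the Bayes/replicator reweighting; the mutation step is what pure conditioning lacks.)

```python
import math
import random

# Hypotheses are candidate means mu for data assumed Gaussian with sigma = 1;
# "fitness" is the likelihood of the data under mu.
def log_likelihood(mu, data):
    return sum(-0.5 * (x - mu) ** 2 for x in data)  # up to a constant

def evolve(data, pop_size=50, generations=40, mutation_sd=0.1):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        logw = [log_likelihood(mu, data) for mu in population]
        m = max(logw)
        weights = [math.exp(lw - m) for lw in logw]  # shift to avoid underflow
        # Selection: resample in proportion to likelihood -- formally the same
        # reweighting as a Bayes/replicator step over the current types.
        population = random.choices(population, weights=weights, k=pop_size)
        # Mutation: the step plain Bayesian conditioning never takes.
        population = [mu + random.gauss(0, mutation_sd) for mu in population]
    return population

data = [random.gauss(3.0, 1.0) for _ in range(20)]
pop = evolve(data)
print(sum(pop) / len(pop))  # the population clusters near the true mean, ~3
```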


ETA: I realized I finished this post without really clarifying how I thought I was disagreeing with anyone.  I think where I disagree with scientiststhesis is that I think “ideal Bayes always works” is only a useful statement when you’re talking about really ideal Bayes, where your sample space includes every computable hypothesis.  Outside of that context, which never shows up IRL, we can’t even say that Bayes would be best even if we had perfectly formulated all the information we have into a prior.  If it’s not a prior over all computable hypotheses, it’s still just a truncation, and it might be a badly-behaved truncation.

(via scientiststhesis-at-pillowfort)

tohoya:

tohoya:

su3su2u1:

slatestarscratchpad:

su3su2u1:

slatestarscratchpad:

su3su2u1:

nostalgebraist:

slatestarscratchpad:

nostalgebraist:

Semi-serious question for people who have snorted more Robin Hanson in their lifetimes than I have: if I have preferences that are read as virtuous in my community, how do I credibly convey that I actually like the things I prefer?

That is, how do I signal, to the sort of person who talks about “signaling” a lot, that while I may be signaling, I’m not “just signaling”?

I may be misunderstanding your question, but on the off chance I’m not: for signals to be credible, they should be things that are easier when you have the trait than when you don’t.

For example, all of the things that people do to signal wealth (go to nice restaurants, have nice cars, etc) are easier if you’re rich than if you aren’t. You can still do some of them if you’re poor (the guy who saves up to take his date to a nice restaurant so she thinks he’s classy) but the rich can do it consistently and with less pain - if two people care about signaling wealth equally much, the rich person will end up with the better-looking signals

(if the rich person is spending just as much effort saving up to impress his date by taking her to a nice restaurant, he can choose one heck of a restaurant).

The same is true of signaling intelligence. A stupid person can read one or two books that they can then bring up at every opportunity, but it would be hard for them to have read as many books as a truly intelligent person unless they’re far more committed than that person is.

So signals are cheaper (ie you can get more signaling per unit effort expended) if you actually have the quality.

A “signal that you’re not just signaling” doesn’t make sense - if it existed, everyone (including the people who are just signaling) would send it. Instead, you signal much the same way as everyone else, but you do it better, because if everyone’s applying equal effort then your effort goes further.

Also countersignaling, I guess.

Neither of your examples is strictly about preferences, though, which is what I’m concerned with.  With wealth or (arguably! ha!) intelligence there are more concrete tests that could be done.  With preferences I can be like “look, I like this kind of book, look at how I spend a lot of time reading it, why would I do that if I didn’t like it” and people will still say that no one really enjoys that sort of book and it’s all signaling.  I am actually at a loss for how to proceed here.

(The same goes for any case where there is nothing concrete to signal except your own enjoyment.  It’s particularly bad for activities that tend to be solitary, like reading, or to involve no one but a single partner, like sex – I can be grinning the whole time, but John Q. Signaling Theorist isn’t going to see that.)

Yeah, I guess the reframe would be “how do I reliably signal that I’m a rich person who actually enjoys fancy restaurant food.”  Everyone just assumes I’m eating pretentiously because I’m rich.  

But I’m not huge on signalling theories in general, I think they are overfit and lack predictive power.  Everything is signalling, and knowing that everyone is signalling doesn’t help you predict what someone is actually going to do. 

I think that’s Fallacy of Gray. Even if “everything is signaling” (a claim Hanson doesn’t make - I’ve seen him say “most things are signaling”), some things are very much more signaling than others.

Can we agree that there are some things that are definitely best explained by signaling theory? Like rich people getting $100,000 watches and wearing them conspicuously at big parties? I worry that when you say “signaling theories lack predictive power”, you’re ruling out all of the obvious signaling that has excellent predictive power as “not the type of signaling theory we’re talking about” and instead No True Scotsmanning yourself to only the kind of signaling that doesn’t (assumed to be some kind of caricature of LW or Robin Hanson)

Just to give an example of how signaling theory is useful, when I was younger I got really upset that my family would give me gifts on holidays instead of money - the obvious argument being that a $50 sweater is strictly inferior to $50 which I could spend on a sweater or on something else I wanted more. I would constantly get into fights with my parents about this, they would never be able to give a good explanation, and I would suspect that they were too stupid to understand this obvious point. Once I learned signaling theory, where my parents want to signal that they know me and care about me, this made more sense.

Signaling theory only “has no predictive power” if you’re a socially well-adjusted person who naturally imports your knowledge of social rules and niceties into the discussion, at which point you find that signaling theory doesn’t explain anything except what’s “obvious” like how you should give women flowers on their birthday to show you care, and wear ties to work, etc, etc, etc. If you aren’t born with that particular skill, then without signaling theory the world makes no sense.

I think your watch example also isn’t great on predictive power- it’s a good post-hoc narrative.  I’m skeptical of post-hoc narratives because you can use them to fit any explanation.  

I recently attended a party hosted by the head of hospital medicine at a large university program.  The people attending can largely be broken into 3 categories- 1. wealthy, late career doctors who are heads of various departments. Easily making 500k+ a year.  2. Early/mid career doctors. Solid attending salaries (200k-300k or so), probably some student loan debt. 3. Residents with lots of debt and comparatively low salaries.  

My wife pointed out that between half and three-fourths of the women in only 1 of these categories had an ostentatious luxury item similar to a watch (no woman we noticed in the other two categories had a similar item).

Can you use the signalling model to predict which group?  I suggest you cannot- the signalling explanation could predict ANY of those groups having the item.  

If it were 1, you could say “the richest people obviously signal their wealth.”

If it were 2, you could say “the people who haven’t quite made it find it most important to signal their status.” 

If it were 3, you could say “signalling is forcing these poor residents to waste hard earned money in order to fit in aspirationally with a social group.”  

Care to take a guess?  What sort of odds would you put on that guess? 

Similarly, my family makes a routine habit of giving money for Christmas- are we countersignaling? What about cultures where money is the norm?  I contend cultural markers are tremendously more predictive than signalling explanations.  Signalling is just a nice after-the-fact narrative.  

Maybe we’re disagreeing on exactly how complete something has to be, in the absence of further data, to form a model?

Compare to racism (I know, I know, I couldn’t think of a less controversial example on short notice). If I assert “racism exists and explains things about society”, you could fairly say:

“I went to a party with some rich people, some middle-class people, and some poor people, and I heard a lot of racist comments from exactly one group. Which do you think was more racist? How sure are you?”

I wouldn’t be very sure. I could think of arguments for either side - maybe poor people are racist because they’re uneducated failures who need someone to look down on? Or maybe rich people are racist because they’re country club snobs who consider themselves too good for “those people”. Maybe middle-class people are racist, because the poor have enough contact with minorities to humanize them, and the rich are separate enough from minorities not to care about them, but the middle-class wants to distance themselves from them so they don’t seem poor.

Indeed, my guess is that the question “are rich or poor people more racist?” can’t be answered on that granular a level, and that in different situations and with different types of racism and in different societies we would find different answers.

Likewise, I could say “My company makes extra sure to hire minorities - are we doing reverse racism?”

None of that means that racism can’t possibly be useful as an explanation. Racism may not have “predictive power” in the sense of telling me whether a rich white person or a poor white person will be meaner to blacks, or whether the Chinese are more likely to hate Jews than the Irish are to hate Mexicans - but every so often you hear about a Klansman shooting a guy, and you think “Yeah, this seems like that racism thing”. Then you can study exactly how racism plays out in specific situations and make some vague predictions about it, although it won’t necessarily apply to every party.

I feel the same way about signaling. Just like “some people act in a way that looks racist” is much more obvious than predicting the intricacies of who does it more, so “some people do something that looks like conspicuous consumption” is more obvious than predicting exactly how it will manifest.

(your story about the doctors is interesting because it suggests there is something going on - clearly people of a certain class like status symbols and people of other classes don’t. When you say “there’s a specific cultural explanation”, I would ask you to take it further - why did the culture develop in this way? What factors cause a culture to develop in this way rather than in other ways? I would be surprised if signaling had nothing to do with it)

(I predict it was the mid-salary people with the status symbols, but I have low confidence. Interested in hearing the answer.)

It was the residents, and they were all carrying specific very expensive hand bags.  But, in this case it might well not be signalling - I later found out via my wife’s facebook stalking that one of the residents got married recently, had a ton of bridesmaids, and gave expensive hand bags as a bridesmaid gift.  So maybe it was signalling to the other residents “look, I got to be in the fancy wedding.” (See, anything can be signalling.)

Anyway I want my models to predict something - “racism exists” won’t predict whether rich people or poor people are more racist (it has nothing to say on the matter).  It might instead predict the outcome of job interviews between various candidates of different races, but let’s not get into a race discussion.

The signalling models all seem too weak to make any concrete predictions because they are very vague.  Between signalling, counter-signalling, counter-counter-signalling, etc., I can craft a signalling narrative for literally any data or behavior you give me.  It’s not even wrong; it’s a just-so story.

I’m deeply suspicious of explanatory narratives in general, because I’ve had too many experiences where I do some analysis for work and I say “oh, that result makes sense because of X, Y, and Z.”  And then a day later I find a bug, and the real result turns out to be just the opposite.  But then I say “oh, that result makes sense because of A, B, and C.”

But claiming that all of those things are signaling DOES allow one to avoid anthropomorphising humans by claiming that all such actions are either cynical or driven solely by evolutionary imperatives.  It’s frustrating when they do so (and when challenged, they’ll frequently retreat to the motte that “it’s descriptive, not normative,” despite their claims having little logical but much normative content), because, as you say, one can construct a signalling narrative based on essentially any data.

Some further thoughts: I don’t think signalling is entirely worthless as an analytical tool, but I do think it’s overused by rationalists.  Ironically, I think much of the appeal of signalling theory can be explained by signalling theory - taking people’s stated motivations at face value signals naivety, while an ability to suss out their hidden motivations signals competence and insight.  This is true whether or not the Occam’s-razor-approved face-value interpretation is more likely than not.


I also think signalling boosters have a particular blind spot when it comes to the rhetorical effect of their prescriptions, and to the reasons they might want the explanation to be signalling, which corrupts some of their thought processes.  Take the signalling explanation of the fact that most charitable donors are not Effective Altruists.  The first explanation is that they’re donating primarily in order to signal, and donating to Make-A-Wish or the American Cancer Society makes a better impression on their peers than the Against Malaria Foundation.   The more charitable interpretation - and in my opinion, the likelier one - is that people generally give to causes that are more sympathetic, affect people most like them, or give the most warm fuzzies.  Then they’d donate to Make-A-Wish or the American Cancer Society because it’s much easier to imagine oneself as a young cancer victim in America or a kid with a terminal illness than as someone dying in Africa.


But the signalling explanation is much more rhetorically powerful, and psychologically satisfying.  Those non-EA people are only cynically trying to signal their status, not legitimately help poor people.  We know better.  We’re better than they are.


In general, I think in this sphere people need to have an acute awareness of how much cynical explanations feel good, and discount their opinions appropriately.

[before I begin: I would really like to snip everything but the last reblog off of this gigantic post, but tumblr now doesn’t want me to be able to do that.  any advice about what to do in this sort of situation would be welcome]

I think there’s also another kind of dangerous appeal to signaling theory, but it’s not a cynical one – if anything it’s the opposite of cynical.

I think signaling theory appeals to some people because they just don’t understand a lot of what people do.  (I am one of the people who feels like this a lot.)  Often the behaviors that aren’t understood are the sort of things it’s rude to outright ask people about, so you just sort of muddle through life being baffled all the time.

And signaling provides a seemingly great general cure to this malady, because signals 1) can be arbitrary and 2) should be costly.  If you don’t understand a particular sort of speech, you can just say “oh, their ingroup has arbitrarily chosen that as a shibboleth and they’re indicating that they’re part of the group.”  If someone does something that seems actively counterproductive, you can make up something they might be signaling, and then say it’s a costly signal.  Everything incomprehensible is wiped away in one fell swoop.

This is nice, because it makes people’s behavior seem at least comprehensibly motivated, if perhaps not honest or moral.  It assuages that persistent feeling that everyone is always making stupid mistakes all the time (and the consequent persistent feeling of guilt – “who am I to judge all these people?”).

The problem, of course, is that this is general and unfalsifiable and, even if it works well as therapy, won’t work well if you try to use it predictively.  Moreover, it shields you from having to realize how different other people might be from you – that their “mistakes” might be the result of vastly different preferences, that what would be “costly” to you might not be “costly” to them.  (The signaling theories of artistic taste, say, allow one to avoid having to deal with the idea that some other human might have a mind alien enough to actually like that thing you hate.)

(via academicianzex)

Aesthetics are moral judgments →

perversesheaf:

nostalgebraist:

perversesheaf:

ogingat:

ozymandias271:

ogingat:

shadowpeoplearejerks:

“The art blog Opulent Joy taught me to appreciate the soft textures; when I realized ‘oh! he’s appreciating a broader power spectrum than I am!’”

I clicked over to the article and found that gem up there.

I have a really intense and immediate reaction to out of place mathematics in cultural writing. Or mathematical concepts as metaphors. They automatically put me on the offensive for some reason.

Well, for one, it’s basically an explicit admission that the person speaking is not willing to address the new area of discourse on its own terms, and still feels entitled to speak about it. With math specifically, it is often a sign that someone is trying to turn their unreflective viewpoints into something rigorous, which suggests that they’re ready to systematize a set of intuitions before actually trying to figure out if they’re right.

Compare e.g. a LessWrong poster saying that they’ve “bumped up the probability that they’ll do that by 20%”.

I feel like saying that using math metaphors in cultural writing is “an explicit admission that the person speaking is not willing to address the new area of discourse on its own terms” is a fully general argument against metaphors.

Like, there’s a certain sense in which this essay is not engaging with radical feminism on its own terms; it’s using a Hayekian framework to talk about radical feminist concepts. It’s also a really fucking good essay. I don’t think it makes sense to rule out that sort of thing– concepts from one field often provide insight into others.

Okay, this seems right.

I think the best way to express my unease with that passage is with a poor analogy to Bernard Williams’s “one thought too many” essay. 

Suppose I’m in an emergency situation and need to choose between saving the life of my wife and the life of a stranger. Williams thinks there’s something wrong with a person who takes a moment to weigh the utilities, or consider the categorical imperative, or ask themselves which action would be more virtuous. A moral person just saves his damn wife. A morally good person “need not, and perhaps should not, be thinking about what is morally justifiable all the time.” To calculate utilities in that scenario is to have “one thought too many.”

(Note that the argument is not that making a snap judgment might make you practically better off, as you avoid wasting time. The argument is that if you have to explicitly think through what you would do in scenarios like that, you don’t understand morality.)

Similarly, if you look at a painting and go, “Ah! This brushwork represents a broad power spectrum. Based on that piece of information, I’m going to decide to think this painting is aesthetically pleasing,” then I would be suspicious of your ability to appreciate or understand art, whatever that means. If you need to invoke mathematical constructs to recognize beauty, then there’s something wrong with your understanding and appreciation of beauty.

That may not be precisely what Constantin is saying. In fact, I don’t think it is. But it’s in the general ballpark and perhaps someone more articulate could better describe the intuition I have.

I’m not sure I understand your point.  It seems important to me that Constantin is not describing her immediate judgment of a painting, but rather a deliberate choice to acquire a new taste.

In the realm of morality, an analogy might be something like: “I had certain moral stances towards people because of certain feelings about people, but someone convinced me that there isn’t any important difference between people and animals that should make me feel these ways about people and not animals, so I’ve tried to cultivate the same feelings about animals.”  (Obviously this wouldn’t work for everyone, but it could conceivably happen to someone.)

One can deliberately choose to acquire something which then becomes second nature when it is successfully acquired.  One can decide to develop a taste for a certain kind of painting, and then find oneself simply enjoying that sort of painting without “one thought too many,” just as one may decide to care more about animals and then find oneself simply caring about animals without “one thought too many.”

This deserves a response, so I guess I should give it one before I get off this accursed website.

I realize she is not literally saying “Ah! This brushwork represents a broad power spectrum. Based on that piece of information, I’m going to decide to think this painting is aesthetically pleasing.” That’s why I said that’s not what I think she’s saying. But the same sort of mechanical mindset seems present. There still seems to be a few thoughts too many.

I used to only like paintings with very crisp, precise textures, rather than the cloudy, fuzzy textures that show up in John Singer Sargent or Turner paintings.  The art blog Opulent Joy taught me to appreciate the soft textures; when I realized “oh! he’s appreciating a broader power spectrum than I am!” I immediately noticed that his aesthetic was like mine, but stronger — more general, more nuanced, and therefore an upgrade I would like to make.

Some more stabs at this idea.

She’s talking about choosing between and upgrading aesthetics like one might talk about cars.

I have nothing against fuzzy paintings. But if your reason for getting into them is, “Fuzziness represents a broad power spectrum, my taste corresponds to a limited power spectrum, wider power spectrums are better [more powerful], therefore I ought to train myself to like fuzzy paintings,” then that’s odd, to say the least. I hesitate to call it a “distanced” or “instrumental” way of looking at art, but those are the first words that come to mind, even though they don’t fit. It sounds like she’s playing Final Fantasy and decided it was time to upgrade her “aesthetic” stat. It’s not a mindset of “art is cool, I’d like to experience and appreciate art better.” It’s (especially in the broader context of the essay), “People ought to be morally judged on the things they value, therefore I should make sure my aesthetic is as high-level and powerful as possible.”

Now, there’s a philosophical needle I have to thread here, because I sure as hell judge people for their tastes. (Though I don’t judge them morally, I just think they have bad taste.) Like, if you list HPMOR and Ayn Rand as your favorite books, you don’t understand literature. Sorry. And further, if you want to learn to appreciate art because you think it makes you a more well-rounded person or something like that, cool, I’m down with that. Yay art, yay being well-rounded. So I need to explain why I’m OK with judging people on their tastes and wanting to improve your appreciation of art because you think it makes you a better person in some sense, yet I’m not OK with her holding her version of that opinion. I don’t think I have the time or inclination to do that right now. But it has something to do with the way she judges people, the way she conceptualizes aesthetic appreciation, and the sort of mechanical, quasi-instrumental thinking she exhibits.

There’s also a palpable Randian undercurrent.

I suppose I should clear up that part of why I responded earlier was that I have met the author of the post a few times and she does not at all strike me as fitting your characterization of her on the basis of this passage.

On one level this is beside the point, since we’re largely talking about the passage as a sort of isolated “text” taken to be representative of certain tendencies.  But since I don’t think it reflects those tendencies in this case, which is the main example that has been presented, I can’t help but feel uneasy about the broader generalization.

(It’s as though someone had presented some passage as typical of [say] “writing by women,” and I could think of various reasons why one might think this was the case, but also knew that the author was male.  An unavoidable feeling of incongruity would keep butting in throughout the conversation.)

(via perversesheaf-deactivated201508)

xhxhxhx:

nostalgebraist:

xhxhxhx:

ogingat:

nostalgebraist

Oh yeah I do stuff like this all the time! [snip]

So - and I think this is part of what rock-a-la-carte was getting at, too - this “I’m going to make sure to talk about stuff I like in really unsophisticated ways so everyone knows I really like it in the same way they really like stuff they like!” thing is like nails on a chalkboard to me. (I know we’ve argued about sophistication before.) I mean it is a truly yucky spectacle for reasons I find it hard to place. I do think it invites a lot of bad behavior from others, viz. “fucking lov[ing] science”, disdaining expert consensus, etc., but that’s sort of separate from it just seeming gross to me. (Of course being honest about why you like what you like is good.)

AGREED

twee popist cult-crit stuff is so gross

Slate, Klostermann, the NYT Style Section, David Foster Wallace, David Eggers, Jonathan Safran Foer, The New Sincerity, NPR, The A.V. Club, Gawker Media, The Atlantic, Grantland, HBO, AMC, FX

serious chin-scratching essays on AAA video games, superhero blockbusters, and premium cable television – and endless roundtables and think-pieces and episode reviews

so gross

I personally share most of these taste judgments but it seems like there’s been a big drift between this and what was originally being talked about.

As I understood krwks’ post, it wasn’t about the tone of the commentary (say, “twee”), but the fact that specific commentary about what one enjoys in a thing (which may sometimes be lowbrow) is better than generic waffle about how you’re experiencing a Great Work and look at all this Greatness wow so Enriching what a Profound Human Experience

E.g. if someone says “Rabelais is fun because he makes lots of lowbrow sexual and scatological jokes,” one could take this as the speaker trying to say “hey kids, I’m the cool English teacher, this stuff is so fun, it’s just like the hip music you kids get down to!”  But OTOH Rabelais does make lots of those jokes – that is literally what he’s most famous for – and if one is reading Rabelais and not enjoying it on that level, one could be accused of missing a lot of the point.

If I say “Don Quixote reads like a comedy about present-day nerds,” I may seem like I’m trying to be “the cool English teacher,” but then … have you read Don Quixote?  It … reads like a comedy about present-day nerds.  You don’t have to strain to see this.

More broadly, the principle is “talk about the specific kind of fun you’re having,” and if that means sometimes talking about lowbrow fun when you’re experiencing Great Works, well, that’s just because that often happens.  I’m not sure where this connects to “twee,” which if anything is a tone that shies away from the lowbrow (no way is Safran Foer ever going to be as mean or crude as Rabelais or Cervantes).

(Disclaimer: I’ve only read little bits of Rabelais; he just came to mind because I was reading about him recently and because he’s a relatively clear-cut example.  Shakespeare is the standard go-to example, but there’s a language barrier problem there even for readers in English, which won’t be a problem for readers of Rabelais in modern English translations.  As for Don Quixote, I’ve only read around 100 pages.)

(ETA: the chin-scratching thinkpieces about AAA video games bother me too, but it’s for reasons related to what I said above – I want to say “there is plenty of culture that has lowbrow fun and is also actually interesting, talk about that!”)

Oh, sure, but what you’re describing here – identifying the particular things one likes, rather than making unsubstantiated general comments – also looks a lot, to me, like unreflective glibness.

I’m for plain language and direct identification. I’d like to think that someone who talks about compositional mastery would go on to illustrate that. But I’d also prefer to listen to the person who has domain-relevant expertise and engages with the particulars over someone who does not.

(If you are an interesting person, you can substitute memoir for critical appreciation – that is what happened in Tom Bissell’s Extra Lives, a book that I wanted to throw against the wall – but I don’t think you should make a habit of talking at length about something without making an investment in that domain-relevant expertise.)

((Of course, you shouldn’t say ‘compositional mastery’ if you don’t know anything about composition, or can’t identify it in the text – that’d make you even more of an asshole, I agree – but if you really enjoy Mozart, and you want to talk about him, then you should … learn enough to talk about his technique?))


The other thing is that I don’t like “fun you’re having” as the all-purpose aesthetic mode. (I am 1,000 years old, please forgive me.) ‘Fun’ is an unreflective feeling. It has no ideational content. It has no subtlety. It’s not interesting to talk about.

And … I’ve always been dubious about attempting to communicate ‘fun’ to an audience through your criticism. If you can tell them why the work is interesting, you might lead them to enjoyment, or at least to appreciation. (“Hey, this is more interesting than I had thought.”)

They can discover whether Rabelais is better than Cervantes on their own time. Your role is to teach them something about Rabelais and Cervantes, so that their valuations and criticisms can have some weight and depth – so that when they talk about Rabelais and Cervantes, they won’t be talking out of their ass.

But when you say, “I’m having fun, here’s why you should be having fun,” you’re … not going to be effective, I don’t think. Does that ever work? (“Are we having fun yet?”)

Here’s what you’re actually communicating to your students: “I am superior to you, and your puny brains cannot handle me.” For security of identity, that might be retranslated as “I am a weirdo; look at me, I’m a high-school English teacher,” but it isn’t going to elicit a feeling of fun – or of enjoyment.

I don’t really disagree with any of this, but should clarify that everything here depends on whether we are talking about casual conversation or about stuff that is supposedly “edifying.”

Earlier in the conversation we were talking about “signaling,” which I took to mean we were talking about casual conversation.  Stuff like: what effect will it have if I present myself in a given way on tumblr (given that this is not an “educational” tumblr)?  I brought up the “cool English teacher” as an example of a sort of failure one could make in casual conversation, but if we’re talking about actual English teachers, everything is different.

(I do think – and I think I’m agreeing with you here – that there’s a frustrating trend in published cultural criticism toward basically “monetizing your casual conversation.”  The ideal is less that the critic is an expert and more that the critic is a very good tumblr poster, or the like.  I don’t like that either.  [I can have very good tumblr posters for free!])

(via xhxhxhx)

rainbowbarnacle:

crossconnectmag:

The Open-Impressionism of Erin Hanson

Inspired by rock climbing Red Rock Canyon and the southern California desert, Hanson has since spent almost a decade painting the dramatic scenery of Utah, Nevada, Arizona and California. Erin Hanson has created a unique style of her own, bringing elements of classic impressionism together with modern expressionism and adding a dash of “plein-air style.” Her oil paintings stand out in a crowd, bringing a fresh new look to contemporary Western landscapes.

ooohhhh I can feel my blood pressure dropping just looking at this

*chinhands* OuO

(Source: crossconnectmag.com, via prospitianescapee)

momothefiddler:

nostalgebraist:

ogingat:

nostalgebraist:

ogingat:

jadagul:

[snip]

[snip]

Okay, something basic about the background of Field’s program is really confusing me.

The motivation seems to come from Quine’s statement, in “On What There Is,” that 

The variables of quantification, “something”, “nothing”, “everything”, range over our whole ontology, whatever it may be; and we are convicted of a particular ontological presupposition if, and only if, the alleged presuppositum has to be reckoned among the entities over which our variables range in order to render one of our affirmations true.

Which makes sense.  If you say “every x has property y” it would be weird if you didn’t think “an x” was a thing.  (Well, it seems weird at first glance, anyway.)

But how on earth is this a problem for science?  The statements of Newtonian physics (say) aren’t assertions about things that are true for (say) all numbers.  They’re about physical variables (lengths and durations), or – if you must – predictions, or whatever.  But if I expect a theory of physics to make assertions to me about numbers, I am looking for something very strange out of it.

To be specific, I’m expecting these theories to ultimately assert things like

“for every distance d (where d could be measured numerically, if you swing that way), it is true that […]”

rather than

“for any real number r, consider a distance of r units; it is true that […]”

That is, if it forces me to believe in anything, it should force me to believe in things like distances, not in numbers.  (What would it be like for a Newtonian physicist to believe in numbers but not distances?  I envision myself sitting forever beside the inert Real Line, possessing a set of physical laws which cannot be applied because there is no space or time or mass.  “Well, this theory is a set of statements about real numbers, ultimately.”  Really?  Could have fooled me!)
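
(To make the contrast explicit in first-order notation – a rough sketch, where φ stands in for whatever the law asserts, and “Distance” and “dist” are my own illustrative labels rather than anything from the thread:

\[
\forall d \,\big(\mathrm{Distance}(d) \rightarrow \varphi(d)\big)
\qquad \text{vs.} \qquad
\forall r \,\big(r \in \mathbb{R} \rightarrow \varphi(\mathrm{dist}(r))\big)
\]

In the first form the bound variable ranges over distances; in the second it ranges over real numbers, so by Quine’s criterion only the second reading commits you to an ontology of numbers.)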

The failure of Field’s program, if you think it failed, should be taken to show that “if you swing that way” is not really appropriate here; i.e., there’s not really any other way to swing.

I think you have an overly restrictive view of which statements have ontological commitments – or at least a view that would be heterodox in some circles. Remember my natural history museum example? Say I’m a small child tugging at my mother’s shirtsleeves and whispering excitedly, “Those bones used to belong to a dinosaur in Brazil!” Here are some things to whose existence I seem to have committed:

  • bones
  • dinosaur
  • Brazil

To see why, notice how ridiculous these sentences sound:

  • There’s no such thing as bones, and those bones used to belong to a dinosaur in Brazil!
  • There’s no such thing as dinosaurs, and those bones used to belong to a dinosaur in Brazil!
  • There’s no such thing as Brazil, and those bones used to belong to a dinosaur in Brazil!
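
(A Quinean would make those commitments explicit by regimenting the sentence into first-order logic – a rough sketch, with the predicate names being my own illustrative choices:

\[
\exists x \, \exists y \, \big(\mathrm{Bones}(x) \wedge \mathrm{Dinosaur}(y) \wedge \mathrm{BelongedTo}(x, y) \wedge \mathrm{LivedIn}(y, \mathrm{brazil})\big)
\]

To affirm this, bones and a dinosaur have to be among the values of the bound variables – which is exactly the criterion from the Quine quote above.)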

But, unless a hard-road nominalization process is viable (and I gave some reasons in the previous post that people think it isn’t), you seem to want to put scientists in the position of saying “There are no such things as functions and real numbers, and the state of the world at time t2 is related to the state of the world at time t1 by [some mechanism using functions and real numbers].” Embarrassingly, I don’t know enough about how scientists would talk about these things to give a good example, and it’s not fleshed out much here.

The statement in the last paragraph doesn’t seem bad to me?  Or, it’s exactly as bad as “there’s no such thing as the number 3, and here are 3 apples.”  Which sounds strange when you put it that way, but there are people – roughly, non-Platonists of various kinds – who would defend the idea that “there is no such thing as the number 3” (because numbers are not objects, say) even though we can count things.  The details of the phrasing matter: there certainly is a number 3 in the sense that we mean something when we say “3,” but that isn’t sufficient for there to be “such a thing as” the number 3.

(Admittedly, the statement with the 3 apples isn’t a quantification.  But we could turn it into one, like “there is no such thing as a natural number, but for any natural number n, if I have n apples, then [something],” which sounds awkward at first glance, but substantively doesn’t seem any worse than the 3 apples statement.)
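
(Side note: there’s a standard nominalist paraphrase of the apples statement that avoids quantifying over numbers altogether, via numerically definite quantifiers – a sketch, with A(x) standing for “x is an apple here”:

\[
\exists x \, \exists y \, \exists z \, \big(A(x) \wedge A(y) \wedge A(z) \wedge x \neq y \wedge x \neq z \wedge y \neq z \wedge \forall w \, (A(w) \rightarrow w = x \vee w = y \vee w = z)\big)
\]

The numeral “3” never appears as a term; it gets traded for a pattern of quantifiers.  This is the same spirit as the “hard-road nominalization” mentioned above, just in a case easy enough that it obviously works.)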

ETA: in other words, it seems like the physicist’s statement is only awkward/ridiculous if Platonism is true.  But if we can just act like Platonism is uncontroversially true, then, well, goodbye Philosophy of Mathematics!

i’m tempted to argue (though i haven’t put a whole lot of thought into this) that apples aren’t “real” either; i mean i have a rough concept of a fuzzy set of fuzzy sets of mass that i call “apples” but if that counts as being A Real Thing, then why doesn’t 3?

granted, as noted in a couple recent posts of mine, i’m not sure what “Real” would mean, in the end, so maybe i shouldn’t be diving into this here

i dunno. until recently i would have considered it entirely possible to describe real states in terms of non-real concepts (negative dollars, for instance). but if distances are Actual Things, then it seems reasonable to claim that theories granting accurate predictions about distances are too? and if that’s not the case, i don’t see why it matters whether those theories have numbers in them or not – but if it is, then negative dollars seem entirely fair too (and tbh in that case i don’t see why the numbers matter either; it’s not like numbers have some intrinsic lack of realness that words don’t, such that they’d taint your whole physics or something?)

wow i had this clear concise post all planned out and then i got this instead so… sorry if it’s nonsensical

My attitude here is something like Structuralism – I don’t think it’s quite wrong to say that math has some sort of existence, but I think we need to think carefully about what sort of existence this might be, and I don’t think it’s going to look like an “ontology” in the usual sense we use for apples and the like.

The thing that has always bothered me about Mathematical Platonism is that it always gives me this sort of cartoon image of the Platonic “3” just sitting there out there in Plato’s Heaven, being the essence of “threeness” (whatever that is) and sending out its rays that legitimate our statements about it by having all its properties.

But the thing is, “3” doesn’t really have intrinsic properties apart from its role in some system or other.  (I know I’ve linked it like 4 times now but the paper “What Numbers Could Not Be” makes the argument I’m thinking of here.)  Mathematical statements are statements about certain sorts of structures that can be instantiated (or not) in physical objects in various ways; thus it seems deeply wrong to imagine that individual parts of those structures could be objects in their own right apart from the overall concept of the structure.  It’s really weird to think of, say, a given group’s identity element being a definite thing in its own right apart from the overall structure of the group.  (Does each distinct group get its own such object, or is there just one “identity element for groups” object?  More broadly, does isomorphism affect whether there are “two copies” of “the same thing” in Plato’s Heaven?  These seem like precisely the sort of issues that math was created to abstract away from, after all.)
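
(For anyone who hasn’t clicked through to Benacerraf’s paper, the concrete version of the argument: set theory hands you at least two equally good candidates for “3” –

\[
3_{\text{Zermelo}} = \{\{\{\varnothing\}\}\}
\qquad
3_{\text{von Neumann}} = \{\varnothing, \{\varnothing\}, \{\varnothing, \{\varnothing\}\}\}
\]

Both constructions make all of arithmetic come out true, but they’re different sets – “is 1 ∈ 3?” comes out false on the first and true on the second – so neither can simply be the number 3.  All arithmetic pins down is the third position in an ω-sequence, which is the structuralist moral.)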

(via momothefiddler-deactivated20160)