The deal with the math book I am trying to write is:

I’m trying to explain what could roughly be called the “methods of mathematical physics” to an audience that doesn’t need to know anything beyond basic arithmetic, not even calculus.  I’m trying to do this in a way that provides as much understanding as possible without having to actually use stuff like calculus.

The reason I’m doing this is that I feel like popular expositions of physics, engineering, and the like have not been paralleled by a popular presentation of the mathematical foundations of these subjects, so that non-scientists are able to “learn” a bunch of very complicated stuff about, say, quantum mechanics or even string theory in metaphoric terms without ever going beyond the metaphors.  You can read so much stuff about quantum wavefunctions, or light being a “wave,” without anyone ever telling you exactly what a “wave” is.  I think people would feel a lot less distant from these subjects if that barrier of metaphor weren’t there.  And I don’t really think the barrier is impassable without full mathematical detail.  If Brian Greene can “explain” string theory without real math, then it should be possible to explain what (say) Fourier analysis is without using too many actual equations.

So far I am using equations, but in a kind of stylized way where I use a lot of words in place of variables so people don’t have to learn notation.  I’m trying to emphasize how important sine and cosine waves are (for their properties as basic linear differential equation solutions / eigenfunctions of the derivative), while eliding the difference between the two, and stripping them of their trigonometric associations, so I gave them a new name: “the Bounce.”  This is all either going to be really helpful or just quixotic and patronizing and useless.  I dunno.
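The eigenfunction property alluded to above, written in the standard notation the book itself avoids, is just that sine and cosine reproduce themselves (up to sign) under differentiation:

```latex
\frac{d}{dt}\sin t = \cos t, \qquad \frac{d}{dt}\cos t = -\sin t,
\qquad\text{so}\qquad
\frac{d^2}{dt^2}\sin t = -\sin t, \qquad \frac{d^2}{dt^2}\cos t = -\cos t .
```

In other words, both are eigenfunctions of the second derivative with eigenvalue −1, which is why they appear as the basic solutions of the simplest linear oscillation equation, y″ = −y.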

(Also, it’s very ambitious and I’ll probably never get very far, but I’m having fun so far … )

what is bayesianism? we (i) just don’t know

hot-gay-rationalist:

somervta:

nostalgebraist:

OK, apparently part of what I am going to do with this sick day, while the caffeine is still convincing me I’m not really sick, is to write this post, which I’ve had in my head for a long time but still haven’t written down.

tl;dr: after a bit of personal narrative this will turn into “reasons I’m not a Bayesian” or “reasons I don’t understand why other people are Bayesians, although maybe they have good reasons and I just haven’t heard about them yet.”

Maybe 6 years ago or so, I learned the word “Bayesian.”  This word seemed to refer to a particular philosophical position that was believed by a lot of smart people (note: me from six years ago was much more keen than current me on investing great weight in concepts like “smart people”).  Some of these people were bloggers I was reading at the time, but some of them were academics.  I knew that there were rival positions, like something called “frequentism,” but all the smart people seemed to be Bayesians.  I wanted to be a Bayesian too, but I told myself that first I should probably figure out what Bayesianism was.

I only properly tried to do this a number of years later.  I had read some popular resources about Bayesianism, but they weren’t very satisfying, so I checked out John Earman’s academic book “Bayes or Bust?” from the library and started reading it.  I didn’t get very far.  Partly this was because I was trying to read it during my first semester of grad school while taking a full courseload and studying for imminent quals.  But partly it was because Earman’s book was full of exceedingly complicated and subtle arguments both in favor of and against Bayesianism.  The amount of heavy shit — both mathematical and philosophical — I’d have to think through before reaching a position on Bayesianism was very intimidating.

But if this was the state of affairs, why were there so many Bayesians?  Had they passed through these trials by fire unscathed?  Did they have lower philosophical standards than the ones that I, perhaps quixotically, was trying to maintain?  Was there a middle road between the pop presentations, like Eliezer Yudkowsky’s — which weren’t nearly serious enough for me — and the presentations like Earman’s, which were so serious they scared me off?  And again: if this was all so hard to make sense of, whence all these Bayesians I kept meeting?

I still don’t know the answer to any of these questions.  Below, I’m going to try to talk a little bit about what Bayesianism appears to be, to me, and why it doesn’t seem to be intuitive (judging by the arguments in its favor that I think I actually understand, which is not all of them).

Are you familiar with things like Cox’s Theorem and Jaynes’ derivation of probability theory?  In other words, people start with what seem to be fundamental principles of rational thought/belief and then prove, based on these (and sometimes certain other) assumptions, that you must use probabilities or something equivalent to them.

I didn’t read all of this because it’s enormous; I just read parts and skimmed the rest, but yeah, without grokking Cox’s Theorem one wouldn’t necessarily see how this makes sense.

Furthermore, there was a lot of talk about “what probability is” (unless I misunderstood those parts), which is… a very silly thing to ask?  Probability isn’t anything; that’s like asking what a blorgh is.  The division between frequentists and bayesians is exactly over what meaning one should ascribe to the word “probability.”  When I talk about probability, I’m talking about subjective degrees of belief that obey the Cox axioms.

And then this person said that probability distributions in cases like the examples listed are “physically embodied” by those frequencies, but… that’s assuming the consequent.  If you assume that a frequency is a physical thing (it’s not) and that probabilities should exclusively talk about that, then sure, bayesianism makes absolutely no sense.  But since it turns out that probability-as-frequencies is a special case of probability-as-logic, I don’t know why anyone would be remotely interested in talking only about that subset.

And there was also some objection about how subjective degrees of belief are not a good description of human reasoning and, well, yeah?  I mean, they really aren’t; probability-as-logic is prescriptive, not descriptive.  It’s not supposed to say how we do reason, it talks about how we ought to.

But as I said, I haven’t actually read the whole thing because it’s long, so I might be misrepresenting OP’s position here. I’ll read it later and reply on my main blog at length, possibly pointing to some other sources but… well, unless you grok Cox’s Theorem or at least believe that the axioms lead uniquely to the conclusion, this may not make much sense.

A few points:

  1. The only thing that Cox’s theorem does, AFAIK, is to convince us that if we have a set of synchronic “plausibilities,” they should obey the probability axioms.  (It does the same work as synchronic Dutch book arguments, which I mentioned briefly in the post.)  In particular, it doesn’t say anything about anything diachronic, like conditionalization.
  2. If you want to see someone more qualified than me making the same distinctions I am making here (synchronic/diachronic, Cox doesn’t get us conditionalization), so that you have evidence I’m not just some crackpot and thus have more reason to read my post, see Jonathan Weisberg’s “Varieties of Bayesianism,” available here (particularly section 3).
  3. The justification of synchronic probabilism (i.e. what Cox purportedly does, though not everyone agrees that it actually does so) is the least questionable aspect of all of this to me.  I’m willing to accept intuitively that if I should be assigning a “plausibility” to every proposition, then my “plausibilities” should obey the probability axioms.  What I am less sure of is, first, that I should be assigning plausibilities, and second, that I should update these plausibilities by conditionalization (the “Bayesian update”).
  4. I’m confused by what you mean when you say frequency is “not a physical thing.”  If I have three lemons and one apple in front of me, surely it’s a physical fact that ¾ of the things in front of me are lemons?  (Yes, it’s defined in a somewhat abstract way relative to the “brute” physical facts, but so is the fact that “there is an apple in front of me”; the only real brute physical fact is a bunch of subatomic particles or strings or whatever, and I don’t see how “¾” is less of a real physical thing than “apple.”)
  5. The reason I talk about “what probability is” in the post is not that I think it has some real pre-existing definition in the world that we simply have to find.  That would be silly!  I do it because I’m trying to remind the reader that frequencies in physical samples are not obviously the same sorts of objects as subjective probabilities, and so an intuitive update rule for the former case is not obviously intuitive for the latter.  Of course if you use the conditionalization rule in all cases, then my use of it in only some cases must look parochial and silly.  But my point is that it needs to be shown that the same conditionalization rule does work in all cases!  What matters is not what is most general, but what is correct.  “All ants are insects” could be derived as a special case of “All animals are insects” if one believed the latter, but that doesn’t make the latter any less false.
  6. I agree with you that I wasn’t very clear on the prescription vs. description issue.  Maybe I’ll write a follow-up post about that?  In short, what I was trying to say is that if we begin with an ideal of reasoning that is very far from what humans naturally do, then our approximations of it are likely to be very poor approximations.  This makes the “well, at least we’re approximating the right thing” argument unconvincing to me.  I wonder whether there might be an ideal of reasoning more natural to us (perhaps not involving assigning plausibilities to all propositions) from which we could derive, through a parallel set of arguments, another equally ideal system which we would actually be able to approximate well.
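For concreteness, here is a minimal sketch of the conditionalization rule (the “Bayesian update”) the points above keep referring to.  The fruit scenario and the specific numbers are hypothetical, chosen to echo the lemons-and-apples example in point 4:

```python
# A minimal sketch of conditionalization (Bayes' rule): posteriors are
# priors reweighted by likelihoods, then renormalized. The hypotheses
# and numbers below are invented for illustration.

def conditionalize(priors, likelihoods):
    """Return posterior probabilities P(H | E) given priors P(H)
    and likelihoods P(E | H)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())  # P(E), by the law of total probability
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about a bowl of fruit, equally plausible a priori.
priors = {"mostly_lemons": 0.5, "mostly_apples": 0.5}
# Evidence: we draw a lemon. Likelihood of that under each hypothesis.
likelihoods = {"mostly_lemons": 0.75, "mostly_apples": 0.25}

posteriors = conditionalize(priors, likelihoods)
```

Note that this is exactly the synchronic machinery plus one extra rule: nothing in the probability axioms themselves forces you to adopt `conditionalize` as your diachronic update, which is the distinction points 1 and 3 are drawing.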

(via hot-queer-rationalist-deactivat)

On a whim, I’m trying to write a popular math/physics exposition book.  So far, I’ve written 8 pages, including a preface and the start of a first chapter.  I’m not sure if it’s any good, but I’m having fun …

Yesterday afternoon I came upon someone’s paper that looked like it had done something very similar to what I’ve been doing in research lately.  I immediately assumed, without doing more than glancing at a few equations, that everything I am doing has already been done and is thus worthless, and spent half an hour worrying about this.  I was too freaked out to be able to actually look at the paper, and simply let my imagination fill in what it “probably” contained.

This morning, I finally looked a little more closely at the paper and it turns out it’s … not actually at all like what I’m doing.

I am not exactly an ideal academic.

This is, somehow, the Math Success Song of the moment

I’ve been working on a “tricky” problem for the last week or so.  On Tuesday I spent around 8 hours working on it, getting lost in the complexities of my proposed solution methods, and then at the end of the day gave up in defeat and wrote 4 pages of LaTeX explaining why none of my ideas could possibly work and how something even more complicated must be necessary

I wasn’t even planning to work on the problem at all today, and

While pacing around my room an hour ago

I came up with a solution so simple and straightforward that I can’t imagine why it wasn’t utterly obvious before

It took 1.5 pages of handwritten notes to write it out, and I have pretty big handwriting

This is one of the reasons I like math and physics even when they’re hard: because on some level I always suspect that what appears to be “difficulty” and “complexity” is an illusion, and if I can only find the right perspective, nothing will ever be hard or non-obvious.

(This also means that the better you understand any part of them, the less impressive that understanding feels.  I wonder if anyone really feels that they’re actually good at math?  As opposed to “I can do anything that’s trivial, and triviality ends where my difficulties begin”)

Basic math education emphasizes solving “math problems.”  On the basis of this, it’d be reasonable to conclude that when people say that physics and other parts of science involve “math,” and that famous scientists like Einstein were good at “math,” what they’re referring to is some sort of extremely difficult problem solving.

Really, though, most of the ingenuity in doing mathematical science isn’t in solving problems, it’s in coming up with problems that can be solved at all.  Most of the time, when you try to come up with an accurate mathematical description of something, it’s too complicated to be solved, as a “problem,” using any standard method.  The range of problems that can be solved at all is a tiny and kind of oddly shaped sliver of the vast range of possible problems.  So people end up contorting themselves into knots coming up with special approximations and simplifications that turn realistic descriptions into solvable ones without losing some basic core of realism.

A typical theoretical advance is a way of twisting your point of view until a real situation looks like one of the very few sorts of situations that we have any mathematical tools for dealing with.  All we have is a hammer, and sometimes we find an ingenious way of arguing that, from some point of view, something we’re interested in is sort of like a nail.

It’s a bit like being a person who can only make decisions if they’re phrased in terms of a certain kind of metaphor – like, say, chess metaphors.  Imagine that you could never do anything unless you’d found a way to think of your situation as a chessboard configuration, and your possible choices as “moves.”  A very small range of activities (packing suitcases, maybe?) would be relatively straightforward.  Many others would be very hard to deal with, and much of your progress would involve finding specific special cases where something in the real world is at least sort of like a chess move.  And some parts of life – some large and possibly very interesting or important parts of life – would be inaccessible to you, simply because they’re nothing like chess, or at least not in any way you’ve thought of yet.

Visual Proofs

bloodredorion:

1/4 + 1/16 + 1/64 + 1/256 + … = 1/3

[image]

1/3 + 1/9 + 1/27 + 1/81 + … = 1/2

[image]

1/2 + 1/4 + 1/8 + 1/16 + … = 1

[image]

1 + 2 + 3 + … + n = n(n + 1)/2

[image]

1 + 3 + 5 + … + (2n − 1) = n²

[image]

a² + b² = c²

[image]

Citation (source):

Nelsen, R. B. Proofs Without Words: Exercises in Visual Thinking. Washington, DC: Math. Assoc. Amer., 1997.
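The pictures are the proofs, but the identities are easy to sanity-check numerically.  A quick sketch (partial sums as a check, not a proof; variable names are my own):

```python
# Numerical sanity checks for the identities in the visual proofs above.
# Partial sums only approximate the infinite series, so we compare
# against the claimed limits with a small tolerance.

def geometric_tail(r, terms=60):
    """Sum r + r^2 + ... + r^terms, approximating r/(1 - r)."""
    return sum(r ** k for k in range(1, terms + 1))

assert abs(geometric_tail(1 / 4) - 1 / 3) < 1e-12  # 1/4 + 1/16 + ... = 1/3
assert abs(geometric_tail(1 / 3) - 1 / 2) < 1e-12  # 1/3 + 1/9  + ... = 1/2
assert abs(geometric_tail(1 / 2) - 1.0) < 1e-12    # 1/2 + 1/4  + ... = 1

# The two finite identities hold exactly for every n; spot-check n = 100.
n = 100
assert sum(range(1, n + 1)) == n * (n + 1) // 2            # 1 + 2 + ... + n
assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2   # 1 + 3 + ... + (2n - 1)
```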

(via thededekindadafunction-deactiva)

I’m being trolled by calculus of variations and I LOVE it

I wish my brain had some sort of add-on chip that made it as easy to do an integration by parts and see its implications as it is to do things like “seeing”

There are certain math contexts where so many things are actually identical because of integrations by parts, and it’s never going to be instantly obvious, because I always need to take a moment to check whether the boundary terms vanish.  It’s like the universe is mocking me for not having a built-in system to check for me, so I could really see the identities rather than just derive them.
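The identity in question, in standard notation: for functions $u$ and $v$ on an interval $[a, b]$,

```latex
\int_a^b u \, v' \, dx \;=\; \bigl[\, u v \,\bigr]_a^b \;-\; \int_a^b u' \, v \, dx .
```

When the boundary term $[uv]_a^b$ vanishes (e.g. for periodic functions, or functions that decay at the endpoints), this becomes $\int u v' \, dx = -\int u' v \, dx$: the derivative can be moved from one factor to the other at the cost of a sign, which is exactly what makes so many superficially different expressions secretly identical.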