
nostalgebraist:

Is there any interesting (i.e. with non-trivial properties) way of defining metrics or measures over sets of differential equations?  (Got onto thinking about this bc of the fine-tuning in cosmology thing, and wondering if there is any way to talk about a law [i.e. equation] being more or less fine-tuned, but now I’m just curious in general)

I think I failed to communicate what I was going for in this post.  Some rambling, which may or may not help clear it up:

Another way of putting my question is, “what would a ‘space of differential equations’ be like?  Is there a way this concept could be interesting / non-trivial?”

It seems like the big question here is “what do we mean by ‘differential equation’?”  I don’t mean ODE vs. PDE or something; let’s just say they’re ODEs.  So, is “the equation” the set of solutions (or {initial data: solution} pairs)?  Is it the actual string of symbols we write on paper?  Something in between?

The “trivial” option is defining the equation by its solutions, in which case we are just left considering a space of tuples like (v, f), where v is the initial data (a vector) and f is the solution (a function).  Then a “space of ODEs” would just be some banal analysis doodad.

But that doodad wouldn’t look anything like our colloquial notion of “ODE space,” where say we categorize equations into linear or nonlinear, look at eigenvalues of linear equations, etc.  The hope in making a “space of ODEs” would be that some of the structure in this colloquial concept could be formalized, and abstracted away from the symbol strings we write down on paper.  (Thus we might, e.g., be able to construct some formal version of the colloquial notion “add a new term to the equation,” but without actually counting terms in a written equation, where you can often change the number of terms by algebraic manipulations.)

When I asked about measures and metrics in the OP, the idea was that these would be structures on a hypothetical “space of differential equations.”  But the more fundamental question is, “can we make such a space interesting, in the sense of the previous paragraph?”


Here is an example of the kind of process I’m imagining.  Imagine we’ve never heard of linear algebra, but we’ve seen systems of linear equations, and we start thinking about what a “space of systems of linear equations” would look like.

So, we are considering problems Ax = b.  Let’s say A is square and invertible, so we’re only looking at problems with unique solutions.  In the hypothetical, we don’t know linear algebra and are mostly used to thinking about these problems as wholes – i.e. we may have a concept of a matrix “A,” but we think of it as always paired with a vector “b.”

When we ask how to parameterize our space, we might first think about the entries of A and b, but soon we will be clever and realize that all our equations could equivalently be written in “solution form,” x = A^{-1} b (where our concept of “taking a matrix inverse” is “solve a problem”).  In other words, perhaps each problem just is its solution: two problems Ax = b with the same solution are really one problem, and our bad notation just makes it look like two.

This is the “trivial” option.  It gives us a “space of problems” that is straightforward (just the space of possible solutions), but it throws away all of our ideas about what problems are.
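The identification described above is easy to check numerically. A minimal sketch (the matrices here are hypothetical, chosen purely for illustration): two systems whose written forms differ by a rescaling collapse to the same point of “solution space.”

```python
# Two "different" problems A x = b with the same solution x: rescaling the
# system changes the symbols on the page but not the solution, so under the
# "trivial" identification they are a single problem.
# (Matrices and vectors below are arbitrary illustrative choices.)

def solve2x2(A, b):
    """Solve a 2x2 system by Cramer's rule; assumes det(A) != 0."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    x1 = (b[0] * a22 - a12 * b[1]) / det
    x2 = (a11 * b[1] - b[0] * a21) / det
    return (x1, x2)

A1, b1 = ((2.0, 1.0), (1.0, 3.0)), (5.0, 10.0)
A2 = tuple(tuple(3 * a for a in row) for row in A1)  # same problem, rescaled
b2 = tuple(3 * v for v in b1)

# Both written forms name the same solution, (1, 3):
assert solve2x2(A1, b1) == solve2x2(A2, b2)
```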

On the other hand, we could study the structure of the problem, and then we would realize that there is a lot going on in A if we look at it apart from any specific b (or x).  Then we’d develop linear algebra, which can (in various ways) formalize our intuitions about what makes problems similar or different, and can reveal a lot of structure beyond the mere numbers we write down on the page.


So I guess the analogous thing would be spaces of differential operators?

American customers are very impatient at the beginning of their wait, but their patience stabilizes after approximately 10 seconds.

A new book traces a steady stream of urine through centuries of canvases, fountains, and frescoes.

I’ve posted here a number of times about the recurring dream plot where I re-enroll in high school or college of my own volition, and can’t explain why I did this, and it’s embarrassing and feels reflective of some psychological regression or failure to launch.  I’ve even posted before about how this is such a staple plot feature that it’s sometimes just there, in the background, while the dream focuses on other things.

Last night there was a new variant.  If it’s a dream where I’ve re-enrolled in college, one of two things happens: either I start over again as a new freshman, or I just have another senior year because of all my old credits.  (I think the logic there is that I already have all the qualifications for graduation – since I, in fact, graduated – so when I re-enroll, I’m automatically eligible to graduate at the end of the year, no matter what I do.)  In the senior version, I typically just try to graduate as a physics major again (getting a pointless second physics Bachelor’s), and have to write another senior thesis (several dreams have involved my 2nd thesis going badly).

But this time, I managed to re-enroll as a senior and get my second degree in … chemistry.  Despite never having taken a single chemistry class in college.  I had done this through some sort of “trick,” like writing a physical chemistry thesis that was mostly just physics, and in the dream I did feel a bit guilty about it.  But rather than being an embarrassing waste of time like usual, this felt kind of badass – like, how bad could I really feel about getting a degree in a totally new subject in just a year?

It was like the being that writes my dreams made a mistake – this was clearly supposed to be a backstory to an anxiety dream, perhaps concocted to provide a narrative explanation for some pre-existing anxious emotion, and yet the explanation didn’t work because I was just like “oh, I did that?  Cool.”  (I don’t remember the rest of the dream story at all, but there’s a faint halo of bad vibes surrounding the hole where a memory would be, so I’m pretty sure the dream-author just came up with something else.)

chill

There is something really wonderful about the word “chill.”

Long long ago, I was telling a friend that I didn’t like certain of my parents’ behavior patterns, and doing so in very formal nerdy language full of phrases like “behavior patterns,” and after a lot of verbiage he just replied, “you wish they were more chill.”  And I said, “huh, yeah,” and he said something about how it’s great that colloquial language can be so efficiently expressive, and I nodded along, and it seemed like one of those feel-good sentiments that’s true but not all that deep, and that was that.  But maybe it was deeper than I gave it credit for?

So, a few things about “chill.”  First, the boring one: it’s a positive thing.  Describing someone as “chill” is almost always praise, and when someone tells someone else to “chill out,” they are telling them “do this good thing you aren’t currently doing.”  So far, so obvious.

But “chill” is unusual as terms of praise go.  It has a certain contextless quality; it doesn’t feel like something you can discard the moment some other value becomes more important.  Sure, you can have arguments about whether being chill is appropriate – if your house is on fire and someone tells you to chill out, you’ll probably say this isn’t the time for that.  But the very concept of “chilling out” contains the notion that we are frequently less chill than we should be – that there are lots of times when our minds are telling us our houses are metaphorically on fire, and we need to see them for the liars they are.

I’m not just talking about anxiety here, although it’s a clear-cut example of the dynamic.  The bigger point is that by treating “chill” as a generically good thing – by taking “they’re chill” as praise even if nothing else is said about “them” – we’re acknowledging that stepping back, taking a wider perspective, asking whether you maybe should chill out, is a good thing to do in virtually any situation.  Sure, sometimes you ask the question and the answer is “nope, my house is on fire.”  But you don’t get to circumvent the question entirely because the matter at hand is just so serious; that itself is un-chill.

Compare this to something like “kindness.”  Kindness is also a “generically good thing.”  But while we have the concept of kindness as generally good, we don’t have the concept of “making sure to ask whether you ought to be kind, even if it seems like you shouldn’t” as generically good.  (We could have a word like “chill” for this, but I don’t think we do.)  Chill isn’t just a state of relaxation, it’s the trait of being able to notice when relaxation is called for, even though we didn’t realize it at first.  Hence “chilling out”: if it were just a matter of having a high average level of relaxation, we wouldn’t have this special associated verb for becoming more relaxed, because there would just be relaxed people (who never have to “chill out”) and non-relaxed people.  (Back in the kindness comparison, there’s no analogous term like “kinding out.”)

This is all pretty abstract, so I should give you the concrete example that got me thinking about it, which was this @porpentine​ post:

the most important advice i give to people who write me about being in abusive activist cults / hot allostatic load situations is to dis-identify with their language and leave their universe …getting invested in that po-faced neo-1950′s pious language and the culture makes you a huge target…i don’t know if i made that clear enough in the original but yeah…then resist the urge to join some polarized faction that vaguely hates the thing that hurt you but for different stupid reasons, and make friends who are real people and know how to chill the fuck out lol

And like, I can imagine a version of this post that ends with some theoretical language about why it’s important to value a certain kind of “asking whether one should relax” in all contexts even highly fraught contexts because you see etc etc, and ends up sounding like it’s taking some “political” “position” … but porpentine just says “know how to chill the fuck out,” and we all know what that means.

Walter Russell Brain, 1st Baron Brain (23 October 1895 – 29 December 1966) was a British neurologist. He was principal author of the standard work of neurology, Brain’s Diseases of the Nervous System, and longtime editor of the homonymous neurological medical journal titled Brain.

(1) are you fucking kidding me

(2) @slatestarscratchpad you’ve probably heard about this 10000 times but if not, well

shellcollector:

I have an ongoing fascination with Amazon Dash buttons. They are little Internet Of Shit items you can stick to a wall or any other surface and push to order One Specific Product. For example, pressing this button:

[image: a Dash button for Hasbro™ Confetti sprinkles Play-Dough™]

will immediately order six tubs of Hasbro™ Confetti sprinkles multi-coloured Play-Dough™ to be delivered to your house at the next post.

They’re simultaneously

  • Deeply dystopian/absurdist, in that ‘Straight out of a satirical near-future scifi novel’ way we all love so much
  • I’m not going to lie here, really oddly or maybe not-so-oddly alluring to someone who is very disorganised and struggles to keep on top of daily life skills
  • Somehow still weirdly broken, even for that - eg the one for toilet paper can’t be used to order a normal amount of toilet paper; you have to order 48 rolls at a time. And I have never, ever been able to form a model in my head of a person who runs out of Hasbro™ Confetti sprinkles multi-coloured Play-Dough™ so often and so urgently that they need a button to push as soon as it’s getting low. I want to be clear here that you can’t order regular Play-Dough™ with the button; it is only the confetti sprinkles variety. Yet presumably someone must have bought one of these at one point. I want to find that person and ask them a lot of questions.

Other things you can order with an Amazon Dash button:

  • Mentos
  • Organic Raw Virgin Coconut Oil
  • “Eyebrow cleanser”, which I didn’t know was a thing until just now
  • Black shoe polish
  • Nerf darts
  • Tins of 36 Derwent watercolour pencils

identicaltomyself:

nostalgebraist:

identicaltomyself:

nostalgebraist:

evolution-is-just-a-theorem:

nostalgebraist:

Is there any interesting (i.e. with non-trivial properties) way of defining metrics or measures over sets of differential equations?  (Got onto thinking about this bc of the fine-tuning in cosmology thing, and wondering if there is any way to talk about a law [i.e. equation] being more or less fine-tuned, but now I’m just curious in general)

Hmmm. Convergence (in the sense that a Taylor series converges to the function it represents) is unmetrizable.

How is this handled in calculus of variations?

Also I’m not entirely sure what you’re thinking of as a criterion. Could you give an example of two very close differential equations?

No idea how it’s handled in calculus of variations.  Huh.

Here is an example inspired by the fine-tuning thing.  We have a differential equation with coefficients in front of the terms.  We talk about how “if the coefficients were a tiny bit different, the behavior of the solution would be very different.”  Now we can imagine keeping the coefficients fixed but varying the equation, by adding new small terms (i.e. slightly varying their coefficients starting at zero), or by changing an existing term (change an exponent in a continuous way, say).  Now, for some sort of change in the solution, you can talk – in a casual way at least – about how little you need to change the equation to get a “comparable” or “comparably large” change in the solution.

(We may be talking about sudden phase transition-like changes in the solution, so the changes themselves may not be continuous, but you might have a sense of distance for equations like “it is this much change of exponent away from some given transition”)

For any one equation, this just seems like a mildly amusing game, but could there be any regularities across many equations (or many definitions of “comparable”), so that there might be general facts?
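A minimal numerical sketch of this “vary a coefficient” game, using the simplest possible equation x′ = kx (whose solution is x(t) = x₀·e^{kt}); the specific values of k, x₀, and t below are arbitrary illustrative choices:

```python
import math

# For x'(t) = k * x(t), the solution is x(t) = x0 * exp(k * t).  An
# arbitrarily small change in k across k = 0 flips the long-time behavior
# from decay to growth -- a tiny move in "equation space" producing a huge
# move in "solution space".  (Values of k, x0, t chosen for illustration.)

def solution(k, x0, t):
    return x0 * math.exp(k * t)

x0, t_long = 1.0, 1000.0
decaying = solution(-0.01, x0, t_long)   # k barely below zero
growing  = solution(+0.01, x0, t_long)   # k barely above zero

# k changed by only 0.02, but at large t the solutions differ enormously:
assert decaying < 1e-4   # e^{-10} ~ 4.5e-5
assert growing > 1e4     # e^{+10} ~ 2.2e4
```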

This is almost covered by the theory of stochastic differential equations. That’s the theory of differential equations where you add a random function of time to one side. Usually the random function is “white noise”, technically known as a Wiener process, but you can pick any distribution on the space of functions that you like. The theory of these SDEs is well understood, highly applicable, and moderately beautiful.

Adding a random function of space is also a known thing. Usually people use a Markov random field, which is a generalization of a Markov process to multiple dimensions. That’s usually used to perturb a PDE, but you’re interested in perturbing an ODE. People have done that too. I remember seeing some beautiful visualization of the motion of an electron beam through a potential given by a Markov random field.

It sounds like you’re asking about adding a random function of both space and time. I’m not familiar with SDEs perturbed by a random function of both space and time, but I figure somebody must have thought about it. It seems like a reasonable generalization, now that you point it out.

Either you are misreading me, or I’m confused what this has to do with my question.  I know what SDEs are.  I’m not necessarily interested in ODEs rather than PDEs; in fact, the reverse (although either is fine).  And I don’t see how SDEs or their extensions give us a metric or measure on the space of differential equations, or correspond to the sort of thing I wondered about in the last paragraph of my second post.

Well, I’m confused why you’re confused. Darn this low-bandwidth no-context medium!

It seems to me that SDEs are exactly a probability distribution over differential equations, so there’s the measure you were looking for. And the theory of SDEs tries to answer questions like “what happens when we add small terms to this equation” or “is there a phase-transition-like change in the solution”, so it seems related to what you’re thinking about.

SDE solutions are stochastic processes, i.e. probability distributions over trajectories, but in SDE solutions almost none of the trajectories could have been produced by a non-stochastic differential equation, because the set of everywhere-differentiable functions has Wiener measure zero.  So I don’t see how this yields any useful measure over equations that aren’t stochastic to begin with.

… although now I realize you mention that “the stochastic term can be any distribution over the space of functions that you like.”  When I first read this, I was imagining that (like the usual Wiener process) our distribution would not depend on the solution, so (again like the usual Wiener process)  it’d be a “forcing term.”  But do you mean we can have things like dX_t = dW(x)_t where dW(x)_t is some stochastic term dependent in some complicated way on X?  I’ve never heard of this; to me “SDEs” always meant white noise, the Ito vs. Stratonovich thing, and not much else.
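The “Wiener measure zero” point two paragraphs up can be seen numerically via quadratic variation: over [0, T], a Brownian path’s squared increments sum to roughly T no matter how fine the mesh, while any differentiable function’s sum vanishes as the mesh shrinks. A sketch (step count and seed are arbitrary choices):

```python
import math
import random

# Quadratic variation separates Brownian paths from smooth ODE solutions:
# sum of squared increments over [0, T] tends to T for a Wiener process,
# but to 0 for any differentiable function.
# (Step count and RNG seed below are arbitrary illustrative choices.)

rng = random.Random(0)
T, n = 1.0, 100_000
dt = T / n

# Increments of a simulated Wiener process on [0, T]
dW = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
qv_brownian = sum(dw * dw for dw in dW)

# Same computation for the smooth function f(t) = sin(t)
f = [math.sin(i * dt) for i in range(n + 1)]
qv_smooth = sum((f[i + 1] - f[i]) ** 2 for i in range(n))

assert abs(qv_brownian - T) < 0.05   # concentrates near T
assert qv_smooth < 1e-4              # vanishes with the mesh
```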

(via identicaltomyself)

the-moti:

nostalgebraist:

evolution-is-just-a-theorem:

nostalgebraist:

Is there any interesting (i.e. with non-trivial properties) way of defining metrics or measures over sets of differential equations?  (Got onto thinking about this bc of the fine-tuning in cosmology thing, and wondering if there is any way to talk about a law [i.e. equation] being more or less fine-tuned, but now I’m just curious in general)

Hmmm. Convergence (in the sense that a Taylor series converges to the function it represents) is unmetrizable.

How is this handled in calculus of variations?

Also I’m not entirely sure what you’re thinking of as a criterion. Could you give an example of two very close differential equations?

No idea how it’s handled in calculus of variations.  Huh.

Here is an example inspired by the fine-tuning thing.  We have a differential equation with coefficients in front of the terms.  We talk about how “if the coefficients were a tiny bit different, the behavior of the solution would be very different.”  Now we can imagine keeping the coefficients fixed but varying the equation, by adding new small terms (i.e. slightly varying their coefficients starting at zero), or by changing an existing term (change an exponent in a continuous way, say).  Now, for some sort of change in the solution, you can talk – in a casual way at least – about how little you need to change the equation to get a “comparable” or “comparably large” change in the solution.

(We may be talking about sudden phase transition-like changes in the solution, so the changes themselves may not be continuous, but you might have a sense of distance for equations like “it is this much change of exponent away from some given transition”)

For any one equation, this just seems like a mildly amusing game, but could there be any regularities across many equations (or many definitions of “comparable”), so that there might be general facts?

So the main fundamental difficulty in defining this is that, in certain regimes of the parameter space, a “tiny” change in one term can completely dominate. For instance, if we have a linear equation and we add a very tiny nonlinear term, we get completely new behavior for large values, and can get new behavior for small values over long time periods.

Let’s first examine what happens when this difficulty is removed. Suppose that our differential equation describes the evolution of a point on a compact manifold. Then we can just view it as a vector field on that manifold. If we put a Riemannian metric on that manifold, then we have a natural metric on the space of vector fields given by the L^p norm of the difference. To prove a theorem of the form “close vector fields have similar behavior for short times”, it looks like we also need some control on the derivatives of at least one of the two vector fields.
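The L^p idea above can be sketched on the simplest compact manifold, the circle, where a vector field is just a function v(θ) and the L² distance is the usual integral norm of the difference. The two fields below are arbitrary illustrative choices with a closed-form answer to check against:

```python
import math

# Vector fields on S^1 as functions of theta; L^2 distance under the
# standard metric is the integral norm of the difference over [0, 2*pi].
# (The fields v, w and perturbation size eps are illustrative choices.)

def l2_distance(v, w, n=100_000):
    """Riemann-sum approximation of the L^2 norm of v - w on the circle."""
    dtheta = 2 * math.pi / n
    total = sum((v(i * dtheta) - w(i * dtheta)) ** 2 for i in range(n))
    return math.sqrt(total * dtheta)

eps = 0.1
v = lambda th: math.sin(th)
w = lambda th: math.sin(th) + eps * math.cos(th)

# Closed form: integral of (eps*cos)^2 over [0, 2*pi] is eps^2 * pi,
# so the distance should be eps * sqrt(pi).
assert abs(l2_distance(v, w) - eps * math.sqrt(math.pi)) < 1e-6
```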

What if the vector fields are on two different spaces? If we have metrics on both spaces, and the spaces are diffeomorphic, we can take the minimum, over all diffeomorphisms, of some measure of the distortion of the metric plus some measure of the distortion of the vector field.

However, I don’t see a reasonable measure of difference for vector fields on metricless smooth manifolds or vector fields on different spaces. 

Passing now to the more standard O.D.E.s in some number of real variables, we can view them as vector fields on R^n, and then the obvious thing to do is integrate the difference against some rapidly decaying function. Then the difference would only be finite for vector fields that don’t grow too rapidly. Ideally, this function represents some kind of probability of our physical system being in different states (which must somehow be independent of the laws of physics???). 

Thanks, I really like this.  One thing that’s interesting about the vector field perspective is that in cases where I think of “a small change in the equation having a big effect on the solutions,” the corresponding change in the vector field isn’t small.  I realize that’s something of a tautology – if the solutions change a lot, then by necessity their tangent vectors change a lot – but it means vector fields are a “naming scheme” for equations that varies continuously with the sets of solutions, unlike the “naming scheme” of writing down equations in symbols.

(Simple example: when I see x’ = k*x, I think of this as having either “growing solutions” or “decaying solutions,” and switching at k=0, which make crossing zero sound like a small change that makes a big difference.  But the vector field for any nonzero k gets arbitrarily big for large enough x, so the differences as we approach zero will start to blow up in the L^p norm.

Another example: in singular perturbation problems like ϵx’’ + x’ + F(x) = 0, where the highest derivative term is multiplied by an ϵ, if you write it as a vector field (x’, x’’) on the phase space (x, x’), and retain both dimensions of the phase space even when ϵ=0 [where the problem is first order and this is normally superfluous], the expression for x’’ completely changes as we move from ϵ=0 to ϵ>0.  Again, this is as it has to be, since we know the solutions change and that means the vector field has to, but it’s nice to have the “similar equations, different solutions!” feeling just … dissolve.)
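The “integrate against some rapidly decaying function” suggestion from a few paragraphs up tames exactly the blow-up in the x′ = kx example: as vector fields on R, kx and k′x are infinitely far apart in plain L², but under a standard Gaussian weight their distance is exactly |k − k′|, which varies continuously as k crosses zero. A sketch (the Gaussian weight and integration grid are illustrative choices):

```python
import math

# Weighted L^2 distance between the 1-D vector fields k1*x and k2*x,
# integrating against a standard Gaussian weight.  Since E[x^2] = 1 under
# that weight, the distance is exactly |k1 - k2|: finite and continuous
# in k, unlike the unweighted L^p norm.  (Weight and grid are illustrative.)

def weighted_l2_distance(k1, k2, lo=-10.0, hi=10.0, n=200_000):
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx                       # midpoint rule
        weight = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        total += ((k1 - k2) * x) ** 2 * weight * dx
    return math.sqrt(total)

# Crossing k = 0 is now a small step in "equation space":
assert abs(weighted_l2_distance(0.01, -0.01) - 0.02) < 1e-6
```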

(via the-moti)