Is there any interesting (i.e. with non-trivial properties) way of defining metrics or measures over sets of differential equations? (I got onto thinking about this because of the fine-tuning-in-cosmology thing, and wondering whether there is any way to talk about a law [i.e. an equation] being more or less fine-tuned, but now I'm just curious in general.)
I think I failed to communicate what I was going for in this post. Some rambling, which may or may not help clear it up:
Another way of putting my question is, “what would a ‘space of differential equations’ be like? Is there a way this concept could be interesting / non-trivial?”
It seems like the big question here is “what do we mean by ‘differential equation’?” I don’t mean ODE vs. PDE or something; let’s just say they’re ODEs. So, is “the equation” the set of solutions (or {initial data: solution} pairs)? Is it the actual string of symbols we write on paper? Something in between?
The “trivial” option is defining the equation by its solutions, in which case we are just left considering a space of tuples like (v, f), where v is the initial data (a vector) and f is the solution (a function). Then a “space of ODEs” would just be some banal analysis doodad.
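As a toy illustration of this "trivial" option (the specific equation and names are my own invention, not anything canonical): here the ODE x′ = −x is represented purely by its map from initial data to solutions, and any algebraically rearranged way of writing the same equation induces the identical map.

```python
import math

# The "trivial" view: an ODE is nothing but a map from initial data v
# to its solution f.  For x' = -x with x(0) = v, the solution is
# f(t) = v * exp(-t).
def solution_map(v):
    return lambda t: v * math.exp(-t)

f = solution_map(3.0)
print(f(0.0))  # 3.0

# Algebraically different ways of writing the same equation
# (x' = -x, x' + x = 0, 2x' = -2x) all induce this same map,
# so a "space of ODEs" built this way collapses to a space of
# solution maps and forgets everything about the written forms.
```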
But that doodad wouldn’t look anything like our colloquial notion of “ODE space,” where say we categorize equations into linear or nonlinear, look at eigenvalues of linear equations, etc. The hope in making a “space of ODEs” would be that some of the structure in this colloquial concept could be formalized, and abstracted away from the symbol strings we write down on paper. (Thus we might, e.g., be able to construct some formal version of the colloquial notion “add a new term to the equation,” but without actually counting terms in a written equation, where you can often change the number of terms by algebraic manipulations.)
When I asked about measures and metrics in the OP, the idea was that these would be structures on a hypothetical “space of differential equations.” But the more fundamental question is, “can we make such a space interesting, in the sense of the previous paragraph?”
Here is an example of the kind of process I’m imagining. Imagine we’ve never heard of linear algebra, but we’ve seen systems of linear equations, and we start thinking about what a “space of systems of linear equations” would look like.
So, we are considering problems Ax = b. Let’s say A is square and invertible, so we’re only looking at problems with unique solutions. In the hypothetical, we don’t know linear algebra and are mostly used to thinking about these problems as wholes – i.e. we may have a concept of a matrix “A,” but we think of it as always paired with a vector “b.”
When we ask how to parameterize our space, we might first think about the entries of A and b, but soon we will be clever and realize that all our equations could equivalently be written in “solution form” x = A^{-1} b (where our concept of “taking a matrix inverse” is “solve a problem”). In other words, perhaps each problem just is its solution: two problems Ax = b with the same solution are really one problem, and our bad notation just makes it look like two.
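To make the “two problems with the same solution are really one problem” point concrete, here is a minimal sketch (the particular matrices are my own illustration):

```python
import numpy as np

# Two visibly different problems A x = b ...
A1 = np.array([[2.0, 0.0], [0.0, 2.0]])
b1 = np.array([2.0, 4.0])

A2 = np.array([[1.0, 1.0], [0.0, 3.0]])
b2 = np.array([3.0, 6.0])

# ... with the same solution x = (1, 2).
x1 = np.linalg.solve(A1, b1)
x2 = np.linalg.solve(A2, b2)

# In "solution form" the two problems are indistinguishable:
print(np.allclose(x1, x2))  # True
```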
This is the “trivial” option. It gives us a “space of problems” that is straightforward (just the space of possible solutions), but it throws away all of our ideas about what problems are.
On the other hand, we could study the structure of the problem, and then we would realize that there is a lot going on in A if we look at it apart from any specific b (or x). Then we’d develop linear algebra, which can (in various ways) formalize our intuitions about what makes problems similar or different, and can reveal a lot of structure beyond the mere numbers we write down on the page.
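Continuing the same toy illustration (again, matrices of my own invention): the two problems above had the same solution, but the operators A themselves differ in ways linear algebra makes visible, e.g. eigenvalues and conditioning.

```python
import numpy as np

# The same two operators as before, now considered apart from any b.
A1 = np.array([[2.0, 0.0], [0.0, 2.0]])
A2 = np.array([[1.0, 1.0], [0.0, 3.0]])

# Structure the solution alone throws away:
print(np.sort(np.linalg.eigvals(A1)))  # [2. 2.]
print(np.sort(np.linalg.eigvals(A2)))  # [1. 3.]
print(np.linalg.cond(A1))              # 1.0 (perfectly conditioned)
print(np.linalg.cond(A2))              # > 1 (more sensitive to perturbation)
```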
So I guess the analogous thing would be spaces of differential operators?