
Anonymous asked: So, in regards to big ideas: I do math, not physics, and maybe it's slightly different. It certainly seems true that big ideas very rarely come from trying to create "big ideas" rather than from technical progress on problems, but we've had a number of ideas that are very, very powerful hammers: short, relatively simple, and with a major effect. There aren't that many of these, either. Some examples are Forcing, Diagonalization, Entropy, the Fourier Transform, and Cohomology.

su3su2u1-deactivated20160226:

These are, at least in hindsight, very clearly “big ideas” which revolutionized the field and made their way into virtually everybody’s work. It might take a very long time to build up the body of knowledge to abstract the critical structure from, and then to crystallize the insight into an easily digestible form. But they really do come down to a few simple ideas; the rest is applied less broadly or forgotten. Perelman was very technically strong, but he also came up with a simple, brilliant monovariant.

Well, let’s talk about the Fourier transform and entropy, where I know some history (because physics).  This was definitely not an attempt to sit down and create a big idea.  

Daniel Bernoulli, Clairaut, Lagrange, and Euler had used versions of discrete Fourier transforms to solve vibrating-string, orbital, and heat-equation problems as early as the 1740s. Fun historical note: Gauss was actually using the fast Fourier transform algorithm in the early 1800s to interpolate along orbits. Lagrange was probably the first to use one of these early trig series for what we’d now call analysis, studying the roots of cubics.
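(For concreteness: the discrete transform those early workers were computing by hand is just a finite sum of complex exponentials. A minimal Python sketch, purely illustrative; the function name and the direct O(N²) loop are mine, not Gauss’s actual interpolation scheme, which reorganizes this same sum recursively:)

```python
import cmath

def dft(samples):
    """Naive discrete Fourier transform: each output bin k is a sum of the
    input weighted by complex exponentials e^(-2*pi*i*k*t/N)."""
    n = len(samples)
    return [
        sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

# A constant signal concentrates entirely in the zero-frequency bin:
print([round(abs(c), 6) for c in dft([1, 1, 1, 1])])  # → [4.0, 0.0, 0.0, 0.0]
```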

But it was really Fourier’s 1809 paper on the heat equation that set out the idea that it was a general technique, and that produced a general solution to the heat equation. Fourier’s work would be seen as “physics math” at best today: in the early 1800s there weren’t yet good notions of function or integral.

It was really first Dirichlet and then Riemann who put the Fourier transform on firm ground, by around 1850.

So what we have is a bunch of big names, mostly creating the idea independently to solve little problems here and there (Gauss, Euler, Lagrange, etc., all doing technical work), all through the period from the 1740s to the early 1800s. Then you have Fourier recognizing that it was general, but not because he wanted to create a big, general problem-solving tool; rather, he wanted to solve the heat equation for an arbitrary source (doing technical work).

Then you have the giants who helped nail down the modern notion of function (people like Dirichlet and Riemann) as part of the technical work of putting it on rigorous footing. All together, that’s 100 years from use in little technical problems to a somewhat clean formalization. Only then could it become a workhorse tool in math.

I can tell a similar story about entropy. The non-statistical entropy concept was probably first put forward by one of the Carnots (Lazare or Sadi) sometime in the early 1800s, but it was Clausius who gave it the name entropy. Clausius’s work in kinetic theory and diffusion (built off earlier work in kinetic theory), done in the early 1850s, was dancing around the ideas that would later become the statistical concept of entropy.

In the early 1860s, Maxwell developed his probability distribution for gas velocities (now often called the Maxwell-Boltzmann distribution). Boltzmann generalized it by the end of the 1860s/early 1870s, and developed his H-theorem and the statistical grounding of entropy.

And then it took Gibbs running with it in the 1870s (along with Boltzmann and Maxwell, who continued working) to get to the more modern canonical-ensemble-type ideas.

So again we see many great names (Carnot and Carnot, Clausius, Boltzmann, Maxwell, Gibbs, etc.) groping toward a technical problem (understanding gases and heat). It took almost 50 years for entropy to go from the “rough idea” put forward by Carnot to a “general concept with a name.” Ten more years before Maxwell started to grasp the probabilistic nature of what he was looking at, and another 20+ to get to the canonical ensemble.

And it probably wasn’t until the 1940s that Shannon applied it to his cryptography work during the war (which would eventually become his information theory).
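(The statistical version Shannon ran with fits in a few lines: the entropy of a discrete distribution is H = −Σ p·log p, with base 2 giving bits. A minimal sketch; the function name is mine:)

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)) over a discrete distribution; 0*log(0) taken as 0."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit of uncertainty; a certain outcome, zero:
print(shannon_entropy([0.5, 0.5]))  # → 1.0
print(shannon_entropy([1.0]))       # → 0.0
```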

I’m not saying these aren’t big ideas. I’m saying big ideas aren’t bolts of epiphany; they are the product of a lot of slow, grinding work, building on other work, aimed at solving specific technical problems. This stuff then generalizes and turns out to be a really big, interesting idea, but that is not at all apparent to the earliest people creating it, and it takes a whole generation to get it to the “big idea finish line.”