Everyone has encountered the type: that nerdy and perhaps arrogant guy down the hall who rants and raves incessantly about the elegance, purity and sheer beauty of mathematics (think Hardy). While my position was never quite that extreme, I did often find myself being quite taken with just how wonderfully mathematics fits together.

As noted earlier, I no longer consider mathematics to have intrinsic beauty. However, I cannot help but admire certain results in mathematics for their sheer elegance (as perceived by me) and their profound impact (a somewhat more objective measure). And so, I thought it would be fun to list some of my favorite results. In some sense, one can think of this as the math analogue to Paul Graham's essay on heroes (although I've also written a more conventional version).

My basic test for inclusion is to ask myself, "would I use this as an example of how math can be awesome in every way possible?" The answer need not be a definitive yes, although it happens to be for all five items listed below. OK, that's enough preamble; let's get on with the show!

Central limit theorem -- I would be hard pressed to name a result from formal mathematics that has impacted society more profoundly than the central limit theorem. In a nutshell, the theorem states that the sum (or average) of enough independent samples looks like a bell curve, whatever the underlying distribution. As a result, we can use the same type of confidence intervals when making measurements in a wide range of practical settings. The central limit theorem is the foundational principle underpinning basically all of modern hypothesis testing. Science stands proudly on the shoulders of this giant.
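The "everything looks like a bell curve" claim is easy to see empirically. Here is a minimal sketch (the sample sizes and uniform distribution are arbitrary choices for illustration): averages of uniform draws concentrate around 1/2 with the standard deviation the theorem predicts.

```python
import random
import statistics

random.seed(0)  # deterministic for reproducibility

n = 50          # independent samples per trial
trials = 2000   # number of sample means to collect

# Collect many sample means of Uniform(0, 1) draws.
means = [statistics.mean(random.random() for _ in range(n))
         for _ in range(trials)]

# Uniform(0, 1) has mean 1/2 and variance 1/12, so the CLT predicts
# the sample mean is approximately Normal(1/2, 1/(12 * n)).
mu = statistics.mean(means)
sigma = statistics.stdev(means)

assert abs(mu - 0.5) < 0.01                      # centered near 1/2
assert abs(sigma - (1 / (12 * n)) ** 0.5) < 0.01  # spread ~ 0.041
```

The same check works with essentially any distribution of finite variance swapped in for `random.random`, which is the whole point.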

Stokes' theorem -- I first encountered Stokes' theorem in a tensor analysis course during college. Having taken calculus for years and years, I found it mind boggling that so many cherished results I'd learned in high school were (sometimes trivial) special cases of Stokes' theorem. Green's theorem? Yep, it's a corollary. Gauss' divergence theorem? Also a special case. The fundamental theorem of calculus? Trivial. It's not every day that you can distill about a third of basic and multivariate calculus into a single result. The day I found Stokes' theorem, now that was quite a day.
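For the curious, the unifying statement (a standard formulation in the language of differential forms, not spelled out in the post) is one line, with the familiar theorems falling out as choices of the region and the form:

```latex
% Generalized Stokes' theorem: the integral of d\omega over a region
% equals the integral of \omega over the region's boundary.
\int_{\Omega} d\omega = \int_{\partial\Omega} \omega

% Fundamental theorem of calculus (\Omega = [a,b], \omega = f):
\int_a^b f'(x)\,dx = f(b) - f(a)

% Green's theorem (\Omega \subset \mathbb{R}^2, \omega = P\,dx + Q\,dy):
\oint_{\partial\Omega} P\,dx + Q\,dy
  = \iint_{\Omega} \left( \frac{\partial Q}{\partial x}
                        - \frac{\partial P}{\partial y} \right) dx\,dy

% Divergence theorem (\omega the flux 2-form of \mathbf{F}):
\iint_{\partial\Omega} \mathbf{F}\cdot d\mathbf{S}
  = \iiint_{\Omega} (\nabla\cdot\mathbf{F})\,dV
```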

Lagrange multipliers -- the method of Lagrange multipliers is perhaps the best-known technique for solving and analyzing constrained optimization problems. This has immense practical importance since basically every practical problem, if appropriately formalized, can be stated as a constrained optimization problem. For example, airline companies use optimization software to help manage their daily activities, which might otherwise turn into a logistical nightmare. While various special (and extremely useful) cases such as integer and non-linear programming do not directly use Lagrange multipliers, the influence is clearly present via analysis techniques such as Fenchel's duality theorem and the KKT optimality conditions.
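A minimal sketch of the method on a toy problem of my own choosing (the post doesn't give one): maximize f(x, y) = xy subject to x + y = 1. Setting grad f = λ grad g gives y = λ and x = λ, so the constraint forces x = y = 1/2.

```python
# Toy constrained optimization: maximize f(x, y) = x * y
# subject to g(x, y) = x + y - 1 = 0.

def f(x, y):
    return x * y

# Lagrange conditions: grad f = lam * grad g, i.e. (y, x) = (lam, lam),
# so x = y; substituting into the constraint gives x = y = 1/2.
x_star = y_star = 0.5

# Sanity check: sweep points along the constraint line x + y = 1 and
# confirm no feasible point beats the Lagrange candidate.
best = max(f(t, 1 - t) for t in (i / 1000 for i in range(1001)))
assert f(x_star, y_star) >= best  # the constrained maximum is 1/4
```

The same stationarity-plus-constraint recipe scales to many variables and constraints, which is where the method actually earns its keep.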

Bayes' Theorem -- as an aspiring rationalist, I find Bayes' theorem to be one of the most useful tools at my disposal. Bayes' theorem basically shows how we should update our beliefs about the world given observational data and a proper formulation of our prior beliefs. As one might expect, it is also a fundamental design principle in modern machine learning and artificial intelligence, since it provides a formal framework for dealing with belief and rationality (which is very important when programming such concepts on a computer). Just because we humans often have trouble formalizing our intuitions and gut feelings doesn't mean it cannot be done.
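The belief-updating recipe is short enough to show in full. Here's a sketch with made-up numbers (a standard medical-testing illustration, not from the post): a fairly accurate test for a rare condition still leaves the posterior surprisingly low.

```python
# Assumed numbers for illustration only: 1% prevalence, a test with
# 99% sensitivity and a 5% false-positive rate.
prior = 0.01          # P(disease)
sensitivity = 0.99    # P(positive | disease)
false_pos = 0.05      # P(positive | no disease)

# Law of total probability: P(positive).
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: P(disease | positive).
posterior = sensitivity * prior / p_positive

assert round(posterior, 3) == 0.167  # ~17%: evidence updates, not decides
```

Despite the "99% accurate" test, a positive result only moves the belief from 1% to about 17%, because true positives are swamped by false positives from the healthy majority.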

Mathematical induction -- induction is arguably the most useful proof technique for analysis problems in computer science and other disciplines which deal primarily with discrete mathematics. The basic idea is to identify some invariant property that should be true, decompose the problem into smaller pieces, and show that having the invariant hold in the subproblems implies the same in the original. This iterative decomposition is effective for analyzing very complex problems starting from fairly simple base cases. Because the proof itself is typically constructive, it often comes hand-in-hand with an algorithmic solution. This makes induction very attractive since it can provide not only intuitive proofs of correctness, but also guidance for algorithm design (see dynamic programming and divide and conquer).
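To make the proof-becomes-algorithm point concrete, here's a sketch using Kadane's maximum-subarray algorithm (my choice of example; the post names no specific algorithm). The inductive invariant is written directly into the loop.

```python
def max_subarray(xs):
    """Return the maximum sum over all nonempty contiguous subarrays."""
    # Base case: for a one-element array, the best subarray ending
    # at index 0 (and the best overall) is that single element.
    best_ending_here = best = xs[0]
    for x in xs[1:]:
        # Inductive step: assuming best_ending_here is correct for the
        # prefix ending at i-1, the best subarray ending at i either
        # extends that one or restarts fresh at x.
        best_ending_here = max(x, best_ending_here + x)
        best = max(best, best_ending_here)
    return best

assert max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6  # 4 - 1 + 2 + 1
```

The induction proof and the O(n) algorithm are literally the same object here: each loop iteration is one application of the inductive step.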

Most of my honorable mentions are very biased towards my field of study (machine learning) and the broader area of studying principled models with interesting computational AND representational aspects. Honorable mentions: Nash equilibria, Taylor's theorem, Gödel's incompleteness theorems, Arrow's impossibility theorem, de Finetti's theorem, Hammersley-Clifford theorem, Cook-Levin theorem, max-flow min-cut theorem, Euler's theorem, boosting and the strength of weak learnability.

## 2 comments:

It's been a while since I've been immersed in theory, but I really enjoyed learning about the Fourier series. It led into my affinity for finite element analysis and different ways to represent and approximate solutions.

Yeah, I thought about including something about Fourier series, but a guy's gotta have standards for his own list =)
