Sunday, June 17, 2007

Computer Science Pet Peeves

There's no free lunch. It's one of the most prominent principles of computer science, and it captures the trade-offs one must make when designing any computational system or algorithm.

Somewhere in the translation of this concept from a formal theorem to more applied settings, the assumption of information-theoretic bounds gets lost. No free lunch only applies when we are actually forced to make a trade-off. In many cases, by examining reformulations of a problem, we can find a new solution that is superior to previous solutions in every relevant way.

This is especially true when analyzing large, complex systems. Such systems are typically built from many unrelated components, each with its own underlying assumptions. It's highly likely that existing systems of this nature can be improved across the board, without sacrificing anything in any relevant respect.

Consider the simple scenario where bubble sort is the fastest sorting algorithm we are aware of. Then some genius kid from Cornell comes along and proposes a new sorting algorithm, quick sort, which is leaps and bounds faster than bubble sort. In this case, there is only one relevant criterion: speed. So what's the catch? There's no free lunch, right? Wrong.
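For concreteness, here's a rough sketch of the kind of comparison I have in mind. It's not a serious benchmark; the input size and the simple three-way partition are just illustrative.

import random
import time

def bubble_sort(a):
    # Repeatedly swap adjacent out-of-order elements: O(n^2) comparisons.
    a = list(a)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    # Recursively partition around a pivot: O(n log n) comparisons on average.
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

data = [random.random() for _ in range(3000)]
for sort in (bubble_sort, quick_sort):
    start = time.time()
    sort(data)
    print(sort.__name__, time.time() - start)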

5 comments:

Navreet said...

bubble sort > quick sort when n is very very small.

What about parallel algorithms? I would much prefer a parallel algorithm over a serial one.

The other "cheap lunch" is buying new hardware. However, this will get smaller and smaller as gains in CPU speeds slow and the # of cores grow. Whereas ,the "cheap lunch" will still be enabled for very parallelized algorithms.

Now if only you could develop a very parallelizable MIP solver...

Yisong said...

Well, you can go ahead and add those as relevant criteria if you want. I was using the simplest scenario, where the only criterion relevant to us is computational complexity.

John said...

On small data sets, the actual running time of an algorithm can be dominated by the constant factors. There is a trade-off in the implementation of quick sort: how hard it tries to avoid the degenerate n^2 case. If it tries to be very clever, reverting to heap sort and picking the pivot in a smart way, then bubble sort will win out on very small data sets. There is no free lunch.
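To make that trade-off concrete, here is a rough sketch of a quick sort that spends extra effort on its pivot and hands small subproblems to a simple insertion sort. The cutoff of 16 and the median-of-three pivot rule are arbitrary, illustrative choices rather than anything tuned.

def quick_sort_clever(a, cutoff=16):
    # Below an (arbitrary) cutoff, the overhead of clever pivoting outweighs
    # its benefit, so fall back to a simple insertion sort.
    a = list(a)
    if len(a) <= cutoff:
        for i in range(1, len(a)):
            x, j = a[i], i - 1
            while j >= 0 and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x
        return a
    # Median-of-three pivot: a few extra comparisons per call, but it avoids
    # the degenerate n^2 behavior on already-sorted inputs.
    pivot = sorted([a[0], a[len(a) // 2], a[-1]])[1]
    return (quick_sort_clever([x for x in a if x < pivot], cutoff)
            + [x for x in a if x == pivot]
            + quick_sort_clever([x for x in a if x > pivot], cutoff))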

Yisong said...

I guess the example I gave was a pretty poor one. I wasn't anticipating this amount of scrutiny (which is always a big mistake). You guys have good points. As I see it, my example was a poor choice for two main reasons.

The first reason is that I didn't specify any use cases, so the example was ill-posed. I implicitly assumed that we are considering arbitrary datasets. To make this more concrete, suppose we have a uniform distribution over all datasets of size at most N. Let C be the size threshold such that quick sort is faster than bubble sort for datasets larger than C. As N grows to infinity, the probability of sampling a dataset with size smaller than C goes to 0. In this setup, quick sort will be better than bubble sort.
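To put a number on it, simplify further and treat just the size as uniform over 1..N (the actual distribution over datasets only makes small sizes rarer): the chance of drawing a dataset smaller than C is (C - 1)/N, which vanishes as N grows. The value of C below is made up for illustration.

C = 1000  # hypothetical crossover size below which bubble sort wins
for N in (10**4, 10**6, 10**8):
    # Probability that a size drawn uniformly from 1..N falls below C.
    print(N, (C - 1) / N)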

Of course, in real life, one doesn't really expect that kind of distribution. This brings me to my second reason. My point against the no free lunch principle is directed more towards models and assumptions than towards pure computational problems (such as sorting). When people design models, they also make simplifying assumptions -- generally for computational convenience. It's important to recognize these assumptions for what they are, and not go rampaging about the no free lunch principle when someone uses a different set of assumptions that fits the real world better (assuming both sets incur the same computational cost).

A possible third reason why this was a bad example is that we understand the problem of sorting too well. I would hazard a guess that this issue I'm describing comes up far more often when we deal with problem spaces that are less understood.

And now that I've made a big deal about it all, let me conclude by saying that this is not really a big problem. Virtually all the computer science people I regularly interact with understand this distinction. I also expect that most trade-offs made in practice are necessary ones.

Steve said...

Man, your blog destroys mine.