There's no free lunch. It's one of the most prominent principles in computer science, and it captures the trade-offs one must make when designing any computational system or algorithm.
Somewhere in the translation of this concept from a formal theorem to applied scenarios, the assumption of information-theoretic bounds gets lost. No free lunch only applies when we are actually forced to make a trade-off. In many cases, by examining reformulations of a problem, we can find a new solution that is superior to former solutions in every relevant way.
This is especially true when analyzing large, complex systems. Such systems are typically built from many unrelated components, each with its own underlying assumptions. It's highly likely that existing systems of this nature can be improved across the board without sacrificing anything relevant.
Consider the simple scenario where bubble sort is the fastest sorting algorithm we know of. Then some genius kid from Cornell comes along and proposes a new sorting algorithm, quicksort, which is leaps and bounds faster than bubble sort. In this case there is only one relevant criterion: speed. So what's the catch? There's no free lunch, right? Wrong.
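The gap is easy to see empirically. Here's a minimal sketch comparing the two (the textbook list-comprehension quicksort and the input size of 2,000 elements are illustrative choices, not anything prescribed above):

```python
import random
import time

def bubble_sort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                swapped = True
        if not swapped:  # already sorted; stop early
            break
    return xs

def quicksort(xs):
    """O(n log n) expected: partition around a pivot, recurse on each side."""
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left) + mid + quicksort(right)

data = [random.randint(0, 10**6) for _ in range(2000)]

t0 = time.perf_counter()
bubble_result = bubble_sort(data)
t_bubble = time.perf_counter() - t0

t0 = time.perf_counter()
quick_result = quicksort(data)
t_quick = time.perf_counter() - t0

print(f"bubble sort: {t_bubble:.4f}s, quicksort: {t_quick:.4f}s")
```

Both produce identical output; on inputs of this size the quadratic inner loop of bubble sort already loses by a wide margin, and the gap only grows with input length.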