From the birth of supercomputing, the metric of importance has been performance measured in operations per second, or more specifically, floating-point operations per second (FLOPS). From 1992 to 2007, for example, the performance of supercomputers running embarrassingly parallel codes, such as n-body simulations, improved nearly 10,000-fold. In that same time frame, the performance achieved per watt of power supplied to the machine improved only 300-fold. While the latter is no small feat, it pales in comparison to the former and implies that the power supplied to supercomputers increased more than 30-fold over the same period. Based on this retrospective, we identify the trend of performance at any cost, where space and power were secondary concerns in the face of pure speed.
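The implied growth in supplied power follows directly from dividing the two improvement factors quoted above; as a quick sanity check of the arithmetic:

```latex
\text{power growth} \;=\; \frac{\text{performance growth}}{\text{efficiency growth}}
\;=\; \frac{10{,}000\times}{300\times} \;\approx\; 33\times
```

That is, if performance rose roughly 10,000-fold while performance per watt rose only 300-fold, the power drawn by the machines must have risen by the ratio of the two, a bit more than 30-fold.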