On 11 April 2010 23:37, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> Yes, compression is useful. However, the cost of pushing the algorithm
> close to the limit does incur costs as well. For many packages, getting
> 99% of the max in 1/2 the time is a worthy tradeoff. This is similar to
> the decision to use -O2 as the default GCC compiler optimization rather
> than -O3.
-O2 vs -O3 is a rather different case, as it's not a straightforward time-space tradeoff. The last time I checked, -O3 still had significant bugs, so it was only worth using for time-critical code, where the extra testing needed to show that none of those bugs had been triggered was justified. Even assuming that has since been fixed, -O3 often produces code that takes fewer clock cycles to execute but is bigger than -O2's, so it is still only worth applying to critical regions.

Indeed, when Apple switched to Intel, they went one further: their system profiling showed that it was better to use -O2 only for the kernel and system libraries, and -Os (optimize for space) for everything else, because for application code cache impact mattered more than raw speed.

I can find little research on this for GNU/Linux systems, beyond one supporting instance where someone compared -O2 and -Os for the kernel and found -O2 to be a bit better.

-- 
http://rrt.sc3d.org