On Tue, Feb 06, 2007 at 04:50:06PM -0500, Francois Marier wrote:
> Sounds like an interesting idea. The only concern I have is how much
> overhead does that add to the compilation time? Since the whole
> point of using ccache is to speed up compilation, we have to be
> careful about adding extra overhead.
gzip, which it uses, is pretty fast nowadays, at least compared to
compiling large functions/files. Years ago I benchmarked gcc and
figured out that the compilation time of a function is roughly
O(n^2 log n), where n is the number of local variables. I assume it
has become slower since then with the addition of new optimizations,
especially interprocedural optimization. In comparison, gzipping is a
linear-time (O(n)) process.

There are some benchmarks in http://www.gustaebel.de/lars/ccache/ :

------------------------------------------------------------
plain gcc run               729.55 sec
first normal ccache run     747.12 sec
second normal ccache run     92.11 sec
cache size                 9235 k
first patched ccache run    751.92 sec
second patched ccache run    92.68 sec
cache size                 5491 k
------------------------------------------------------------

So the gzipping has a very slight performance hit. However, I would
assume this is generally more than offset by more data fitting in the
cache, increasing the frequency of cache hits. Here the reduction in
cache size is not as dramatic as in my benchmarks; I assume that is
because I build with debug symbols, which apparently compress well
(and also have a habit of filling up the cache very fast, so
compression would probably matter more for such builds).

> Does the patch enable compression by default or does it simply add a
> flag (or environment variable) to allow users to enable it?

There's an environment variable, CCACHE_NOCOMPRESS, to turn the
compression off. Of course, if desired, compression could be changed
to be off by default and enabled by (say) CCACHE_COMPRESS. There's
also a downside: CCACHE_HARDLINK does not work for compressed cache
files.

Sami