Hi,

On Fri, 7 Dec 2007, Daniel Berlin wrote:

> On 12/7/07, Giovanni Bajo <[EMAIL PROTECTED]> wrote:
> > On Fri, 2007-12-07 at 14:14 -0800, Jakub Narebski wrote:
> >
> > > > >> Is SHA a significant portion of the compute during these 
> > > > >> repacks? I should run oprofile...
> > > > > SHA1 is almost totally insignificant on x86. It hardly shows up. 
> > > > > But we have a good optimized version there. zlib tends to be a 
> > > > > lot more noticeable (especially the *uncompression*: it may be 
> > > > > faster than compression, but it's done _so_ much more that it 
> > > > > totally dominates).
> > > >
> > > > Have you considered alternatives, like: 
> > > > http://www.oberhumer.com/opensource/ucl/
> > >
> > > <quote>
> > >   As compared to LZO, the UCL algorithms achieve a better 
> > >   compression ratio but *decompression* is a little bit slower. See 
> > >   below for some rough timings.
> > > </quote>
> > >
> > > It is uncompression speed that is more important, because it is used 
> > > much more often.
> >
> > I know, but the point is not what is the fastest, but whether it's 
> > fast enough to drop off the profiles. I think UCL is fast enough, 
> > since it's still several times faster than zlib. Anyway, LZO is GPL 
> > too, so why not consider it as well. They are good libraries.
> 
> 
> At worst, you could also use fastlz (www.fastlz.org), which is faster 
> than all of these by a factor of 4 (and compression wise, is actually 
> sometimes better, sometimes worse, than LZO).
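
Linus's point above -- that decompression is cheaper per call than 
compression, yet dominates profiles because git performs it so much more 
often -- is easy to sanity-check. A minimal timing sketch using Python's 
zlib bindings (the corpus is synthetic and the numbers are illustrative 
only, not a claim about git's actual workload):

```python
import time
import zlib

# Synthetic, repetitive corpus -- roughly what source text looks like
# to a compressor (illustrative only).
data = b"int foo(int x) { return x + 42; }\n" * 4096

compressed = zlib.compress(data, 1)  # level 1, i.e. zlib's fastest mode

def bench(fn, reps=200):
    """Return wall-clock seconds for `reps` calls of fn()."""
    start = time.perf_counter()
    for _ in range(reps):
        fn()
    return time.perf_counter() - start

t_comp = bench(lambda: zlib.compress(data, 1))
t_decomp = bench(lambda: zlib.decompress(compressed))

print(f"compress:   {t_comp:.3f}s")
print(f"decompress: {t_decomp:.3f}s")
# Decompression is typically the faster of the two per call; the point
# is that git runs it on every object read, so it dominates anyway.
```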

fastLZ is awfully short on details when it comes to a comparison of the 
resulting file sizes.

The only result I saw was that for the (single) example they chose, 
compressed size was 470MB as opposed to 361MB for zip's _fastest_ mode.

Really, that's not acceptable for me in the context of git.

Besides, if you change the compression algorithm, you will have to 
support legacy clients by _recompressing_ with zlib on the fly.  Which 
most likely would make Sisyphos grin watching those servers.
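
To make the legacy-client cost concrete: for every object served to an 
old client, the server would have to undo the new on-disk compression 
and then pay for a full zlib compression pass. A hypothetical sketch -- 
`fast_compress`/`fast_decompress` are placeholders for an LZO/UCL/FastLZ 
codec, stubbed here with zlib's fastest level so the snippet runs:

```python
import zlib

def fast_compress(data: bytes) -> bytes:
    # Placeholder for a hypothetical faster on-disk codec.
    return zlib.compress(data, 1)

def fast_decompress(blob: bytes) -> bytes:
    return zlib.decompress(blob)

def serve_legacy_client(stored_blob: bytes) -> bytes:
    """Per-object work for an old client: decompress the new on-disk
    format, then recompress with zlib for the legacy wire format."""
    raw = fast_decompress(stored_blob)
    return zlib.compress(raw)  # the expensive step, paid per request

obj = b"mock object payload (not a real git object)"
stored = fast_compress(obj)
assert zlib.decompress(serve_legacy_client(stored)) == obj
```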

Ciao,
Dscho
