Hi!

> > > I'd say this is in agreement with the LZF documentation I read.  It says
> > > the _decompression_ should be (almost) as fast as a "bare" read, and
> > > there's much less data to read if they are compressed.  However,
> > > _compression_ takes time which offsets the gain resulting from the
> > > decreased amount of data to write.
> > 
> > Then Nigel is doing something clever and we do something stupid.
> 
> Well, I don't think we do anything stupid.  Of course you're free to review
> the code anyway. ;-)
> 
> Nigel may be using another version of the LZF algorithm which is optimized
> for speed.  I didn't experiment with libLZF too much, so we're just using the
> default settings.  Still, AFAIR it is configurable to some extent.

Maybe we can just ask Nigel? ;-).
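The asymmetry described above (decompression almost free, compression eating into the write-time savings) is easy to measure.  A rough sketch below; there's no standard liblzf binding for Python, so zlib at its fastest level stands in for LZF here -- the compress-slower-than-decompress shape of the result is the same in kind, even if the absolute numbers differ:

```python
# Rough timing sketch of the compression asymmetry discussed above.
# zlib level 1 is only a stand-in for LZF (no stdlib LZF binding);
# this is NOT the suspend utilities' actual code path.
import time
import zlib

data = bytes(range(256)) * 4096  # ~1 MiB of compressible test data

t0 = time.perf_counter()
compressed = zlib.compress(data, 1)  # fastest setting, closest in spirit to LZF
t1 = time.perf_counter()
restored = zlib.decompress(compressed)
t2 = time.perf_counter()

assert restored == data
print(f"ratio:      {len(compressed) / len(data):.2f}")
print(f"compress:   {t1 - t0:.5f} s")
print(f"decompress: {t2 - t1:.5f} s")
```

On typical hardware the decompress time comes out well under the compress time, which matches what the LZF documentation says about reads versus writes.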

> > > > - early writeout is as fast with 1% steps as it is with 20% steps.  It
> > > >   does not really matter in my tests (this is why I did not retry
> > > >   compression with 20% steps).
> > > 
> > > This is what we wanted to verify. ;-)
> > 
> > That's for one machine; we'd probably need to test more machines.
> 
> Yes, certainly.

I'd go for 1% steps. If someone finds it slows his machine down, he's
the one that needs to do the benchmarking.
                                                                Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

_______________________________________________
Suspend-devel mailing list
Suspend-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/suspend-devel
