On Mon, Sep 18, 2006 at 10:33:42PM +0200, Rafael J. Wysocki wrote:

> > I was too lazy to do the correct math and transfer all the results, but
> > there is one obvious result from my (unscientific) research:
> > 
> > - compression does not buy me much. Resuming is 5 seconds faster;
> >   suspending, however, is not.
> 
> I'd say this is in agreement with the LZF documentation I read.  It says the
> _decompression_ should be (almost) as fast as a "bare" read, and there's
> much less data to read if they are compressed.  However, _compression_
> takes time which offsets the gain resulting from the decreased amount of
> data to write.

Then Nigel is doing something clever and we do something stupid.
Nigel gets a nice factor-of-two speedup from LZF compression on suspend and
resume.
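A back-of-the-envelope model of the trade-off described above. All the
bandwidth numbers and the 0.5 compression ratio below are made-up assumptions,
not measurements from either implementation; the point is only the shape of the
result: when compression throughput is comparable to disk bandwidth, suspend
sees no gain, while resume (with nearly-free decompression) reads half the data.

```python
DISK_MB_S = 40.0        # sustained disk write/read bandwidth (assumed)
COMPRESS_MB_S = 80.0    # LZF compression throughput (assumed, CPU-bound)
DECOMPRESS_MB_S = 400.0 # LZF decompression throughput (assumed, much faster)
RATIO = 0.5             # compressed size / raw size (assumed)

def suspend_s(image_mb, compress):
    """Seconds to write the image, modeling compression and I/O as serial."""
    if not compress:
        return image_mb / DISK_MB_S
    # compression time offsets the saving from writing less data
    return image_mb / COMPRESS_MB_S + image_mb * RATIO / DISK_MB_S

def resume_s(image_mb, compress):
    """Seconds to read the image back; decompression is nearly free."""
    if not compress:
        return image_mb / DISK_MB_S
    return image_mb * RATIO / DISK_MB_S + image_mb / DECOMPRESS_MB_S

for f in (suspend_s, resume_s):
    print(f.__name__, round(f(500, False), 1), "vs", round(f(500, True), 1))
```

With these (invented) numbers a 500 MB image suspends in the same 12.5 s either
way, but resumes 5 s faster compressed, which matches the observed pattern.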

> > - early writeout is faster during suspend, the gain is bigger without
> >   compression (there is more buffered for the final sync which can take
> >   quite some time).
> 
> Well, that makes sense to me.  With compression the early writeout has less
> effect, because the drive has more time to actually write the data when the
> CPU is busy.
> 
> > - early writeout is as fast with 1% steps as with 20% steps. It does
> >   not really matter in my tests (which is why I did not retry compression
> >   with 20% steps).
> 
> This is what we wanted to verify. ;-)

Verified for one machine, at least; we'd probably need to test more machines.

> > - making the image slightly smaller than half the RAM size of the machine
> >   gives a huge boost (see the top two numbers; if it were relative to the
> >   image size, it would have been 14 vs. 16 seconds, not 14 vs. 25).
> 
> If the size of the image is 45% of RAM size, the bio layer can only use 10%
> of RAM (at most).  If you decrease it to 40% of RAM size, the amount of RAM
> available to the bio layer doubles.  I think this explains the speedup
> you have observed quite well. :-)

It was 45% vs. "unlimited" (which is probably "almost 50%"), but I also
thought that the additional buffering would help.
Another thing I forgot in my report: the times with the limited image were
much more consistent (I did ~5 runs of each configuration), while the
variations were much bigger in the unlimited case. Maybe we should default
the image size parameter to a little less than 50% of total RAM?
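The arithmetic behind Rafael's explanation can be written out explicitly. The
helper below is purely illustrative (not part of the suspend code): it assumes
the image plus its in-flight copy each occupy the image fraction of RAM, so the
bio layer is left with roughly 1 - 2 * fraction of RAM for buffering.

```python
def bio_headroom(image_fraction):
    """RAM fraction left for the bio layer, assuming two copies of the
    image must fit in RAM simultaneously (illustrative model only)."""
    return 1.0 - 2.0 * image_fraction

# Image at 45% of RAM leaves 10% for the bio layer;
# shrinking it to 40% leaves 20%, i.e. double the buffering headroom.
print(round(bio_headroom(0.45), 2))  # 0.1
print(round(bio_headroom(0.40), 2))  # 0.2
```

This is why a small reduction in image size (45% to 40%) can produce a
disproportionately large speedup: the bio layer's buffer space doubles.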
-- 
Stefan Seyfried                  \ "I didn't want to write for pay. I
QA / R&D Team Mobile Devices      \ wanted to be paid for what I write."
SUSE LINUX Products GmbH, Nürnberg \                    -- Leonard Cohen

_______________________________________________
Suspend-devel mailing list
Suspend-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/suspend-devel
