On Tuesday, 19 September 2006 04:15, Stefan Seyfried wrote:
> On Mon, Sep 18, 2006 at 10:33:42PM +0200, Rafael J. Wysocki wrote:
>
> > > I was too lazy to do the correct math and transfer all the results,
> > > but there is one obvious result from my (unscientific) research:
> > >
> > > - compression does not buy me much. Resuming is 5 seconds faster;
> > >   suspending, however, is not.
> >
> > I'd say this is in agreement with the LZF documentation I read. It says
> > that _decompression_ should be (almost) as fast as a "bare" read, and
> > there is much less data to read when it is compressed. However,
> > _compression_ takes time, which offsets the gain resulting from the
> > decreased amount of data to write.
>
> Then Nigel is doing something clever and we do something stupid.
Well, I don't think we do anything stupid. Of course you're free to review
the code anyway. ;-)
Nigel may be using another version of the LZF algorithm, one optimized for
speed. I didn't experiment with libLZF too much, so we're just using the
default settings. Still, AFAIR it is configurable to some extent.
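For reference, AFAIR the tuning is compile-time only: HLOG and the
VERY_FAST/ULTRA_FAST switches in lzfP.h. The calls themselves boil down to
something like this (just a sketch, not our actual code; compress_page()
is a made-up helper):

#include <string.h>
#include <lzf.h>

/*
 * Compress one page-sized block; lzf_compress() returns 0 if the result
 * would not fit in out_size, in which case we store the block
 * uncompressed (out must be able to hold page_size bytes for that).
 */
static unsigned int compress_page(const void *page, unsigned int page_size,
                                  void *out, unsigned int out_size)
{
    unsigned int clen = lzf_compress(page, page_size, out, out_size);

    if (clen == 0) {
        /* incompressible: fall back to a plain copy */
        memcpy(out, page, page_size);
        return page_size;
    }
    return clen;
}

Passing an out_size a bit smaller than page_size is what makes the
fallback trigger for data that would not actually shrink.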
> Nigel gets a nice factor-of-two speedup from LZF compression on suspend and
> resume.
>
> > > - early writeout makes suspend faster; the gain is bigger without
> > >   compression (there is more data buffered for the final sync, which
> > >   can take quite some time).
> >
> > Well, that makes sense to me. With compression, early writeout has less
> > effect, because the drive has more time to actually write the data while
> > the CPU is busy compressing.
> >
> > > - early writeout is as fast with 1% steps as it is with 20% steps. It
> > >   does not really matter in my tests (this is why I did not retry
> > >   compression with 20% steps).
> >
> > This is what we wanted to verify. ;-)
>
> For one machine, that is. We'd probably need to test more machines.
Yes, certainly.
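Coming back to the early writeout itself: it boils down to the pattern
below. This is just a sketch under simplifying assumptions (one flat
buffer, write_image() is a made-up name), not the real s2disk code:

#include <unistd.h>

/*
 * Instead of one big sync at the end of the image write, push the data
 * to disk every step_percent, so the drive works in parallel with
 * whatever still keeps the CPU busy.
 */
static int write_image(int fd, const char *data, size_t total,
                       unsigned int step_percent)
{
    size_t done = 0, last_sync = 0;
    size_t step = total / 100 * step_percent;

    while (done < total) {
        ssize_t n = write(fd, data + done, total - done);

        if (n < 0)
            return -1;
        done += n;
        if (done - last_sync >= step) {
            fdatasync(fd);  /* start the writeout early */
            last_sync = done;
        }
    }
    return fdatasync(fd);   /* sync whatever is left */
}

The step size only changes how often fdatasync() runs, which would explain
why 1% and 20% steps performed the same in your tests.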
> > > - making the image slightly smaller than half the RAM size of the
> > >   machine gives a huge boost (see the top 2 numbers; if it was
> > >   relative to the image size, it would have been 14 vs 16 seconds,
> > >   not 14 vs 25).
> >
> > If the size of the image is 45% of RAM, the bio layer can only use 10%
> > of RAM at most, because the atomic snapshot has to fit in memory
> > alongside the data it duplicates. If you decrease the image to 40% of
> > RAM, the amount of RAM available to the bio layer doubles. I think this
> > explains the speedup you have observed quite well. :-)
>
> It was 45% vs. "unlimited", which is probably "almost 50%", but I also
> thought that the additional buffering would help.
I'm not sure what you mean here ...
> Another thing I forgot in my report: the times with the limited image
> were much more consistent (I did ~5 runs of each configuration), while
> the variations were much bigger in the unlimited case. Maybe we should
> default to a little less than 50% of total RAM for the image size
> parameter?
It _is_ reasonable to set the image size slightly below 50% of RAM, but I'm
reluctant to make it a default.
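For experiments it can also be set at run time through the kernel's
/sys/power/image_size knob (a byte count). A hypothetical helper, with 45%
of RAM as an arbitrary example value:

#include <stdio.h>
#include <unistd.h>

/*
 * Cap the swsusp image at 45% of physical RAM by writing a byte count
 * to /sys/power/image_size. The 45% figure is only an example.
 */
int main(void)
{
    long long ram = (long long)sysconf(_SC_PHYS_PAGES) *
                    sysconf(_SC_PAGE_SIZE);
    FILE *f = fopen("/sys/power/image_size", "w");

    if (!f) {
        perror("/sys/power/image_size");
        return 1;
    }
    fprintf(f, "%lld\n", ram / 100 * 45);
    return fclose(f) ? 1 : 0;
}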
Greetings,
Rafael
--
You never change things by fighting the existing reality.
R. Buckminster Fuller