Hi,

On Monday, 18 September 2006 19:58, Stefan Seyfried wrote:
> I did some crude benchmarking on suspend write / read speed. I instrumented
> suspend.c and resume.c, basically 2-liners like
> 
> int start=time(0);
>  .... write image...
> int end=time(0); printf("%d ", end-start);getchar();
> 
> This gives only 1 second granularity of course, but this was enough for me.
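
For reference, a self-contained version of that kind of crude timing code
could look roughly like this (just a sketch; write_image() is a stand-in for
the actual writeout code in suspend.c, not a real function from it):

#include <stdio.h>
#include <time.h>

/* Stand-in for the real image writeout in suspend.c; replace it with the
 * code that is actually being measured. */
static void write_image(void)
{
        /* ... write the image ... */
}

int main(void)
{
        time_t start = time(NULL);   /* 1-second granularity, as above */
        write_image();
        time_t end = time(NULL);

        printf("%ld sec\n", (long)(end - start));
        getchar();                   /* keep the number on the screen */
        return 0;
}
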
> 
> The testing machine was a hp compaq nx5000, Pentium M 1600 MHz, 512 MB RAM,
> 8 MB shared graphics memory. The hard drive is a 40 GB Toshiba 2.5"; the
> machine reads 23.5 MB/sec from reiserfs and writes 17.5 MB/sec to reiserfs
> (with dd oflag=direct ...).
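
(The exact dd invocations aren't quoted above; figures like these usually come
from something along the lines of

  dd if=/dev/zero of=/mnt/test bs=1M count=256 oflag=direct
  dd if=/mnt/test of=/dev/null bs=1M count=256 iflag=direct

for the write and read test respectively, with the test file on the reiserfs
partition in question. The file name and count used here are made up.)
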
> 
> I tried different combinations of the available options as well as the
> "writeout in 1% intervals instead of 20%" patch. I also tried limiting the
> image to 234045849 bytes, which is exactly 45% of the total memory reported
> by free on this machine (507912 kB). The idea was that there would be a bit
> more memory free for write buffering if the image was smaller than half of
> the memory.
> 
> Results (I did more than one test each and took one of the "normal" results):
> 
>                       image size      write time      read time
>                       (# pages)
> 
> no compression,       54620           14 sec
> no early writeout     60080           25 sec
> 
> no compression,       54873            9 sec
> early writeout (20%)  61546           19 sec
> 
> no compression,       54727           10 sec          9 sec
> early writeout (1%)   60050           15 sec
> 
> compression           55851           18 sec          5 sec
> no early writeout     62382           25 sec          6 sec
> 
> compression           55638           18 sec          5 sec
> early writeout (1%)   62057           20 sec
> 
> ----
> 
> I was too lazy to do the correct math and transfer all the results, but
> there is one obvious result from my (unscientific) research:
> 
> - compression does not buy me much. Resuming is 5 seconds faster; suspending,
>   however, is not.

I'd say this is in agreement with the LZF documentation I read.  It says that
_decompression_ should be (almost) as fast as a "bare" read, and there is much
less data to read if the image is compressed.  However, _compression_ takes
time, which offsets the gain resulting from the decreased amount of data to
write.
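
As a quick sanity check with the numbers above (assuming 4 KB pages, and that
decompression itself adds little, as the documentation suggests): an
uncompressed image of ~54727 pages is roughly 224 MB, which at the 23.5 MB/sec
measured with dd comes to about 9-10 seconds, so the uncompressed read looks
disk-limited. The 5-6 second reads with compression would then correspond to
pulling only about 120-140 MB off the disk, i.e. LZF shrinking the image to
roughly half its size.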

> - early writeout is faster during suspend; the gain is bigger without
>   compression (there is more data buffered for the final sync, which can take
>   quite some time).

Well, that makes sense to me.  With compression the early writeout has less of
an effect, because the drive has more time to actually write the data out while
the CPU is busy compressing.
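
The same back-of-the-envelope math for the write side (again assuming 4 KB
pages): the ~60000-page images are around 245 MB, which at 17.5 MB/sec is
about 14 seconds of pure disk time. The 15 seconds for uncompressed data with
1% early writeout is essentially at that limit, while the 25 seconds without
early writeout suggests the drive sits idle for part of the run and the rest
gets flushed in the final sync.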

> - early writeout is as fast with 1% steps as it is with 20% steps. It does
>   not really matter in my tests (this is why I did not retry compression with
>   20% steps).

This is what we wanted to verify. ;-)

> - making the image slightly smaller than half the RAM size of the machine
>   gives a huge boost (see the top 2 numbers: if the write time scaled with the
>   image size, it would have been 14 vs 16 seconds, not 14 vs 25).

If the size of the image is 45% of the RAM size, the bio layer can only use
about 10% of RAM at most, since both the image and the original pages it was
copied from stay resident while the image is written out.  If you decrease it
to 40% of the RAM size, the amount of RAM available to the bio layer doubles.
I think this explains the speedup you have observed quite well. :-)
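
Plugging in the actual numbers (assuming 4 KB pages): RAM is about 520 MB and
the default image of ~60080 pages is about 246 MB, so with both the image and
the original pages resident only around 30 MB is left over, while the capped
image of ~54620 pages (about 224 MB) leaves roughly 70 MB, i.e. more than
twice as much room for buffering the writeout.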

Greetings,
Rafael


-- 
You never change things by fighting the existing reality.
                R. Buckminster Fuller
