Hi.

I did some crude benchmarking of suspend write/read speed. I instrumented
suspend.c and resume.c with what are basically two-liners like:

time_t start = time(NULL);
/* ... write image ... */
time_t end = time(NULL);
printf("%ld sec\n", (long)(end - start));
getchar();  /* pause so the result stays visible */

This gives only one-second granularity of course, but that was enough for me.
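
If anyone needs sub-second resolution, a sketch along these lines with
clock_gettime() would do (illustration only, not what I used):

#include <stdio.h>
#include <time.h>

/* Illustration only: millisecond timestamps from a monotonic clock. */
static long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* usage:
 *	long long t0 = now_ms();
 *	... write image ...
 *	printf("%lld ms\n", now_ms() - t0);
 */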

The testing machine was an HP Compaq nx5000: Pentium M 1600 MHz, 512 MB RAM,
8 MB shared graphics memory. The hard drive is a 40 GB Toshiba 2.5"; the
machine reads 23.5 MB/s from reiserfs and writes 17.5 MB/s to it (measured
with dd oflag=direct ...).
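
Something like the following would reproduce those numbers (file name and
count are made up for illustration; only the direct-I/O flag is from my
actual invocation):

dd if=/dev/zero of=/mnt/test bs=1M count=512 oflag=direct    # write speed
dd if=/mnt/test of=/dev/null bs=1M count=512 iflag=direct    # read speed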

I tried different combinations of the available options, as well as the
"writeout in 1% intervals instead of 20%" patch. I also tried limiting the
image to 234045849 bytes, which is exactly 45% of the total memory reported
by free on this machine (507912 kB). The idea was that a bit more memory
would be free for write buffering if the image was smaller than half of
the memory.
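
Spelled out (45% was just an arbitrary "comfortably below half" choice):

  507912 kB * 1024  = 520101888 bytes total RAM
  520101888 * 0.45  = 234045849.6  ->  234045849 bytes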

Results (I ran each test more than once and took one of the "normal"
results; in each pair of lines below, the first run had the image capped
at 45% of RAM, the second had no cap):

                        limit     image size (pages)  write time  read time
no compression,         45% cap   54620               14 sec
no early writeout       none      60080               25 sec

no compression,         45% cap   54873                9 sec
early writeout (20%)    none      61546               19 sec

no compression,         45% cap   54727               10 sec      9 sec
early writeout (1%)     none      60050               15 sec

compression,            45% cap   55851               18 sec      5 sec
no early writeout       none      62382               25 sec      6 sec

compression,            45% cap   55638               18 sec      5 sec
early writeout (1%)     none      62057               20 sec

----

I was too lazy to do the exact math and transfer all the results, but a few
things are obvious from my (unscientific) research:

- Compression does not buy me much. Resuming is about 5 seconds faster;
  suspending, however, is not.
- Early writeout makes suspending faster; the gain is bigger without
  compression (more data sits in the buffers for the final sync, which can
  take quite some time). See the sketch after this list.
- Early writeout is as fast with 1% steps as with 20% steps; it does not
  really matter in my tests (which is why I did not retry compression with
  20% steps).
- Making the image slightly smaller than half of the machine's RAM gives a
  huge boost (see the top two write times: if the difference were merely
  proportional to image size, it would have been 14 vs. ~16 seconds, not
  14 vs. 25).
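
To illustrate what I mean by early writeout, here is a minimal sketch of
the idea (not the actual suspend.c code; write_page() and the layout are
made up):

#include <unistd.h>

#define PAGE_SIZE 4096

/* Made-up stand-in for the real "write one page of the image" step. */
static int write_page(int fd, const void *buf)
{
	return write(fd, buf, PAGE_SIZE) == PAGE_SIZE ? 0 : -1;
}

/* Write the image and push dirty buffers to disk every `step` percent
 * instead of leaving everything for one big final sync. */
static int write_image(int fd, const char *image, unsigned long nr_pages,
		       int step)
{
	unsigned long i, chunk = nr_pages * step / 100;

	for (i = 0; i < nr_pages; i++) {
		if (write_page(fd, image + i * PAGE_SIZE))
			return -1;
		if (chunk && (i + 1) % chunk == 0)
			fdatasync(fd);	/* early writeout */
	}
	fsync(fd);			/* the final sync is now much cheaper */
	return 0;
}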

So much for my short test; I hope this makes sense to someone :-)
-- 
Stefan Seyfried
QA / R&D Team Mobile Devices        |              "Any ideas, John?"
SUSE LINUX Products GmbH, Nürnberg  | "Well, surrounding them's out." 
