Hi,

I can't help with your issue, but I can give some comments on your testing:

On Tuesday 23 October 2012 12:12:26 Tom Fernandes wrote:
> tom@hydra04 [1522]:~$ sudo dd if=/dev/zero of=/mnt/test-io/test.dd bs=1M
> count=5000
> 5000+0 records in
> 5000+0 records out
> 5242880000 bytes (5.2 GB) copied, 13.8065 s, 380 MB/s

That test is rubbish. When you want to test with dd (which only gives you the
write rate for sequential access) you have to use oflag=sync, otherwise you
are measuring the caching of the filesystem and disk layer. You should also
test with a multiple of the 4M block size, otherwise writing 1MB means the
other 3MB of each block has to be read first.
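A minimal sketch of what I mean, reusing the path from your mail (untested
here, adjust as needed):

  sudo dd if=/dev/zero of=/mnt/test-io/test.dd bs=4M count=1250 oflag=sync

That still writes about 5GB, but in 4M blocks and synchronously, so you see
the disk and not the page cache.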

> tom@hydra04 [1523]:~$ sudo nc 10.0.0.2 1234 < /mnt/test-io/test.dd
> # Summary:
> # Piped    4.88 GB in 00h00m22.37s:  223.45 MB/second
> ----------------------------------------------------------------------

I use iperf for this; the command line is easier.
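Roughly like this, assuming 10.0.0.2 is the receiving node as in your nc test
(untested here):

  # on the receiving node (10.0.0.2)
  iperf -s
  # on hydra04
  iperf -c 10.0.0.2 -t 30

That gives you the raw TCP throughput between the nodes without touching the
disks at all.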

> ----- bonnie on local partition on hydra04 - hydra05 is the same ---------
> tom@hydra04 [1498]:~$ sudo mount /dev/vg0/test-io  /mnt/test-io/; cd
> /mnt/test-io/; time sudo bonnie -f -u root; cd -;  sudo umount /mnt/test-io/
> Using uid:0, gid:0.
> Writing intelligently...done
> Rewriting...done
> Reading intelligently...done
> start 'em...done...done...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> hydra04      96704M           495967  68 159106  20           546991  28 727.7  11
> Latency                         203ms     940ms               183ms     186ms
> Version  1.96       ------Sequential Create------ --------Random Create--------
> hydra04             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16  5425   6 +++++ +++ 20766  23 +++++ +++ +++++ +++ +++++ +++
> Latency               478us     707us     685us     339us      18us     724us
> 1.96,1.96,hydra04,1,1350881245,96704M,,,,495967,68,159106,20,,,546991,28,727.7,11,16,,,,,5425,6,+++++,+++,20766,23,+++++,+++,+++++,+++,+++++,+++,,203ms,940ms,,183ms,186ms,478us,707us,685us,339us,18us,724us

Another interesting test is dbench, which simulates Windows sessions with
access from Word, Excel and MS Access. The results of both bonnie and dbench
are far more realistic than those of a simple dd, because long sequential
reads are a special case known mostly from audio/video playback. And even
there several streams may be mixed, so several different files are accessed
interleaved at different positions.
For office, email, database and webserver workloads it's never about reading
big chunks sequentially; it's always about reading a bit here and a bit there
with low latency (i.e. short seek times), not about high throughput in raw
bandwidth.
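A rough invocation, assuming dbench's usual options and the mount point from
your mail (untested here, adjust client count and runtime):

  sudo dbench -D /mnt/test-io -t 60 4

That runs 4 simulated clients for 60 seconds on the test filesystem and
reports throughput plus the maximum latency per operation.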

Have fun,

Arnold
