On 02/15/2011 11:15 AM, James Masson wrote:


I wouldn't use RAID-5/6.

Take a look at these NFS stats - my VM hosting workload is 90% write.
For my everyday VM workloads, the only time there are significant reads from
the VM shared storage is at VM boot time, and even then the VM host and
storage server hold significant parts of the VM disks in cache.


Server rpc stats:
calls      badcalls   badauth    badclnt    xdrcall
923254581   0          0          0          0

Server nfs v3:
null         getattr      setattr      lookup       access       readlink
32        0% 761649    0% 161       0% 461       0% 112220    0% 0         0%
read         write        create       mkdir        symlink      mknod
24842856  2% 838746432 90% 183       0% 9         0% 2         0% 0         0%
remove       rmdir        rename       link         readdir      readdirplus
120       0% 1         0% 73        0% 0         0% 0         0% 289       0%
fsstat       fsinfo       pathconf     commit
10248397  1% 32        0% 0         0% 48541412  5%
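
For the record, that 90% is just WRITE calls over total RPC calls. A quick
back-of-the-envelope check in Python (counters copied straight from the
nfsstat output above; the variable names are mine):

  # per-op counters from the nfsstat server output above
  calls = 923254581   # total RPC calls
  write = 838746432   # NFSv3 WRITE ops
  read  = 24842856    # NFSv3 READ ops

  print("write: %.1f%%" % (100.0 * write / calls))  # ~90.8%
  print("read:  %.1f%%" % (100.0 * read / calls))   # ~2.7%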


Yes, I guess a lot of the read I/O will be caught by the caches, so write performance becomes the main issue. I mostly threw the RAID-5 in there to have some basic redundancy on the individual nodes, so you don't have to do a full sync of the DRBD device when a single disk dies. RAID-10 seemed wasteful to me, but given today's disk prices and the fact that even with 1TB drives you'd still get 4TB of usable storage, it looks like the better option. In any case, I'm going to do some benchmarking on the setup once I get my hands on it, to get some hard numbers.
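
For reference, the capacity arithmetic I'm working from (a rough Python
sketch; I'm assuming eight disks per node here, which is where the 4TB
RAID-10 figure comes from):

  # usable capacity for n disks of size s (TB) at common RAID levels
  def usable(level, n, s):
      if level == "raid5":
          return (n - 1) * s   # one disk's worth of parity
      if level == "raid6":
          return (n - 2) * s   # two disks' worth of parity
      if level == "raid10":
          return (n // 2) * s  # mirrored pairs
      raise ValueError(level)

  for level in ("raid5", "raid6", "raid10"):
      print(level, usable(level, 8, 1), "TB")
  # raid5 7 TB, raid6 6 TB, raid10 4 TB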

Regards,
  Dennis
