So this is the interesting data, right?


  1. 3510, RAID-10 using 24 disks from two enclosures, random
     optimization, 32KB stripe width, write-back, one LUN

  1.1 filebench/varmail for 60s

     a. ZFS on top of LUN, atime=off

      IO Summary: 490054 ops, 8101.6 ops/s, (1246/1247 r/w), 39.9mb/s,
      291us cpu/op, 6.1ms latency
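If I read config #1 right, ZFS just sits on the single hardware RAID-10 LUN, so the pool setup would be roughly this (device name is a placeholder, not from the original post):

```shell
# Config #1 sketch: one 3510 LUN (hardware RAID-10, 24 disks), ZFS on top.
# c2t40d0 is a hypothetical device name for the exported LUN.
zpool create tank c2t40d0

# atime=off as in test 1.1.a
zfs set atime=off tank
```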




  2. 3510, 2x (4x RAID-0 (3 disks)), 32KB stripe width,
     random optimization, write-back. Four R0 groups are in one enclosure
     and assigned to the primary controller; the other four R0 groups are in
     the other enclosure and assigned to the secondary controller. RAID-10 is
     then created in ZFS by mirroring groups between the controllers.
     24 disks total, as in #1.


   2.1 filebench/varmail 60s

     a. ZFS RAID-10, atime=off

      IO Summary: 379284 ops, 6273.4 ops/s, (965/965 r/w), 30.9mb/s,
      314us cpu/op, 8.0ms latency
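And config #2, if I follow, exports the eight RAID-0 LUNs and lets ZFS do the mirroring, pairing one LUN from each controller. A sketch, again with hypothetical device names (c2* on the primary controller, c3* on the secondary):

```shell
# Config #2 sketch: 8 hardware RAID-0 LUNs (3 disks each, 4 per controller).
# Each ZFS mirror vdev pairs one LUN from each controller/enclosure,
# so a mirror survives losing either enclosure.
zpool create tank \
  mirror c2t40d0 c3t40d0 \
  mirror c2t40d1 c3t40d1 \
  mirror c2t40d2 c3t40d2 \
  mirror c2t40d3 c3t40d3

# atime=off as in test 2.1.a
zfs set atime=off tank
```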


Have you tried 1M stripes, especially in case 2?

-r

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
