Hello Robert,

Thursday, August 24, 2006, 4:44:26 PM, you wrote:

RM> Hello Robert,

RM> Thursday, August 24, 2006, 4:25:16 PM, you wrote:

RM>> Hello Roch,

RM>> Thursday, August 24, 2006, 3:37:34 PM, you wrote:


R>>> So this is the interesting data, right?


R>>>   1. 3510, RAID-10 using 24 disks from two enclosures, random
R>>>      optimization, 32KB stripe width, write-back, one LUN

R>>>   1.1 filebench/varmail for 60s

R>>>      a. ZFS on top of LUN, atime=off

R>>>       IO Summary:      490054 ops 8101.6 ops/s, (1246/1247 r/w) 39.9mb/s, 291us cpu/op, 6.1ms latency
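
(For anyone wanting to reproduce 1.1a, the setup was roughly the following;
the LUN device name and pool name below are just placeholders, and the exact
filebench invocation depends on the version installed.)

    # one pool on top of the single HW RAID-10 LUN, atime disabled
    zpool create tank c4t0d0        # placeholder LUN device
    zfs set atime=off tank

    # interactive filebench session, varmail workload for 60 seconds
    filebench> load varmail
    filebench> set $dir=/tank
    filebench> run 60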




R>>>   2. 3510, 2x (4x RAID-0(3 disks)), 32KB stripe width,
R>>>      random optimization, write-back. Four R0 groups are in one enclosure
R>>>      and assigned to the primary controller, and the other four R0 groups
R>>>      are in the other enclosure and assigned to the secondary controller.
R>>>      Then RAID-10 is created with mirror groups between the controllers.
R>>>      24 disks total, as in #1.


R>>>    2.1 filebench/varmail 60s

R>>>      a. ZFS RAID-10, atime=off

R>>>       IO Summary:      379284 ops 6273.4 ops/s, (965/965 r/w) 30.9mb/s, 314us cpu/op, 8.0ms latency
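
(The ZFS RAID-10 in 2.1a was built roughly like this; the cXtYdZ names are
placeholders, one per R0 LUN, with each mirror pair spanning the two
controllers.)

    # four mirror pairs, each made of one LUN from the primary
    # controller (c2*) and one from the secondary (c3*)
    zpool create tank \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0 \
        mirror c2t2d0 c3t2d0 \
        mirror c2t3d0 c3t3d0
    zfs set atime=off tank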


R>>> Have you tried 1MB stripes, especially in case 2?

R>>> -r

RM>> I did try 128KB and 256KB stripe widths - the results were the same
RM>> (difference of less than 5%).

RM>> I haven't tested 1MB because the maximum for the 3510 is 256KB.




RM> I've just tested with a 4KB stripe width - the same result.



And now I have tried creating two stripes on the 3510, each with 12 disks
and a 32KB stripe width, each assigned to a different controller.

Then I mirrored them using ZFS (see the sketch below).

The result is ~6300 IOPS for the same test.
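
(The device names below are placeholders for the two 12-disk HW stripes,
one per controller; roughly how that pool was put together.)

    # single ZFS mirror over the two hardware stripes
    zpool create tank mirror c2t0d0 c3t0d0
    zfs set atime=off tank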


Looks like I'll go with HW RAID and ZFS as the file system, for several
reasons.


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

