Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Luke,

Tuesday, August 8, 2006, 4:48:38 PM, you wrote:

LL Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the
LL prefetch logic?

LL These are great results for random I/O, I wonder how the sequential I/O
LL looks?

LL Of course you'll not get great results for sequential I/O on the 3510 :-)



filebench/singlestreamread v440



1. UFS, noatime, HW RAID5 6 disks, S10U2

 70MB/s

2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)

 87MB/s

3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2

 130MB/s
 

4. ZFS, atime=off, SW RAID-Z 6 disks, snv_44

 133MB/s

 

PS. With software RAID-Z I initially got about 940MB/s - well, after the
files were created they were all cached, so ZFS almost didn't touch the
disks :)

So I changed the filesize to be well over the server's memory size; the
results above are with that larger filesize.
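
For reference, the read test is filebench's singlestreamread personality.
A minimal sketch of what that workload looks like (the directory, filesize
and run length below are illustrative, not my exact profile):

  set $dir=/pool/testdir
  set $filesize=8g
  set $iosize=1m

  # one large file, preallocated before the measured run
  define file name=largefile1,path=$dir,size=$filesize,prealloc,reuse

  # a single process/thread doing sequential 1MB reads through the file
  define process name=filereader,instances=1
  {
    thread name=filereaderthread,memsize=10m,instances=1
    {
      flowop read name=seqread1,filename=largefile1,iosize=$iosize
    }
  }

  run 60

The important knob is $filesize - once it is well above RAM, the ARC can
no longer serve the reads from cache and the disks actually get exercised.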




filebench/singlestreamwrite v440

1. UFS, noatime, HW RAID-5 6 disks, S10U2

70MB/s

2. ZFS, atime=off, HW RAID-5 6 disks, S10U2 (the same lun as in #1)

52MB/s

3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2

148MB/s

4. ZFS, atime=off, SW RAID-Z 6 disks, snv_44

147MB/s


So sequential writing with ZFS on HW RAID-5 is actually worse than with UFS.
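
For completeness, the write personality has the same shape as the read
sketch above, just with a write flowop - again only a sketch with
illustrative parameters:

  # same file definition as the read sketch, but sequential 1MB writes
  define process name=filewriter,instances=1
  {
    thread name=filewriterthread,memsize=10m,instances=1
    {
      flowop write name=seqwrite1,filename=largefile1,iosize=$iosize
    }
  }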


-- 
Best regards,
Robert  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Robert,

On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:

 1. UFS, noatime, HW RAID5 6 disks, S10U2
  70MB/s
 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
  87MB/s
 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
  130MB/s
 4. ZFS, atime=off, SW RAID-Z 6 disks, snv_44
  133MB/s

Well, the UFS results are miserable, but the ZFS results aren't good either -
I'd expect 250-350MB/s from a 6-disk RAID5 with read() blocksizes from
8kB to 32kB.

Most of my ZFS experiments have been with RAID10, but there were some
massive improvements to sequential I/O with the fixes I mentioned - I'd
expect these results show that they aren't in snv_44.
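
Rough arithmetic behind that expectation (assuming ~50-70MB/s of sequential
bandwidth per spindle, which is typical for this class of FC drive):

  6-disk RAID5 -> 5 data spindles' worth of bandwidth
  5 x 50MB/s = 250MB/s,  5 x 70MB/s = 350MB/s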

- Luke




Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Matthew,

Tuesday, August 8, 2006, 7:25:17 PM, you wrote:

MA On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
 filebench/singlestreamread v440
 
 1. UFS, noatime, HW RAID5 6 disks, S10U2
  70MB/s
 
 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
  87MB/s
 
 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
  130MB/s
  
 4. ZFS, atime=off, SW RAID-Z 6 disks, snv_44
  133MB/s

MA FYI, streaming read performance is improved considerably by Mark's
MA prefetch fixes, which are in build 45.  (However, as mentioned, you will
MA soon run into the bandwidth of a single Fibre Channel connection.)

I will probably re-test with snv_45 (waiting for the Solaris Express release).

FC is not that big a problem - if I find enough time I will just add
more FC cards.
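
For context, the single-link ceiling Matthew mentioned, assuming the 3510's
2Gbit/s FC host ports:

  2Gbit/s x 8/10 (8b/10b encoding) = 1.6Gbit/s ~= 200MB/s per link
  250-350MB/s of sequential reads / 200MB/s per link -> at least 2 links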


-- 
Best regards,
Robert  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com
