Re: [zfs-discuss] Layout for multiple large streaming writes.

2007-03-13 Thread Carisdad

Thanks to everyone who replied to my question.  Your input is very helpful.

To clarify, I was concerned more with MTTDL than performance.  With 
either of the 7+2 or 10+2 layouts, I can achieve far more throughput 
than is available to me via the network.  Running tests from memory on 
the system, I can push >550MB/s to the drives, but as of now I only 
have a 1Gb/s network interface on the box.  I should be able to add 
another 1Gb/s link shortly, though that is still far less than I can 
drive the disks at.  The major concern was weighing the increased 
probability of data loss given more drives in the RAID set against 
having spares available in the array, given a 4-6hr drive replacement 
window, 24x7.
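To make that tradeoff concrete, here is a rough back-of-the-envelope 
sketch of the MTTDL comparison.  The drive MTTF figure (500,000 hours) 
is my own assumption, not something from this thread, and the standard 
triple-failure approximation below ignores that a hot spare would 
effectively shorten the repair window for the 10+2 layout:

```python
# Rough MTTDL estimate for double-parity (RAIDZ2) groups.
# Assumptions (illustrative only): drive MTTF of 500,000 hours, and a
# 6-hour replacement/resilver window (MTTR, the worst case of 4-6 hrs).
# Standard approximation: a RAIDZ2 group of n drives loses data only on
# three overlapping failures, so
#   MTTDL_group ~ MTTF^3 / (n*(n-1)*(n-2) * MTTR^2)
# and a pool of g independent groups is roughly MTTDL_group / g.

def mttdl_raidz2(n_per_group, n_groups, mttf_hours, mttr_hours):
    group = mttf_hours**3 / (
        n_per_group * (n_per_group - 1) * (n_per_group - 2)
        * mttr_hours**2
    )
    return group / n_groups

MTTF = 500_000.0   # hours, assumed
MTTR = 6.0         # hours, assumed manual-swap window

layout_7p2  = mttdl_raidz2(9, 10, MTTF, MTTR)   # 10 x (7+2), no spares
layout_10p2 = mttdl_raidz2(12, 7, MTTF, MTTR)   # 7 x (10+2), 6 spares

print(f"10 x 7+2 : {layout_7p2:.3e} hours")
print(f"7 x 10+2 : {layout_10p2:.3e} hours")
print(f"ratio    : {layout_7p2 / layout_10p2:.2f}x")
```

Under these assumptions the smaller 7+2 groups come out ahead by 
roughly 1.8x, but since MTTDL scales with 1/MTTR^2, a hot spare that 
starts resilvering immediately instead of waiting 4-6 hrs for a manual 
swap would narrow or reverse that gap.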


For Richard,
   The drives are Seagate 500GB SATA drives (not sure of the exact 
model), in an EMC Clariion CX3 enclosure.  There are 6 shelves of 15 
drives, with each drive presented as a raw LUN to the server.  They are 
attached to a pair of dedicated 4Gb/s fabrics.


It was interesting to test the 7+2 and 10+2 layouts with ZFS versus a 
3+1 hardware RAID running on the array.  Using hardware RAID we saw a 
~2% performance improvement.  But we figured the improved MTTDL, and 
being able to detect and recover from write/read errors with ZFS, was 
well worth the 2% difference.


One last question: the link from przemol 
(http://sunsolve.sun.com/search/document.do?assetkey=1-9-88385-1) 
references a qlc.conf parameter, but we are running Emulex cards (emlxs 
driver).  Is there similar tuning that can be done with those?


Thanks again!

-Andy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Layout for multiple large streaming writes.

2007-03-12 Thread przemolicc
On Mon, Mar 12, 2007 at 09:34:22AM +0100, Robert Milkowski wrote:
> Hello przemolicc,
> 
> Monday, March 12, 2007, 8:50:57 AM, you wrote:
> 
> ppf> On Sat, Mar 10, 2007 at 12:08:22AM +0100, Robert Milkowski wrote:
> >> Hello Carisdad,
> >> 
> >> Friday, March 9, 2007, 7:05:02 PM, you wrote:
> >> 
> >> C> I have a setup with a T2000 SAN attached to 90 500GB SATA drives 
> >> C> presented as individual luns to the host.  We will be sending mostly 
> >> C> large streaming writes to the filesystems over the network (~2GB/file)
> >> C> in 5/6 streams per filesystem.  Data protection is pretty important, but
> >> C> we need to have at most 25% overhead for redundancy.
> >> 
> >> C> Some options I'm considering are:
> >> C> 10 x 7+2 RAIDZ2 w/ no hotspares
> >> C> 7 x 10+2 RAIDZ2 w/ 6 spares
> >> 
> >> C> Does any one have advice relating to the performance or reliability to
> >> C> either of these?  We typically would swap out a bad drive in 4-6 hrs and
> >> C> we expect the drives to be fairly full most of the  time ~70-75% fs 
> >> C> utilization.
> >> 
> >> On x4500 with a config: 4x 9+2 RAID-z2 I get ~600MB/s logical
> >> (~700-800 with redundancy overhead). It's somewhat jumpy but it's a
> >> known bug in zfs...
> >> So in your config, assuming host/SAN/array is not a bottleneck,
> >> you should be able to write at least two times more throughput.
> 
> ppf> Look also at:
> ppf> http://sunsolve.sun.com/search/document.do?assetkey=1-9-88385-1
> ppf> where you have a way to increase the I/O and application performance on
> ppf> T2000.
> 
> 
> I was talking about x4500 not T2000.

But Carisdad mentioned T2000.

Regards
przemol




Re[2]: [zfs-discuss] Layout for multiple large streaming writes.

2007-03-12 Thread Robert Milkowski
Hello przemolicc,

Monday, March 12, 2007, 8:50:57 AM, you wrote:

ppf> On Sat, Mar 10, 2007 at 12:08:22AM +0100, Robert Milkowski wrote:
>> Hello Carisdad,
>> 
>> Friday, March 9, 2007, 7:05:02 PM, you wrote:
>> 
>> C> I have a setup with a T2000 SAN attached to 90 500GB SATA drives 
>> C> presented as individual luns to the host.  We will be sending mostly 
>> C> large streaming writes to the filesystems over the network (~2GB/file)
>> C> in 5/6 streams per filesystem.  Data protection is pretty important, but
>> C> we need to have at most 25% overhead for redundancy.
>> 
>> C> Some options I'm considering are:
>> C> 10 x 7+2 RAIDZ2 w/ no hotspares
>> C> 7 x 10+2 RAIDZ2 w/ 6 spares
>> 
>> C> Does any one have advice relating to the performance or reliability to
>> C> either of these?  We typically would swap out a bad drive in 4-6 hrs and
>> C> we expect the drives to be fairly full most of the  time ~70-75% fs 
>> C> utilization.
>> 
>> On x4500 with a config: 4x 9+2 RAID-z2 I get ~600MB/s logical
>> (~700-800 with redundancy overhead). It's somewhat jumpy but it's a
>> known bug in zfs...
>> So in your config, assuming host/SAN/array is not a bottleneck,
>> you should be able to write at least two times more throughput.

ppf> Look also at:
ppf> http://sunsolve.sun.com/search/document.do?assetkey=1-9-88385-1
ppf> where you have a way to increase the I/O and application performance on
ppf> T2000.


I was talking about x4500 not T2000.

-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
                                 http://milek.blogspot.com



Re: [zfs-discuss] Layout for multiple large streaming writes.

2007-03-11 Thread przemolicc
On Sat, Mar 10, 2007 at 12:08:22AM +0100, Robert Milkowski wrote:
> Hello Carisdad,
> 
> Friday, March 9, 2007, 7:05:02 PM, you wrote:
> 
> C> I have a setup with a T2000 SAN attached to 90 500GB SATA drives 
> C> presented as individual luns to the host.  We will be sending mostly 
> C> large streaming writes to the filesystems over the network (~2GB/file)
> C> in 5/6 streams per filesystem.  Data protection is pretty important, but
> C> we need to have at most 25% overhead for redundancy.
> 
> C> Some options I'm considering are:
> C> 10 x 7+2 RAIDZ2 w/ no hotspares
> C> 7 x 10+2 RAIDZ2 w/ 6 spares
> 
> C> Does any one have advice relating to the performance or reliability to
> C> either of these?  We typically would swap out a bad drive in 4-6 hrs and
> C> we expect the drives to be fairly full most of the  time ~70-75% fs 
> C> utilization.
> 
> On x4500 with a config: 4x 9+2 RAID-z2 I get ~600MB/s logical
> (~700-800 with redundancy overhead). It's somewhat jumpy but it's a
> known bug in zfs...
> So in your config, assuming host/SAN/array is not a bottleneck,
> you should be able to write at least two times more throughput.

Look also at:
http://sunsolve.sun.com/search/document.do?assetkey=1-9-88385-1
where you have a way to increase the I/O and application performance on
T2000.

przemol




Re: [zfs-discuss] Layout for multiple large streaming writes.

2007-03-09 Thread Robert Milkowski
Hello Carisdad,

Friday, March 9, 2007, 7:05:02 PM, you wrote:

C> I have a setup with a T2000 SAN attached to 90 500GB SATA drives 
C> presented as individual luns to the host.  We will be sending mostly 
C> large streaming writes to the filesystems over the network (~2GB/file)
C> in 5/6 streams per filesystem.  Data protection is pretty important, but
C> we need to have at most 25% overhead for redundancy.

C> Some options I'm considering are:
C> 10 x 7+2 RAIDZ2 w/ no hotspares
C> 7 x 10+2 RAIDZ2 w/ 6 spares

C> Does any one have advice relating to the performance or reliability to
C> either of these?  We typically would swap out a bad drive in 4-6 hrs and
C> we expect the drives to be fairly full most of the  time ~70-75% fs 
C> utilization.

On an x4500 with a 4x 9+2 RAID-Z2 config I get ~600MB/s logical
(~700-800MB/s with redundancy overhead).  It's somewhat jumpy, but
that's a known bug in ZFS...
So in your config, assuming the host/SAN/array is not a bottleneck,
you should be able to write at least twice that throughput.


-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
                                 http://milek.blogspot.com



[zfs-discuss] Layout for multiple large streaming writes.

2007-03-09 Thread Carisdad
I have a setup with a T2000 SAN-attached to 90 500GB SATA drives 
presented as individual LUNs to the host.  We will be sending mostly 
large streaming writes to the filesystems over the network (~2GB/file) 
in 5-6 streams per filesystem.  Data protection is pretty important, but 
we need to have at most 25% overhead for redundancy.


Some options I'm considering are:
   10 x 7+2 RAIDZ2 w/ no hotspares
   7 x 10+2 RAIDZ2 w/ 6 spares

Does anyone have advice relating to the performance or reliability of 
either of these?  We typically would swap out a bad drive in 4-6 hrs, 
and we expect the drives to be fairly full most of the time, at ~70-75% 
filesystem utilization.
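As a sanity check of the 25% redundancy budget, the arithmetic for the 
two candidate layouts can be sketched as below (the drive count and 
size are from the numbers above; the helper function itself is just 
illustrative):

```python
# Capacity and overhead check for two RAIDZ2 layouts of 90 x 500GB
# drives, counting both parity drives and hot spares as "overhead".

DRIVES = 90
DRIVE_GB = 500

def layout(data, parity, groups, spares):
    """Return (usable GB, total overhead fraction) for a layout of
    `groups` RAIDZ2 vdevs of (data+parity) drives plus hot spares."""
    used = (data + parity) * groups + spares
    assert used == DRIVES, "layout must account for all 90 drives"
    usable = data * groups * DRIVE_GB
    overhead = (DRIVES * DRIVE_GB - usable) / (DRIVES * DRIVE_GB)
    return usable, overhead

for name, cfg in {"10 x 7+2, no spares": (7, 2, 10, 0),
                  "7 x 10+2, 6 spares":  (10, 2, 7, 6)}.items():
    usable, overhead = layout(*cfg)
    print(f"{name}: {usable/1000:.1f} TB usable, "
          f"{overhead:.1%} total overhead")
```

Interestingly, both layouts land at the same 35TB usable and ~22.2% 
total overhead (under the 25% cap); each dedicates 20 of the 90 drives 
to redundancy, split 20 parity vs. 14 parity + 6 spares.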


Thanks in advance for any input.

-Andy