Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
> If I'm not mistaken, a 3-way mirror is not implemented behind the scenes
> in the same way as a 3-disk raidz3. You should use a 3-way mirror instead
> of a 3-disk raidz3.

RAIDZ2 requires at least 4 drives, and RAIDZ3 requires at least 5 drives. But, yes, a 3-way mirror is implemented tota…
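
For readers comparing the two layouts, a minimal sketch of how each pool would be created, assuming hypothetical device names (c0t0d0 and so on):

  # 3-way mirror: a single vdev holding three full copies of the data
  zpool create tank mirror c0t0d0 c0t1d0 c0t2d0

  # RAIDZ2 at the 4-drive minimum noted above: data plus two parity blocks per stripe
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0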

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
…there are no writes in the queue). Perhaps you are saying that they act like stripes for bandwidth purposes, but not for read ops/sec? -Rob

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
> I may have RAIDZ reading wrong here. Perhaps someone could clarify.
>
> For a read-only workload, does each RAIDZ drive act like a stripe,
> similar to RAID5/6? Do they have independent queues?
>
> It would seem that there is no escaping read/modify/write operations
> for sub-block writes…

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
RAIDZ has to rebuild data by reading all drives in the group and reconstructing from parity. Mirrors simply copy a drive. Compare 3TB mirrors vs. 9x 3TB RAIDZ2:

  Mirrors: read 3TB, write 3TB
  RAIDZ2:  read 24TB, reconstruct data on CPU, write 3TB

In this case, RAIDZ is at least 8x slower to resilver…
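
As an illustration only (device names are hypothetical), the resilver is kicked off the same way for either layout; the difference is in how much data each layout has to read back:

  zpool replace tank c2t5d0 c2t6d0   # swap a failed disk for a new one
  zpool status -v tank               # watch resilver progress and the estimated time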

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Rob Cohen
I may have RAIDZ reading wrong here. Perhaps someone could clarify. For a read-only workload, does each RAIDZ drive act like a stripe, similar to RAID5/6? Do they have independent queues? It would seem that there is no escaping read/modify/write operations for sub-block writes, forcing the RA…

Re: [zfs-discuss] Large scale performance query

2011-08-05 Thread Rob Cohen
Generally, mirrors resilver MUCH faster than RAIDZ, and you only lose redundancy on that stripe, so combined, you're much closer to RAIDZ2 odds than you might think, especially with hot spare(s), which I'd recommend. When you're talking about IOPS, each stripe can support 1 simultaneous user.
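
A minimal sketch of the mirrors-plus-hot-spare layout recommended here, with hypothetical device names:

  zpool create tank \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0 \
    spare  c0t4d0

  # additional spares can be added to the pool later
  zpool add tank spare c0t5d0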

Re: [zfs-discuss] Large scale performance query

2011-08-04 Thread Rob Cohen
Try mirrors. You will get much better multi-user performance, and you can easily split the mirrors across enclosures. If your priority is performance over capacity, you could experiment with n-way mirrors, since more mirrors will load balance reads better than more stripes.
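
As a sketch of splitting mirrors across enclosures (enclosure A devices shown as c1t*, enclosure B as c2t*; all names hypothetical), so that each half of a mirror lives in a different chassis:

  zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0

  # for an n-way mirror, add a third device to each vdev, e.g.:
  #   mirror c1t0d0 c2t0d0 c3t0d0
  # more copies per vdev give ZFS more disks to balance reads across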

Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2011-01-10 Thread Rob Cohen
As a follow-up, I tried a SuperMicro enclosure (SC847E26-RJBOD1). I have 3 sets of 15 drives. I got the same results when I loaded the second set of drives (15 to 30). Then, I tried changing the LSI 9200's BIOS setting for max INT 13 drives from 24 (the default) to 15. From then on, the Supe…

Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Rob Cohen
Markus, I'm pretty sure that I have the MD1000 plugged in properly, especially since the same connection works on the 9280 and Perc 6/e. It's not in split mode. Thanks for the suggestion, though.

[zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Rob Cohen
I have 15x SAS drives in a Dell MD1000 enclosure, attached to an LSI 9200-16e. This has been working well. The system is booting off of internal drives, on a Dell SAS 6ir. I just tried to add a second storage enclosure, with 15 more SAS drives, and I got a lockup during Loading Kernel. I go…

[zfs-discuss] l2arc_noprefetch

2010-11-21 Thread Rob Cohen
When running real data, as opposed to benchmarks, I notice that my L2ARC stops filling, even though the majority of my reads are still going to primary storage. I'm using 5 SSDs for L2ARC, so I'd expect to get good throughput, even with sequential reads. I'd like to experiment with disabling t…
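
The tunable in the subject line, l2arc_noprefetch, is the knob being discussed. A sketch of how it is commonly changed on OpenSolaris-era kernels (setting it to 0 allows prefetched/sequential reads to be cached in L2ARC):

  # live change, reverts at reboot
  echo l2arc_noprefetch/W0t0 | mdb -kw

  # persistent across reboots, add to /etc/system
  set zfs:l2arc_noprefetch=0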

Re: [zfs-discuss] zfs record size implications

2010-11-10 Thread Rob Cohen
Thanks, Richard. Your answers were very helpful.

[zfs-discuss] zfs record size implications

2010-11-04 Thread Rob Cohen
I have read some conflicting things regarding the ZFS record size setting. Could you guys verify/correct these statements: (These reflect my understanding, not necessarily the facts!) 1) The ZFS record size in a zvol is the unit that dedup happens at. So, for a volume that is shared to an…
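
For reference, a short sketch of where the block-size knobs live (pool and dataset names are hypothetical): filesystems use the recordsize property, while zvols use volblocksize, which is fixed when the volume is created; dedup, when enabled, works at the granularity of those blocks.

  zfs create -o recordsize=8k tank/db              # filesystem: recordsize can be changed later
  zfs create -V 100G -o volblocksize=8k tank/lun0  # zvol: volblocksize is set once, at creation
  zfs get recordsize,volblocksize tank/db tank/lun0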

Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
Thanks, Ian. If I understand correctly, the performance would then drop to the same level as if I set them up as separate volumes in the first place. So, I get double the performance for 75% of my data, and equal performance for 25% of my data, and my L2ARC will adapt to my working set across b…

[zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
I have a couple of drive enclosures:

  15x 450GB 15k RPM SAS
  15x 600GB 15k RPM SAS

I'd like to set them up like RAID10. Previously, I was using two hardware RAID10 volumes, with the 15th drive as a hot spare, in each enclosure. Using ZFS, it could be nice to make them a single volume, so that I could…
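
A sketch of one way to build that single pool (device names are hypothetical, and the remaining mirror pairs are elided with "..."): ZFS stripes across all of the mirror vdevs even though the two drive sizes differ, with the larger vdevs simply absorbing proportionally more data.

  zpool create tank \
    mirror c1t0d0 c1t1d0   mirror c1t2d0 c1t3d0   ... \
    mirror c2t0d0 c2t1d0   mirror c2t2d0 c2t3d0   ... \
    spare  c1t14d0 c2t14d0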