On Wed, Sep 16, 2009 at 08:02:35PM +0300, Markus Kovero wrote:

> It's possible to do 3-way (or more) mirrors too, so you may achieve better 
> redundancy than raidz2/3

I understand there's almost no additional performance penalty for raidz3
over raidz2 in terms of CPU load. Is that correct?

According to some recent posts on this list, SSDs for ZIL/L2ARC don't
bring that much benefit with raidz2/raidz3, at least if I'm writing a lot
and not hitting the cache very often.

How much drive space am I losing with mirrored pools versus raidz3? IIRC
with RAID 10 it's only about 10% over RAID 6, which is why I went for
RAID 10 in my 14-drive SATA (WD RE4) setup. (My back-of-envelope numbers
for the 24-drive case are further down.)

Let's assume I want to fill a 24-drive Supermicro chassis with 1 TByte
WD Caviar Black or 2 TByte RE4 drives, and use 4x X25-M 80 GByte
2nd gen Intel consumer drives, mirrored, each pair as ZIL/L2ARC
for the 24 SATA drives behind them. Let's assume CPU is not an issue,
with dual-socket Nehalems and 24 GByte RAM or more. There are applications
packaged in Solaris containers running on the same box, however.
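
To put rough numbers on the mirror vs. raidz space question for this box,
here is the back-of-envelope I am working from; it only counts parity and
mirror overhead and ignores metadata and slop space, so please correct me
if the model is too naive:

  # Back-of-envelope usable capacity: mirrors vs. raidz2/raidz3 on 24 drives.
  # Only counts parity/mirror overhead; ignores metadata, slop, right-sizing.
  DRIVE_TB = 2.0   # 2 TByte RE4; use 1.0 for the Caviar Black option
  DRIVES = 24

  def mirror_usable_tb(drives, way=2, size_tb=DRIVE_TB):
      """N-way mirrors: one drive's worth of usable space per mirror vdev."""
      return drives // way * size_tb

  def raidz_usable_tb(drives, width, parity, size_tb=DRIVE_TB):
      """raidz1/2/3: each 'width'-drive vdev loses 'parity' drives to parity."""
      return drives // width * (width - parity) * size_tb

  print("12x 2-way mirror  : %4.1f TB usable" % mirror_usable_tb(DRIVES))
  print("3x 8-drive raidz2 : %4.1f TB usable" % raidz_usable_tb(DRIVES, 8, 2))
  print("2x 12-drive raidz3: %4.1f TB usable" % raidz_usable_tb(DRIVES, 12, 3))

If that model is right, mirrors cost me about 12 TByte (a third of the
raidz capacity) with 2 TByte drives, but as I understand it they buy a
lot more random-read IOPS, 12 vdevs instead of 2 or 3.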

Let's say the workload is mostly many parallel streams (hundreds to
thousands simultaneously, some continuous, some bursty), each writing data
to the storage system. However, a few clients will be issuing database-like
read queries, potentially across the entire data store.

With the above workload, is raidz2/raidz3 right out, and will I need
mirrored pools?

How would you lay out the pools for the above workload, assuming 24 SATA
drives per chassis (24-48 TBytes raw storage) and an 80 GByte SSD each for
ZIL/L2ARC? Is that too little, or would 160 GByte work better?
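
On the L2ARC sizing part of that question, this is the rough arithmetic I
am going by; the roughly 200 bytes of ARC header per L2ARC record is a
number I am assuming rather than one I have verified, so please correct it
if it is off:

  # Rough L2ARC sizing check: each record cached in L2ARC keeps a header in ARC.
  # HEADER_BYTES is my assumption (roughly 200 bytes per record); please correct.
  HEADER_BYTES = 200

  def l2arc_header_ram_gb(l2arc_gb, recordsize_kb):
      """RAM eaten by ARC headers for a fully populated L2ARC of this size."""
      records = l2arc_gb * 2.0**30 / (recordsize_kb * 2**10)
      return records * HEADER_BYTES / 2.0**30

  for recordsize_kb in (8, 128):
      for l2arc_gb in (80, 160):
          print("%3d GByte L2ARC at %3dK records: %4.1f GByte of ARC for headers"
                % (l2arc_gb, recordsize_kb,
                   l2arc_header_ram_gb(l2arc_gb, recordsize_kb)))

With 24 GByte of RAM or more that header overhead looks tolerable even at
small record sizes, so my guess is that 80 vs. 160 GByte is more about how
much of the working set the database-like reads would actually revisit.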

Thanks lots.
 
-- 
Eugen* Leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
