On Tue, 2010-06-15 at 18:33 +0200, Arve Paalsrud wrote:

> 
> What about the ZIL bandwidth in this case? I mean, could I stripe across 
> multiple devices to be able to handle higher throughput? Otherwise I would 
> still be limited to the performance of the unit itself (155 MB/s).
>  

I think so.  Btw, I've gotten better performance than that with my
driver (not sure about the production driver).  I seem to recall about
220 MB/sec.  (I was basically driving the PCIe x1 bus to its limit.)
This was with large transfers (64 KB, IIRC).  Shrinking the job
size down, I could get up to 150K IOPS with 512-byte jobs.  (That high
an IOPS rate is unrealistic for ZFS -- with ZFS, the bus bandwidth limit
comes into play long before you start hitting IOPS limits.)

One issue of course is that each of these units occupies a PCIe x1 slot.
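As a sketch of the striping approach (pool and device names here are
illustrative, not from your setup): ZFS stripes synchronous writes
across all top-level log vdevs, so adding more slog devices should
scale ZIL bandwidth, e.g.:

```shell
# Assumption: pool is named "tank"; c2t0d0/c3t0d0 are placeholder
# device names.  Two unmirrored log devices -- ZFS stripes ZIL
# writes across both, at the cost of redundancy and two slots.
zpool add tank log c2t0d0 c3t0d0

# Verify: both devices should appear under the "logs" section.
zpool status tank
```

(Use `log mirror dev1 dev2` instead if you want each slog mirrored
rather than striped.)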

On another note, if your dataset and usage requirements don't require
strict I/O flush/sync guarantees, you could probably get away without
any ZIL at all, and just use lots of RAM to get really good performance.
(You'd then disable the ZIL on filesystems that don't have this need.
This is a very new feature in OpenSolaris.)  Of course, you don't want
to do this for datasets where loss of the data would be tragic.  (But
it's ideal for situations such as filesystems used for compiling, etc. --
where the data being written can be easily regenerated in the event of a
failure.)

        -- Garrett


> > >
> > > DDT requirements for dedupe on 16k blocks should be about 640GB when
> > main pool are full (capacity).
> > 
> > Dedup is not always a win, I think.  I'd look hard at your data and
> > usage to determine whether to use it.
> > 
> >     -- Garrett
> 
> -Arve


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
