On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills <mi...@cc.umanitoba.ca> wrote:

> I realize that it is possible to configure more than one LUN per RAID
> group on the storage device, but doesn't ZFS assume that each LUN
> represents an independent disk, and schedule I/O accordingly?  In that
> case, wouldn't ZFS I/O scheduling interfere with I/O scheduling
> already done by the storage device?
>
> Is there any reason not to use one LUN per RAID group?

    My empirical testing confirms both the claim that ZFS random
read I/O (at the very least) scales linearly with the NUMBER of
vdevs, NOT the number of spindles, and the recommendation (I believe
from an Oracle white paper on using ZFS for Oracle databases) that
if you are using a "hardware" RAID device (with NVRAM write cache),
you should configure one LUN per spindle in the backend RAID set.

    In other words, if you build one zpool with a single 10 GB vdev
and another with two 5 GB vdevs (both coming from the same array and
RAID set), you get almost exactly twice the random read performance
from the 2x5 zpool as from the 1x10 zpool.
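
    For reference, the two layouts in that comparison can be built
along these lines. This is only a sketch, not the exact setup I used:
the pool and device names (c2t0d0 and so on) are placeholders for
whatever LUNs your array presents, and zpool create will of course
destroy whatever is on those devices.

#!/usr/bin/env python
# Sketch only: build the two pool layouts compared above.
# Device names are placeholders; substitute the LUNs your array presents.
import subprocess

# Layout 1: one 10 GB LUN -> a single top-level vdev.
subprocess.check_call(["zpool", "create", "test1x10", "c2t0d0"])

# Layout 2: two 5 GB LUNs from the same RAID set -> two top-level
# vdevs, which ZFS stripes across.
subprocess.check_call(["zpool", "create", "test2x5", "c2t1d0", "c2t2d0"])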

    Also, using a 2540 disk array set up as a 10-disk RAID6 (with 2
hot spares), you get substantially better random read performance
using 10 LUNs vs. 1 LUN. While inconvenient, this just reflects the
scaling of ZFS with the number of vdevs and not "spindles".

    I suggest performing your own testing to ensure you have the
performance to handle your specific application load.
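
    For the random read side, something along these lines is enough
for a rough comparison. This is a minimal Python sketch under my own
assumptions, not the harness I used: point it at a test file written
to each pool, and make that file much larger than RAM, or the numbers
will mostly reflect ARC cache hits rather than the vdevs underneath.

#!/usr/bin/env python
# Minimal random-read timer: seeks to random offsets in a file and
# reads small blocks, reporting reads per second.
import os, random, sys, time

def random_read_test(path, reads=10000, blocksize=8192):
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.time()
        for _ in range(reads):
            # Pick a random offset that leaves room for a full block read.
            offset = random.randrange(0, size - blocksize)
            os.lseek(fd, offset, os.SEEK_SET)
            os.read(fd, blocksize)
        elapsed = time.time() - start
    finally:
        os.close(fd)
    return reads / elapsed

if __name__ == "__main__":
    # Usage: random_read.py /test1x10/bigfile /test2x5/bigfile
    for path in sys.argv[1:]:
        print("%s: %.0f random reads/sec" % (path, random_read_test(path)))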

    Now, as to reliability, the hardware RAID array cannot detect
silent corruption of data the way the end-to-end ZFS checksums can.

-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players