> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Haudy Kazemi
> 
> Your remaining space can be configured as slices.  These slices can be
> added directly to a second pool without any redundancy.  If any drive
> fails, that whole non-redundant pool will be lost.  

For clarification:  In the above description, you're creating a stripe.
zpool create secondpool deviceA deviceB deviceC
(350G + 850G + 1350G)

According to the strictest definition, that's not technically a stripe, but
only because the ZFS implementation supersedes the simple stripe and renders
it obsolete.  So this is what we'll commonly call a stripe in ZFS.

When we say "stripe" in ZFS, we really mean:  Configure the disk controller
(if you have a raid controller card) not to do any hardware raid.  The disk
controller then reports to the OS that it has just a bunch of disks (jbod),
and the OS has the option of doing software raid (striping, mirroring,
raidz, etc).  In most ZFS cases, the OS doing software raid is smarter than
the hardware doing hardware raid, because the OS has intimate knowledge of
both the filesystem and the blocks on disk, while the hardware only has
knowledge of blocks on disk.  That lets the OS perform optimizations that
would otherwise be impossible in hardware.
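To make the distinction concrete, here's a rough sketch of the software-raid
choices ZFS gives you once the controller is in JBOD mode.  The device names
(c0t0d0 etc.) are hypothetical placeholders; substitute your own:

```shell
# All software raid, all done by ZFS -- the controller does nothing fancy.

# "Stripe" (no redundancy): data spread across all devices, any failure
# loses the whole pool.
zpool create tank c0t0d0 c0t1d0 c0t2d0

# Mirror: each block written to both devices; survives one failure.
zpool create tank mirror c0t0d0 c0t1d0

# raidz (single parity, similar in spirit to RAID-5): survives one failure.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

# Inspect the resulting layout.
zpool status tank
```

Because ZFS owns both the filesystem and the raid layer, it can do things a
hardware controller can't, such as verifying checksums end-to-end and
repairing bad blocks from a redundant copy on read.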

But I digress.  In the OS, if you simply add devices to a pool in the manner
described above (zpool create ...), then you're implementing software raid,
and it's no longer what you would normally call JBOD.  In reality, this
configuration shares some of the characteristics of a concatenation set and
a stripe set, but again, the ZFS implementation makes both of those
obsolete, so it's not strictly either one.  We call it a stripe.


> In a JBOD arrangement, however, some files might still be
> complete, but I don't believe ZFS supports JBOD-style non-redundant
> pools.  For most people that is not a big deal, as part of the point of
> ZFS is to focus on data integrity and performance, neither of which is
> offered by JBOD (as it is still ruined by single device failures, it is
> just that it is easier to carve files out of a JBOD than a broken
> RAID).

For clarification:  I believe you're using the term JBOD when you really
mean stripe.  It's conceivable and understandable to call this jbod, because
at the hardware level it is jbod, but that's not the convention in ZFS.  In
ZFS, we'll call this a stripe.

If we were going to call something JBOD in ZFS land, it would be a separate
pool on each separate device.  That, we would call jbod.
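In other words, something like this sketch (device and pool names are
hypothetical):

```shell
# JBOD-style in ZFS: one independent pool per device.  Losing a device
# loses only that one pool; the others remain intact and complete.
zpool create pool0 c0t0d0
zpool create pool1 c0t1d0
zpool create pool2 c0t2d0
```

That's the property the earlier message was after: a single device failure
ruins only a fraction of your data rather than the whole set.  The tradeoff
is that each pool is its own namespace with its own free space, so you give
up the single large filesystem a striped pool provides.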



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss