> > Yep; it's an argument about the Titanic deck chair
> > arrangement.
> 
> Excuse me, but the same argument can be carried over
> to the separate pools for root FS and data scenario:
> most systems that will be running Solaris (as it
> picks up volume) are single systems with internal
> storage, usually two disks. What is the point of
> having separate pools if they reside on the same
> physical devices? If one or more of those devices
> experiences catastrophic failure, all pools,
> regardless of their separation, will be affected.
> 
> The whole point of a ZFS pool as Jeff Bonwick
> imagined it was to use ALL the storage capacity in
> your system optimally, without having to resort to
> discrete and often inaccurate sizing.
> 
> To advocate separate pools for "root" and "data" is
> to defeat and go against the very idea that Jeff
> Bonwick was trying to promote in the first place.

That only holds true on a system with two or fewer disks, where the
most redundancy you can have is simple mirroring (give or take ditto
blocks for ZFS metadata).  For anything larger than that, different
arrangements may be preferable.
Even for a two-disk system, unless it's a moldy oldie, the disks may
be large enough for two separate (mirrored) pools to be desirable: one
large enough to hold the OS and co-packaged software (maybe even with
room for a second copy for LiveUpgrade), and one for non-OS data.
However much room the OS uses, you'd want to reserve that much plus
some margin for growth across updates, doubled if you use LiveUpgrade.
For the rest, you might be willing to risk non-redundant storage if
capacity matters more than avoiding data loss; or you might want a
separate pool consisting of a mirrored vdev concatenated with a
mirrored or raidz vdev on external disks.  (SATA makes externals with
good performance cheaper and less ugly, I would think; FW400 and USB
2.0 are usually tolerable but not always fast enough, and while FW800
might be a different story where it's an option, a SATA card and JBOD
might not cost any more than an FW800 card and JBOD anyway.)
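
To make that concrete, such a layout might look roughly like this in
zpool terms (the device names, slice numbers, and the "dpool" name are
purely illustrative, and root pools have restrictions of their own,
e.g. they have to live on slices):

  # mirrored root pool on slices of the two internal disks
  zpool create rpool mirror c0t0d0s0 c1t0d0s0

  # mirrored data pool on the remaining slices
  zpool create dpool mirror c0t0d0s7 c1t0d0s7

  # later, grow the data pool with a raidz vdev on external disks;
  # -f is needed because the redundancy levels don't match
  zpool add -f dpool raidz c2t0d0 c2t1d0 c2t2d0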

Now, with disks below, say, one (or two) hundred GB, or fewer than two
disks, I'd tend to buy the notion that one slice for OS _and_ data
makes more sense.
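
In that case the whole thing collapses to pretty much a single command
(device name illustrative again), with swap in its own slice:

  # one pool holding both OS and data
  zpool create rpool c0t0d0s0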

As yet another thought, even some big servers have only two internal
disks; the rest of the storage is on an array or SAN, and in the
latter case it may occasionally go offline while the internal drives
are doing fine.

So maybe there are at least three cases:

  1) one disk, or two small disks

  2) two decent-sized internal disks

  3) three or more disks

each of which should ideally have different defaults.  The first
sounds like a laptop or small desktop to me; the second, like a
desktop or server with only two internal drives; and the third, like
everything else (whichever way of describing them would be better
understood).

Indeed, it seems to me that there could be more than three possible
"profiles" for typical layouts, depending on whether one wants LiveUpgrade,
redundant OS, redundant user data, etc.

It also seems to me that, given the three categories above plus
typical usage schemes, it ought to be possible for the installer to
recognize the number and size of the available disks, ask a few
questions about how one wants to use the space, and present a smarter
default generally suitable for that purpose.
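
A minimal sketch of what that selection logic might look like,
assuming the installer already knows the disk count and the size of
the smallest disk (the 100 GB threshold and all the names here are
made up for illustration):

  #!/bin/sh
  # hypothetical installer helper: suggest a default layout from
  # the number of disks ($1) and the smallest disk size in GB ($2)
  NDISKS=$1
  SIZE_GB=$2

  if [ "$NDISKS" -le 1 ]; then
      echo "case 1: one pool for OS and data, plus swap"
  elif [ "$NDISKS" -eq 2 ]; then
      if [ "$SIZE_GB" -lt 100 ]; then
          echo "case 1: one mirrored pool for OS and data, plus swap"
      else
          echo "case 2: mirrored OS pool (with LiveUpgrade headroom)"
          echo "        plus a mirrored data pool"
      fi
  else
      echo "case 3: mirrored OS pool, raidz data pool on the rest"
  fi

The real installer would then ask the follow-up questions
(LiveUpgrade?  redundancy for user data?) before committing to
anything.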

That seems a bit more useful than having a single-disk layout
consisting of either two slices (zpool + swap) or three (OS zpool +
user-data zpool + swap).

I don't know that I'd carry it too far, though.  If someone wants to do
something nontrivial, then almost by definition they're going to have to
_think_ a little...
 
 