2012-10-14 1:56, Ian Collins wrote:
On 10/13/12 22:13, Jim Klimov wrote:
2012-10-13 0:41, Ian Collins wrote:
On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
#1  It seems common, at least to me, that I'll build a server with
let's say, 12 disk slots, and we'll be using 2T disks or something
like that.  The OS itself only takes like 30G which means if I don't
partition, I'm wasting 1.99T on each of the first two disks.  As a
result, when installing the OS, I always partition rpool down to ~80G
or 100G, and I will always add the second partitions of the first
disks to the main data pool.
How do you provision a spare in that situation?
Technically - you can layout the spare disks similarly and attach
the partitions or slices as spares for pools.

I probably didn't make myself clear, so I'll try again!

Assume the intention is to get the most storage from your drives. If
you add the remainder of the space on the drives you partitioned for
the root pool to the main pool, giving a mix of device sizes in the
pool, how do you provision a spare?

Well, as long as the replacement device (drive, partition, slice)
dedicated to a pool is at least as big as any of its devices,
it can kick in as a hot or cold spare. So I guess you can have
a pool with 2x1.90TB + 8x2.0TB devices (likely mirrored in pairs,
or a wilder mix of mirrors and raidzN's), an L2ARC SSD and a 2TB spare.
If an rpool disk dies, the 2.0TB of space on the spare should be
enough to replace the 1.90TB component of the data pool.
You might have a harder time replacing the rpool part, though.
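To make that concrete, here is a sketch of such a mixed-size pool with
a whole-disk spare. Device names, slice numbers and the pool name are
hypothetical, and the commands are illustrative rather than a tested
recipe:

```shell
# Hypothetical layout: the two ~1.9TB leftover slices from the boot
# disks are mirrored together, the remaining 2TB disks are mirrored
# whole, and one 2TB disk is kept as a shared hot spare.
zpool create tank \
  mirror c0t0d0s1 c0t1d0s1 \
  mirror c0t2d0 c0t3d0 \
  mirror c0t4d0 c0t5d0 \
  spare  c0t6d0

# The 2TB spare is at least as big as every member, so it can replace
# either a 1.9TB slice or a 2TB whole disk; it cannot, however, stand
# in for the rpool slice of a failed boot disk.
zpool status tank
```

The key point the spare illustrates: ZFS only checks that the spare is
no smaller than the device it replaces, so one oversized spare can cover
a mix of vdev member sizes.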

Alternately, roughly following my approach #2, you can lay out
all of your disks in the same manner, say 0.1TB + 1.90TB. Two or
three of the small slices can form an rpool mirror, a couple more can
be swap, and the majority can form a raid10 or a raidzN on the known
faster sectors of the drives. You get a predictably faster pool
for scratch, incoming, database logs - whatever (as long as the
disks are not heavily utilized all the time, you *can* have
the performance boost on a smaller pool with shorter mechanical
seek travels and faster cylinders).

In particular, a hot spare laid out the same way can replace
components of both types of pools.
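A sketch of that uniform layout, again with hypothetical device names,
slice numbers and pool names (the rpool mirror itself would normally be
set up by the installer, so it is only noted in comments):

```shell
# All disks sliced identically: s0 ~100GB on the fast outer cylinders,
# s1 ~1.9TB for bulk data. Disks 0-1 carry the rpool mirror on their
# s0 slices (installer-created, not shown).

# Fast pool on the s0 slices of the remaining disks:
zpool create fast mirror c0t2d0s0 c0t3d0s0 mirror c0t4d0s0 c0t5d0s0

# Bulk pool on the big s1 slices of all six disks:
zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1 \
                        c0t3d0s1 c0t4d0s1 c0t5d0s1

# Disk 6 is sliced the same way, so its slices can serve as spares
# for both pools at once:
zpool add fast spare c0t6d0s0
zpool add tank spare c0t6d0s1
```

Because every disk has the same slice map, any slice of the spare disk
is a valid replacement for the corresponding slice of any failed disk.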

Most of the systems I have built this year are 2U boxes with 8 to 12
(2TB) drives.  I expect these are very common at the moment.  I use your
third option, but I tend to just create a big rpool mirror and add a
scratch filesystem rather than partitioning the drives.
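For what it's worth, the "big rpool plus scratch filesystem" variant is
just a dataset inside the root pool; a minimal sketch (dataset name and
properties are illustrative):

```shell
# A scratch dataset inside the (whole-disk, mirrored) root pool,
# instead of carving the disks into separate pools:
zfs create -o mountpoint=/scratch -o compression=on rpool/scratch

# Optionally cap its growth so scratch data can't starve the OS:
zfs set quota=1.5T rpool/scratch
```

The trade-off, as discussed below, is that all scratch writes then land
in the same pool the system boots from.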

Consider me old-school, but I believe that writes to a filesystem
(or to a pool, in the case of ZFS) are a source of corruption risk
during power glitches and the like. They are also a cause of higher
data fragmentation. When I have a chance, I prefer to keep apples
with apples - a relatively static rpool which gets written to
during package installations and updates, config changes and so
on; and one or more data pools for more active data lifecycles.
The rpool is too critical a component (regarding loss of service
during outages, such as the inability to boot and fix problems via
remote access) to add risks of its corruption, especially now
that we don't have failsafe boot options. As long as the system
boots and you have ssh to fix the data pools, you've kicked your
SLAs up a notch and reduced a few worries ;)

HTH, my 2c,
//Jim Klimov

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
