Neal Pollack wrote:
Hi:

What is the most common practice for allocating (choosing) the two disks used for
the boot drives, in a zfs root install, for the mirrored rpool?

The docs for the Thumper, and many blogs, always point at cfgadm slots 0 and 1, which are sata3/0 and sata3/4, and which most often map to c5t0d0 and c5t4d0.
But those are on the same controller (yes, I've read all that before).
And these seem to be the ones that BIOS agrees to boot from.

However, the guide below, in this section:
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide#ZFS_Configuration_Example_.28x4500_with_raidz2.29

mentions putting the two boot disks for the ZFS root on different controllers:
zpool create mpool mirror c5t0d0s0 c4t0d0s0

I'll assume they meant "rpool" instead of "mpool". I had thought that the BIOS will only agree to boot from the slot 0 and slot 1 disks, which are on the same controller.
Does anyone know which doc is correct, and what two disk devices
are typically being used for the zfs root these days?
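For reference, a minimal sketch of the usual x86 procedure, assuming the c5t0d0/c5t4d0 device names from the x4500 docs (adjust to your system): create the mirrored root pool on slice 0, then install the GRUB bootblocks on both halves of the mirror so the BIOS can boot from either disk.

```shell
# Create the mirrored root pool (device names are illustrative):
zpool create rpool mirror c5t0d0s0 c5t4d0s0

# On x86, install GRUB on BOTH sides of the mirror, so either
# disk remains bootable if the other fails:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0
```

Whether the BIOS will actually offer the second disk as a boot device still depends on the BIOS itself, as noted below.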

It depends on your BIOS.  AFAIK, there is no way for the BIOS to
tell the installer which disks are valid boot disks.  On OBP (SPARC)
systems, by contrast, the installer can determine which disks are
available for booting.


If I stick with the x4500 docs and use c5t0d0 and c5t4d0, both
can be booted from the BIOS, but it makes laying out the remaining
raidz2 data pool a little trickier: with 7 sets of 6-disk raidz2,
I can't get every disk of each vdev onto a different controller.

But if I use the example from the SolarisInternals.com guide above,
with the two ZFS root pool disks on different controllers, it becomes
easier to allocate the remaining vdevs for the "7 sets of 6-disk raidz2",
but I can't see how the BIOS could select both of those boot devices.
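To make the layout question concrete, here is a hedged sketch of the controller-spanning approach: each 6-disk raidz2 vdev takes one disk from each of the six controllers. The target numbers below are illustrative only, and assume c5t0d0/c5t4d0 are reserved for the root pool.

```shell
# Hypothetical data pool: one disk per controller (c0..c5) in each
# raidz2 vdev, so losing a controller costs each vdev only one disk:
zpool create tank \
  raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t1d0 \
  raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t2d0
# ...and so on for the remaining raidz2 vdevs
```

With the two root disks on the same controller (c5), the remaining 46 disks don't divide evenly this way, which is the bookkeeping headache described above.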

Do you think it matters for the availability of data which controller is used?
For availability, the ideal in a system like the x4500 would be a single
controller, but it has six, because nobody makes a 48-port SATA controller.
In other words, don't worry about controllers on a machine like the x4500
when you are considering data availability.  Do worry about the disks:
use double parity if you can, single parity otherwise.
-- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
