The system that I built had 5 x 72GB SCA SCSI drives.  Just to keep my
own sanity, I decided to configure the fdisk partitioning identically
across all of the drives, so that each one has a 1GB slice and a 71GB
slice.
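
For what it's worth, the slice table can be cloned from the first drive
onto the others with fdisk(8); a sketch along these lines, where
/tmp/slices.conf is just a made-up scratch filename:

    fdisk -p da0 > /tmp/slices.conf
    fdisk -f /tmp/slices.conf -i da1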

The drives all have identical capacity, so the second 71GB slice ends up
the same on all of the drives.  I actually end up using glabel to create
a named unit of storage, so that I don't have to worry about getting
the drives inserted into the right slots.
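
The labels themselves are written with glabel(8); the original commands
aren't shown here, but for the first drive they would have looked
something like:

    glabel label boot0 da0s1
    glabel label zpool0 da0s2

and similarly for da1 through da4.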

I figured that 1GB wasn't too far off both for the swap partitions
(3 of 'em) and for the pair mirrored to boot from.

I haven't really directly addressed swapping in another drive of a
slightly different size, though I have spares, and I could always put a
larger drive in and create a slice of the right size.

It looks like this, with all of the slices explicitly named with glabel:

root@droid[41] # glabel status
            Name  Status  Components
     label/boot0     N/A  da0s1
    label/zpool0     N/A  da0s2
     label/boot1     N/A  da1s1
    label/zpool1     N/A  da1s2
     label/swap2     N/A  da2s1
    label/zpool2     N/A  da2s2
     label/swap3     N/A  da3s1
    label/zpool3     N/A  da3s2
     label/swap4     N/A  da4s1
    label/zpool4     N/A  da4s2

And the ZFS pool references the labeled slices:

root@droid[42] # zpool status
  pool: z
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        z                 ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/zpool0  ONLINE       0     0     0
            label/zpool1  ONLINE       0     0     0
            label/zpool2  ONLINE       0     0     0
            label/zpool3  ONLINE       0     0     0
            label/zpool4  ONLINE       0     0     0

errors: No known data errors
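
The pool creation isn't shown above, but a raidz2 pool over those labels
is created along these lines:

    zpool create z raidz2 label/zpool0 label/zpool1 label/zpool2 \
        label/zpool3 label/zpool4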

And swap lives on the 1GB slices of the other three drives:

root@droid[43] # swapinfo
Device          1024-blocks     Used    Avail Capacity
/dev/label/swap4     1044192        0  1044192     0%
/dev/label/swap3     1044192        0  1044192     0%
/dev/label/swap2     1044192        0  1044192     0%
Total               3132576        0  3132576     0%
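
Those come up automatically at boot from the fstab entries shown further
down; doing the same thing by hand is just swapon(8):

    swapon /dev/label/swap2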

This is the mirrored partition that the system actually boots from.  It
maps physically to da0s1 and da1s1.  The normal boot0 and boot1/boot2
stages and the loader operate as usual on da0s1a, which is really
/dev/mirror/boota:

root@droid[45] # gmirror status
       Name    Status  Components
mirror/boot  COMPLETE  label/boot0
                       label/boot1
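
Again, the original commands aren't shown, but assembling a mirror like
this would look something like the following, where the bsdlabel step
assumes the default label with a single 'a' partition:

    gmirror label -v boot label/boot0 label/boot1
    bsdlabel -w mirror/boot
    newfs /dev/mirror/boota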

root@droid[47] # df -t ufs
Filesystem          1024-blocks      Used    Avail Capacity  Mounted on
/dev/mirror/boota       1008582    680708   247188    73%    /bootdir

The UFS partition eventually ends up getting mounted on /bootdir:

root@droid[51] # cat /etc/fstab
# Device             Mountpoint  FStype  Options     Dump  Pass#
zfs:z/root           /           zfs     rw          0     0
/dev/mirror/boota    /bootdir    ufs     rw,noatime  1     1
/dev/label/swap2     none        swap    sw          0     0
/dev/label/swap3     none        swap    sw          0     0
/dev/label/swap4     none        swap    sw          0     0
/dev/acd0            /cdrom      cd9660  ro,noauto   0     0

But when /boot/loader on the UFS partition reads what it thinks is
/etc/fstab (the file that eventually ends up at /bootdir/etc/fstab), the
root file system that gets mounted is the ZFS filesystem at z/root:

root@droid[52] # head /bootdir/etc/fstab
# Device    Mountpoint  FStype  Options  Dump  Pass#
z/root      /           zfs     rw       0     0
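
The same root selection can also be pinned down explicitly in
/boot/loader.conf on the UFS partition; the equivalent settings (not
shown above, so this is an assumption) would be something like:

    zfs_load="YES"
    vfs.root.mountfrom="zfs:z/root"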

And /boot on the ZFS root is symlinked into the UFS filesystem, so it gets updated
when a make installworld happens:

root@droid[53] # ls -l /boot
lrwxr-xr-x  1 root  wheel  12 May  3 23:00 /boot@ -> bootdir/boot
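
Setting that up is just a relative symlink on the ZFS root (assuming no
/boot exists there yet):

    ln -s bootdir/boot /boot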

louie



On May 30, 2009, at 3:15 PM, Dan Naumov wrote:

Is the idea behind leaving 1GB unused on each disk to work around the
problem of potentially being unable to replace a failed device in a
ZFS pool because a 1TB replacement you bought actually has a lower
sector count than your previous 1TB drive (since the replacement
device has to be either of exact same size or bigger than the old
device)?

- Dan Naumov


On Sat, May 30, 2009 at 10:06 PM, Louis Mamakos <louie@transsys.com> wrote:
I built a system recently with 5 drives and ZFS. I'm not booting off a ZFS root, though it does mount a ZFS file system once the system has booted from a UFS file system. Rather than dedicate drives, I simply partitioned each
of the drives into a 1G partition

