I have an Oracle (née Sun) X4-2 server with identical 300GB SAS
drives.  I did an MBR ZFS install from the FreeBSD 10.1-RELEASE CD
and have it updated to p6:

  $ uname -a
  FreeBSD foo 10.1-RELEASE-p6 FreeBSD 10.1-RELEASE-p6 #0:
    Tue Feb 24 19:00:21 UTC 2015
    r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC
    amd64

  $ freebsd-version
  10.1-RELEASE-p6

The current ZFS setup is:

  $ zdb | grep ashift
              ashift: 12
              ashift: 12

  $ zpool status
    pool: bootpool
   state: ONLINE
    scan: resilvered 486M in 0h0m with 0 errors on Thu Mar 26 09:16:45  2015
  config:

          NAME                                                           STATE     READ WRITE CKSUM
          bootpool                                                       ONLINE       0     0     0
            diskid/DISK-001442CBEEKF%20%20%20%20%20%20%20%20KFHBEEKFs1a  ONLINE       0     0     0


    pool: zroot
   state: ONLINE
    scan: resilvered 200K in 0h0m with 0 errors on Wed Mar 25 10:51:36  2015
  config:

          NAME                                                           STATE     READ WRITE CKSUM
          zroot                                                          ONLINE       0     0     0
            diskid/DISK-001442CBEEKF%20%20%20%20%20%20%20%20KFHBEEKFs1d  ONLINE       0     0     0

  $ gpart show diskid/DISK-001442CBEEKF%20%20%20%20%20%20%20%20KFHBEEKF
  =>       63  585937437  diskid/DISK-001442CBEEKF%20%20%20%20%20%20%20%20KFHBEEKF  MBR  (279G)
           63  585937422      1  freebsd  [active]  (279G)
    585937485         15         - free -  (7.5K)

  $ gpart show diskid/DISK-001442CBEEKF%20%20%20%20%20%20%20%20KFHBEEKFs1
  =>        0  585937422  diskid/DISK-001442CBEEKF%20%20%20%20%20%20%20%20KFHBEEKFs1  BSD  (279G)
            0    4194304      1  freebsd-zfs  (2.0G)
      4194304    8388608      2  freebsd-swap  (4.0G)
     12582912  573354510      4  freebsd-zfs  (273G)

  [Why are the disk IDs padded out with %20s (encoded spaces)?]

Now I want to create another (sorta) matching setup, but this time want
to use labels and 4G (instead of 2G) for bootpool.

  # gpart create -s MBR da1
  # gpart add -t freebsd da1
  # gpart create -s BSD da1s1
  # gpart add -s 4G -t freebsd-zfs da1s1
  # gpart add -s 4G -t freebsd-swap da1s1
  # gpart add -t freebsd-zfs da1s1
  # gpart show da1
  =>       63  585937437  da1  MBR  (279G)
           63  585937422    1  freebsd  (279G)
    585937485         15       - free -  (7.5K)
  # gpart show da1s1
  =>        0  585937422  da1s1  BSD  (279G)
            0    8388608      1  freebsd-zfs  (4.0G)
      8388608    8388608      2  freebsd-swap  (4.0G)
     16777216  569160206      4  freebsd-zfs  (271G)
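
As a quick sanity check on the sector counts above (assuming the
usual 512-byte sector unit gpart is reporting in here), 4G should
come out to exactly 8388608 sectors:

```shell
# 4 GiB expressed in 512-byte sectors: 4 * 1024^3 / 512
echo $((4 * 1024 * 1024 * 1024 / 512))   # -> 8388608
```

which matches the size of both the new da1s1a and da1s1b partitions.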

Except for da1s1a being 4G instead of 2G, everything matches the
ZFS setup above.  Make the labels.

  # glabel label boot0 da1s1a
  # glabel label swap0 da1s1b
  # glabel label root0 da1s1d
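
(To confirm the labels took, glabel can list them; a quick check,
assuming the label names above:)

```shell
# The three new labels should show up against da1s1a/b/d
glabel status | grep 'label/'
```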

Create the ZFS bootpool.

  # zpool create -o cachefile=/tmp/newpool.cache bootpoolNew label/boot0
  # zdb -U /tmp/newpool.cache | grep ashift
              ashift: 9

The geometry matches, but ashift is 9 not 12.
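
One knob worth trying first (assuming the sysctl is present in this
10.1 build; it was merged to stable/10 before 10.1-RELEASE): ZFS on
FreeBSD has a tunable floor on the ashift that zpool create will
select for new vdevs. A sketch, reusing the pool name above:

```shell
# Force new vdevs to use at least ashift=12 (4k sectors),
# then recreate the pool. (sysctl availability assumed for 10.1)
sysctl vfs.zfs.min_auto_ashift=12
zpool destroy bootpoolNew
zpool create -o cachefile=/tmp/newpool.cache bootpoolNew label/boot0
zdb -U /tmp/newpool.cache | grep ashift
```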

If I try to force 4k alignment, the disk geometry no longer matches
the original, and ashift is still 9 instead of 12.

  # gpart create -s MBR da1
  # gpart add -a 4k -t freebsd da1
  # gpart create -s BSD da1s1
  # gpart add -a 4k -s 4G -t freebsd-zfs da1s1
  # gpart add -a 4k -s 4G -t freebsd-swap da1s1
  # gpart add -a 4k -t freebsd-zfs da1s1

  # gpart show da1
  =>       63  585937437  da1  MBR  (279G)
           63         63       - free -  (32K)
          126  585937359    1  freebsd  [active]  (279G)
    585937485         15       - free -  (7.5K)

  # gpart show da1s1
  =>        0  585937359  da1s1  BSD  (279G)
            0          2         - free -  (1.0K)
            2    8388608      1  freebsd-zfs  (4.0G)
      8388610    8388608      2  freebsd-swap  (4.0G)
     16777218  569160136      4  freebsd-zfs  (271G)
    585937354          5         - free -  (2.5K)

  # zpool create -o cachefile=/tmp/newpool.cache bootpoolNew label/boot0
  # zdb -U /tmp/newpool.cache | grep ashift
              ashift: 9
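
One thing worth ruling out (a diagnostic sketch, not from the
original session): zpool picks ashift from the sector size the
provider reports, not from partition alignment, so if da1 reports
512-byte sectors you will get ashift=9 regardless of the -a 4k
flags:

```shell
# If "sectorsize" comes back as 512, that explains ashift=9:
# alignment flags change partition offsets, not the reported
# sector size of the underlying provider.
diskinfo -v da1 | grep -i sectorsize
```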

What gives?  How do I get it to use 4k?
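
For reference, the classic workaround on systems without the
min_auto_ashift sysctl is the gnop trick: layer a transient
4096-byte-sector provider over the label, create the pool on that,
then export, drop the nop layer, and re-import so the pool attaches
to the real device. A sketch, assuming the pool and label names
above:

```shell
# Temporary provider that reports 4096-byte sectors
gnop create -S 4096 label/boot0
# Creating the pool on the .nop device makes zpool pick ashift=12
zpool create -o cachefile=/tmp/newpool.cache bootpoolNew label/boot0.nop
# Drop the nop layer; the pool finds label/boot0 on import
zpool export bootpoolNew
gnop destroy label/boot0.nop
zpool import -o cachefile=/tmp/newpool.cache bootpoolNew
```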

--
DE
_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable