Title: test2
Hi there,
I need to migrate ZFS data from a huge pool to a smaller one, as only 5% of the source pool is used.
Many ZFS datasets were created on the source pool, organized as a tree, subtrees, etc.

zpool attach and detach won't help here, as the target device is smaller than the source one.
So, on Nevada build 90, I tried the following and would like some feedback from you all (this is a proof-of-concept test).

Here is my 'very tiny' config:
bash-3.2# zpool status master
  pool: master
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    master      ONLINE       0     0     0
      c0d0s5    ONLINE       0     0     0
      c0d0s6    ONLINE       0     0     0

bash-3.2# zpool status target
  pool: target
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    target      ONLINE       0     0     0
      c0d0s3    ONLINE       0     0     0
      c0d0s4    ONLINE       0     0     0

The datasets were created with quotas and mountpoints:
bash-3.2# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
master                   16.6M   177M    18K  /master
master/vol1              16.4M  83.6M  1.40M  /testvol1
master/vol1/undervol1-1  5.02M  15.0M  5.02M  /testvol1/undervol1-1
master/vol1/undervol1-2  10.0M  20.0M  10.0M  /testvol1/undervol1-2

target                    141K   194M    18K  /target


First, I created a recursive snapshot (named ''snap1'') of all datasets owned by the ''master'' pool:
bash-3.2# zfs snapshot -r master@snap1
bash-3.2# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
master                         16.6M   177M    18K  /master
master@snap1                       0      -    18K  -
master/vol1                    16.4M  83.6M  1.40M  /testvol1
master/vol1@snap1                  0      -  1.40M  -
master/vol1/undervol1-1        5.02M  15.0M  5.02M  /testvol1/undervol1-1
master/vol1/undervol1-1@snap1      0      -  5.02M  -
master/vol1/undervol1-2        10.0M  20.0M  10.0M  /testvol1/undervol1-2
master/vol1/undervol1-2@snap1      0      -  10.0M  -

target                          141K   194M    18K  /target

I noticed that when creating a clone from my snapshot, I lose the child datasets:
bash-3.2# zfs clone master/vol1@snap1  master/clone1
bash-3.2# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
master                         16.8M   177M    18K  /master
master@snap1                     17K      -    18K  -
master/clone1                      0   177M  1.40M  /master/clone1

No recursive behavior here... So, I'll need to perform this one dataset at a time, via a script aware of all the dataset properties.
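
Something like this sketch, perhaps (untested; the snapshot name ''snap1'' and the ''clone1'' prefix are just placeholders, and copying per-dataset properties such as quota and mountpoint is left out):
bash-3.2# for ds in $(zfs list -H -r -o name master/vol1); do
>   zfs clone "${ds}@snap1" "master/clone1${ds#master/vol1}"
> done
Since zfs list prints parents before children, each clone's parent should already exist when it is created.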

To migrate my data, I chose to send master@snap1 to the ''target'' pool, on the very same host.
As the target pool will get all the properties from the source, including the mountpoints (receive with the -d and -F options), the following is done first:
bash-3.2# umount /testvol1/undervol1-1
bash-3.2# umount /testvol1/undervol1-2
bash-3.2# umount /testvol1
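
I assume the ZFS-native equivalent, zfs unmount, would have done the same job here:
bash-3.2# zfs unmount /testvol1/undervol1-1
bash-3.2# zfs unmount /testvol1/undervol1-2
bash-3.2# zfs unmount /testvol1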

Let's start the "migration":
bash-3.2# zfs send -R master@snap1 | zfs receive -dnv -F target
would receive full stream of master@snap1 into target@snap1
would receive full stream of master/vol1@snap1 into target/vol1@snap1
would receive full stream of master/vol1/undervol1-1@snap1 into target/vol1/undervol1-1@snap1
would receive full stream of master/vol1/undervol1-2@snap1 into target/vol1/undervol1-2@snap1


Then, for real:
bash-3.2# zfs send -R master@snap1 | zfs receive -d -F target
bash-3.2# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
master                         16.8M   177M    18K  /master
master@snap1                     17K      -    18K  -
master/vol1                    16.5M  83.5M  1.40M  /testvol1
master/vol1@snap1                16K      -  1.40M  -
master/vol1/undervol1-1        5.04M  15.0M  5.02M  /testvol1/undervol1-1
master/vol1/undervol1-1@snap1    16K      -  5.02M  -
master/vol1/undervol1-2        10.0M  20.0M  10.0M  /testvol1/undervol1-2
master/vol1/undervol1-2@snap1    16K      -  10.0M  -

target                         16.6M   177M    18K  /target
target@snap1                       0      -    18K  -
target/vol1                    16.4M  83.6M  1.40M  /testvol1
target/vol1@snap1                  0      -  1.40M  -
target/vol1/undervol1-1        5.02M  15.0M  5.02M  /testvol1/undervol1-1
target/vol1/undervol1-1@snap1      0      -  5.02M  -
target/vol1/undervol1-2        10.0M  20.0M  10.0M  /testvol1/undervol1-2
target/vol1/undervol1-2@snap1      0      -  10.0M  -


At this point, I can't see from the zfs CLI which datasets are mounted and which are not.
Anyway, a simple df -h helps:
bash-3.2# df -h
(...)
master                 194M    18K   177M     1%    /master
target                 194M    18K   177M     1%    /target
target/vol1            100M   1.4M    84M     2%    /testvol1
target/vol1/undervol1-2
                        30M    10M    20M    34%    /testvol1/undervol1-2
target/vol1/undervol1-1
                        20M   5.0M    15M    26%    /testvol1/undervol1-1

Question #1: how can I tell which ZFS filesystem is mounted (from the zfs side) via the zfs list command, in this situation?
                        Is there a flag for this I should see?
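
I suppose the read-only ''mounted'' property is what I'm after, something like the following (untested on this build):
bash-3.2# zfs list -o name,mounted
bash-3.2# zfs get -r mounted target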

Question #2: as I'm working on a single host here, I wonder if there is a way to avoid the auto-mount of the received target datasets.
                        An alternative which would make me happy is to be able to change my target root dir on the fly...

The idea is to mount all this target stuff only when possible (when customer production can be stopped); see the sketch below for what I plan to try.
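
For instance (untested, and assuming the canmount property and altroot imports behave the same on build 90), I could mark the received filesystems as not auto-mountable, or re-import the pool under an alternate root so nothing lands on the production mountpoints:
bash-3.2# zfs set canmount=noauto target/vol1
bash-3.2# zpool export target
bash-3.2# zpool import -R /mnt/staging target
Here /mnt/staging is just a hypothetical alternate root; with -R, every mountpoint in the pool gets prefixed by it.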

By the way, all my data is online, now hosted by my target pool:
bash-3.2# ls -lR /testvol1
/testvol1:
total 2829
-rwxr-xr-x   1 root     root     1337110 Jun 11 13:01 TrueCopy-rd10812.pdf
drwxr-xr-x   2 root     root           3 Jun 11 14:55 undervol1-1
drwxr-xr-x   2 root     root           3 Jun 11 14:55 undervol1-2

/testvol1/undervol1-1:
total 10245
-rw------T   1 root     root     5242880 Jun 11 14:55 5m

/testvol1/undervol1-2:
total 20487
-rw------T   1 root     root     10485760 Jun 11 14:55 10m


Many thanks for your hints and feedback.
C.
