Okay, so after some tests with dedup on snv_134, I have decided not to use 
the dedup feature for the time being.

Since I was unable to destroy a dedupped file system, I decided to migrate 
the file system to another pool and then destroy the pool (see the threads 
below).

http://opensolaris.org/jive/thread.jspa?threadID=128532&tstart=75
http://opensolaris.org/jive/thread.jspa?threadID=128620&tstart=60


Now here is my problem.
I took a snapshot of the file system I want to migrate, and then did a send 
and receive of the file system:

zfs send tank/export/projects/project1...@today | zfs receive -d mpool
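
For completeness, the full sequence was roughly the following (just a 
sketch; "project1_nb" here is only a stand-in for the real dataset name, 
which is masked above):

# take a snapshot of the source file system, then replicate it
zfs snapshot tank/export/projects/project1_nb@today

# -d makes receive rebuild the dataset path under mpool from the sent
# snapshot name minus its pool component, i.e. mpool/export/projects/...
zfs send tank/export/projects/project1_nb@today | zfs receive -d mpool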

The received file system, however, ends up smaller than the original, even 
though dedup is not turned on for the copy.  How is this possible?  Can 
someone explain?  I am not able to trust the data until I can verify that 
the two copies are identical.
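
To check that the two copies really are identical, I will probably do 
something like the following (a sketch only, assuming rsync is installed 
and the file systems are mounted as shown below):

# dry run with full checksums: lists every file whose contents differ
# between the original and the received copy, without changing anything
rsync -rnc --delete /export/projects/project1_nb/ \
      /mpool/export/projects/project1_nb/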

SunOS filearch1 5.11 snv_134 i86pc i386 i86xpv Solaris
r...@filearch1:/var/adm# zpool status
  pool: mpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          c7t7d0    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0

errors: No known data errors

r...@filearch1:/var/adm# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
mpool                                  407G   278G    22K  /mpool
mpool/export                           407G   278G    22K  /mpool/export
mpool/export/projects                  407G   278G    23K  /mpool/export/projects
mpool/export/projects/bali_nobackup    407G   278G   407G  /mpool/export/projects/project1_nb
< ...>
tank                                   520G  4.11T  34.9K  /tank
tank/export/projects                   515G  4.11T  41.5K  /export/projects
tank/export/projects/bali_nobackup     427G  4.11T   424G  /export/projects/project1_nb

r...@filearch1:/var/adm# zfs get compressratio
NAME                                       PROPERTY       VALUE  SOURCE
mpool                                      compressratio  2.43x  -
mpool/export                               compressratio  2.43x  -
mpool/export/projects                      compressratio  2.43x  -
mpool/export/projects/project1_nb          compressratio  2.43x  -
mpool/export/projects/project1...@today    compressratio  2.43x  -
tank                                       compressratio  2.34x  -
tank/export                                compressratio  2.34x  -
tank/export/projects                       compressratio  2.34x  -
tank/export/projects/project1_nb           compressratio  2.44x  -
tank/export/projects/project1...@today     compressratio  2.44x  -
tank/export/projects/project1_nb_2         compressratio  1.00x  -
tank/export/projects/project1_nb_3         compressratio  1.90x  -

r...@filearch1:/var/adm# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
mpool   696G   407G   289G    58%  1.00x  ONLINE  -
rpool  19.9G  9.50G  10.4G    47%  1.00x  ONLINE  -
tank   5.44T   403G  5.04T     7%  2.53x  ONLINE  -
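
In case it helps, this is what I plan to compare next on the two datasets 
to see which settings could account for the difference (sketch; same 
stand-in dataset names as above):

# per-dataset properties that affect how much space the data takes up
zfs get used,referenced,compressratio,compression,dedup,copies,recordsize \
    tank/export/projects/project1_nb mpool/export/projects/project1_nb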