I used a USB stick, and the first time I set it up I ran something like

zpool create black c5t0d0p0 # ie with the "p0" pseudo partition

and used it happily for a while.

Some weeks later, I wanted to use the stick again, starting afresh, but this
time ran

zpool create black c5t0d0 # ie *without* the "p0" pseudo partition

and later, when attempting to import it, I was offered *both* black pools:
the original (ie old and overwritten) one from c5t0d0p0, and the newer, good
one from c5t0d0.

Should ZFS protect against this "user error"?  (I'm not even sure why it
occurred, since I had assumed that both device nodes would map to much the
same region of the stick.)



# uname -a
SunOS mouse 5.11 snv_77 i86pc i386 i86pc
# 
# rmformat
Looking for devices...
     1. Logical Node: /dev/rdsk/c5t0d0p0
        Physical Node: /[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED],1/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
        Connected Device: SanDisk  U3 Cruzer Micro  3.27
        Device Type: Removable
        Bus: USB
        Size: 3.9 GB
        Label: <Unknown>
        Access permissions: Medium is not write protected.
# zpool import
  pool: black
    id: 13810954658225353291
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
config:

        black       ONLINE
          c5t0d0    ONLINE

  pool: black
    id: 4667414672969078773
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        black       ONLINE
          c5t0d0p0  ONLINE

### first the newer (ie good) pool is imported and used OK
# zpool import 13810954658225353291
# ls /black
November
# zpool status black
  pool: black
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        black       ONLINE       0     0     0
          c5t0d0    ONLINE       0     0     0

errors: No known data errors
# 
# find black -depth -print | cpio -pmd /var/tmp
788016 blocks

# zpool scrub black
# zpool status black
  pool: black
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: scrub completed with 0 errors on Thu Nov 29 12:51:08 2007
config:

        NAME        STATE     READ WRITE CKSUM
        black       ONLINE       0     0     0
          c5t0d0    ONLINE       0     0     0

errors: No known data errors
# 

# 
## ...and now the older pool, which is most likely corrupt, is imported...
# zpool export black 

# zpool import -f 4667414672969078773
# ls /black
October
# 
# zpool scrub black

...some time passes...

# zpool status black
  pool: black
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 7224 errors on Thu Nov 29 12:56:47 2007
config:

        NAME        STATE     READ WRITE CKSUM
        black       DEGRADED     0     0 26.6K
          c5t0d0p0  DEGRADED     0     0 26.6K  too many errors

errors: 7073 data errors, use '-v' for a list
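With hindsight, presumably the way to avoid the stale pool being offered at
all would be to zero the old label regions before re-creating the pool. A
hedged sketch, assuming the four-copies-of-256-KiB label layout, and
demonstrated on a scratch file (/tmp/fakedisk is made up; on the real stick
the target would be the raw device node, and getting the offsets wrong there
would destroy data):

```shell
# Assumption: ZFS labels are four 256 KiB copies, two at each end of the
# device; zeroing those regions should stop 'zpool import' finding them.
# A 4 MiB scratch file stands in for the raw device here.
DEV=/tmp/fakedisk                  # hypothetical stand-in for /dev/rdsk/c5t0d0p0
dd if=/dev/zero of="$DEV" bs=1024 count=4096 2>/dev/null
SIZE=$(wc -c < "$DEV")
L=$((256 * 1024))
# clear L0/L1 at the front of the device...
dd if=/dev/zero of="$DEV" bs=$L count=2 conv=notrunc 2>/dev/null
# ...and L2/L3 at the back
dd if=/dev/zero of="$DEV" bs=$L seek=$((SIZE / L - 2)) count=2 conv=notrunc 2>/dev/null
echo "cleared $((4 * L)) bytes of label space on $DEV"
```

conv=notrunc matters: without it the second dd would truncate the file (and on
a real device the seek arithmetic must come from the device's actual size).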
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
