Cindy,
The other two pools are two-disk mirrors (rpool and another).
Ben
Cindy Swearingen wrote:
Hi Ben,
Any other details about this pool, like how it might be different from
the other two pools on this system, might be helpful...
I'm going to try to reproduce this problem.
We'll be in touch.
Thanks,
Cindy
On 06/17/10 07:02, Ben Miller wrote:
I upgraded a server today from SXCE b111 to the OpenSolaris preview
b134. It has three pools; two are fine, but one comes up with no space
available in the pool (a SCSI JBOD of 300GB disks). The zpool version
is 14.
I tried exporting the pool and re-importing it, and I get several
warnings like this on both the export and the import:
# zpool export pool1
WARNING: metaslab_free_dva(): bad DVA 0:645838978048
WARNING: metaslab_free_dva(): bad DVA 0:645843271168
...
I tried removing the zpool.cache file, rebooting, and importing; that
produces no warnings, but the pool still reports the wrong available
space and size.
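Roughly the steps I used, assuming the default cache file location:
# rm /etc/zfs/zpool.cache
# reboot
(and after the reboot)
# zpool import pool1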
# zfs list pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1   396G      0  3.22M  /export/home
# zpool list pool1
NAME    SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
pool1   476G   341G  135G  71%  1.00x  ONLINE  -
# zpool status pool1
  pool: pool1
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool
        can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        pool1        ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0
            c1t12d0  ONLINE       0     0     0
            c1t13d0  ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0

errors: No known data errors
I tried exporting again and got the metaslab_free_dva() warnings
again. Importing again produced no warnings, but the same numbers as
above. If I try to remove or truncate files, I get 'no free space'
errors.
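For example, attempts like these both fail with the no-free-space
error (the path is just an example):
# rm /export/home/testfile
# cat /dev/null > /export/home/testfile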
I reverted to b111, and here is what the pool really looks like:
# zfs list pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1   396G   970G  3.22M  /export/home
# zpool list pool1
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool1   1.91T  557G  1.36T  28%  ONLINE  -
# zpool status pool1
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        pool1        ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0
            c1t12d0  ONLINE       0     0     0
            c1t13d0  ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0

errors: No known data errors
Also, the disks were replaced one at a time last year, going from 73GB
to 300GB, to increase the size of the pool. Any idea why the pool shows
up at the wrong size under b134, and is there anything else I should
try? I don't want to upgrade the pool version yet and then not be able
to revert...
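In case it matters, each replacement was done in place, roughly like
this (device names as in the config above), waiting for the resilver
to finish before moving on to the next disk:
# zpool replace pool1 c1t8d0
# zpool status pool1
...and then the same for c1t9d0 through c1t14d0.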
thanks,
Ben
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss