Thanks to Constantin Gonzalez and Eric Schrock for answering my initial
report.

- Truncating files to free up some space had worked in the past, but not
  this time. From my experiment it seems possible to fill up a
  filesystem beyond that point, to where even truncating fails with
  "No space left on device."

- I eventually got out of that squeeze by dissolving one of the smaller
  filesystems in that pool, using zfs destroy to get rid of mirpool1/sfw
  (/usr/sfw).
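
  For reference, with the dataset name as it appears in the listing
  below, that boils down to:

    zfs destroy mirpool1/sfw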

- Afterwards, the system was upgraded from nv30 to nv42a. (Exporting
  the pool and reimporting went smoothly. Great!)
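
  The export/import pair was essentially just:

    zpool export mirpool1
    # ... upgrade, boot into nv42a ...
    zpool import mirpool1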

- I'll need the backup anyway though, since I want to give double
  parity a try :)
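
  That is, recreate the pool as a raidz2 vdev and restore from the
  backup; something along these lines, with placeholder disk names:

    zpool create mirpool1 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0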

- I had found zfs's free space guesstimate to fluctuate quite a bit on
  that filesystem, and upgrading was no different. Does this indicate
  that the calculations are already somewhat on the conservative side,
  to avoid situations like the one I ran into?

  0 3 [EMAIL PROTECTED] pts/9 ~ 50# zfs list
  NAME                   USED  AVAIL  REFER  MOUNTPOINT
  mirpool1              33.6G      0   137K  /mirpool1
  mirpool1/home         12.3G      0  12.3G  /export/home
  mirpool1/install      12.9G      0  12.9G  /export/install
  mirpool1/local        1.86G      0  1.86G  /usr/local
  mirpool1/opt          4.76G      0  4.76G  /opt
  mirpool1/sfw           752M      0   752M  /usr/sfw

- After dissolving the sfw filesystem, free space was indicated as
  600-odd MB (I forgot to take a log).

- After exporting the pool and reimporting under nv42a, free space is
  shown as:

  0 3 [EMAIL PROTECTED] pts/4 ~ 30# zfs list
  NAME                   USED  AVAIL  REFER  MOUNTPOINT
  mirpool1              32.9G   372M   137K  /mirpool1
  mirpool1/home         12.3G   372M  12.3G  /export/home
  mirpool1/install      12.9G   372M  12.9G  /export/install
  mirpool1/local        1.86G   372M  1.86G  /usr/local
  mirpool1/opt          4.76G   372M  4.76G  /opt

And, finally:
- Under nv42a, zdb goes a little bit further before throwing its core
  at the sight of that pool:

  Traversing all blocks to verify checksums and verify nothing leaked ...
  Assertion failed: dmu_read(os, smo->smo_object, offset, size, entry_map) == 0
  (0x5 == 0x0), file ../../../uts/common/fs/zfs/space_map.c, line 327
  Abort (core dumped)
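
  (The output above is from a run along the lines of

    zdb -cv mirpool1

  i.e. a block traversal with checksum verification.)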

Core and description at http://opensolaris.in-berlin.de/core/ (currently
uploading).
Should I file a new bug, or is there a way to append the new information
to bug 6437157 (filed against nv30)?

The next step was to upgrade the pool. This time zdb gets even further.
Right now it's still happily running, accumulating lines and lines of:
 zdb_blkptr_cb: Got error 50 reading <114, 0, 1, 5>  -- skipping
Looks like it may take a while to chew on that.
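
(The pool upgrade itself was just a matter of

    zpool upgrade mirpool1

after which zdb was started again.)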

--Tatjana
 
 