James C. McPherson wrote:
James C. McPherson wrote:
Jeff Bonwick wrote:
6420204 root filesystem's delete queue is not running
The workaround for this bug is to issue the following command:
# zfs set readonly=off <pool>/<fs_name>
This will cause the delete queue to start up and should flush your queue.
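For what it's worth, a quick way to check whether the queue is actually being processed (a sketch only, reusing the <pool>/<fs_name> placeholders above):

# zfs get readonly <pool>/<fs_name>
# zpool list <pool>

If the delete queue is draining, the pool's used space reported by zpool list should drop over the next few minutes.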
Thanks for the update. James, please let us know if this solves your problem.
Yes, I've tried that several times and it didn't work for me at all.
One thing that worked a *little* bit was to set readonly=on, then
go in with mdb -kw and set the drained flag on root_pool to 0 and
then re-set readonly=off. But that only freed up about 2GB.
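For the record, the mdb step in that workaround looked roughly like this (a sketch only; the vfs pointer comes from ::fsinfo, the member address from ::print -a, both will differ on every system, and the /W write assumes z_drained is a 32-bit field):

# zfs set readonly=on <pool>/<fs_name>
# mdb -kw
> ::fsinfo ! head -2
> <vfsp>::print struct vfs vfs_data | ::print -a struct zfsvfs z_delete_head.z_drained
> <z_drained address>/W 0
> $q
# zfs set readonly=off <pool>/<fs_name>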

Here's the next installment in the saga. I bfu'd to include Mark's
recent putback, rebooted, re-ran the "set readonly=off" op on the
root pool and root filesystem, and waited. Nothing. Nada. Not a
sausage.

Here's my root filesystem delete head:


> ::fsinfo ! head -2
            VFSP FS              MOUNT
fffffffffbcaa4e0 zfs             /
> fffffffffbcaa4e0::print struct vfs vfs_data |::print struct zfsvfs z_delete_head
{
    z_delete_head.z_mutex = {
        _opaque = [ 0 ]
    }
    z_delete_head.z_cv = {
        _opaque = 0
    }
    z_delete_head.z_quiesce_cv = {
        _opaque = 0
    }
    z_delete_head.z_drained = 0x1
    z_delete_head.z_draining = 0
    z_delete_head.z_thread_target = 0
    z_delete_head.z_thread_count = 0
    z_delete_head.z_znode_count = 0x5ce4
    z_delete_head.z_znodes = {
        list_size = 0xc0
        list_offset = 0x10
        list_head = {
            list_next = 0xffffffff9232ded0
            list_prev = 0xfffffe820d2c16b0
        }
    }
}




I also went in with mdb -kw and set z_drained to 0, then re-set the
readonly flag... still nothing. Pool usage is now up to ~93%, and a
zdb run shows lots of leaked space too:

....[snip bazillions of entries re leakage]....

block traversal size 273838116352 != alloc 274123164672 (leaked 285048320)

        bp count:         5392224
        bp logical:    454964635136      avg:  84374
        bp physical:   272756334592      avg:  50583     compression:   1.67
        bp allocated:  273838116352      avg:  50783     compression:   1.66
        SPA allocated: 274123164672     used: 91.83%

Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type
     3  48.0K      8K   24.0K      8K    6.00     0.00      L1 deferred free
     5  44.0K   14.5K   37.0K   7.40K    3.03     0.00      L0 deferred free
     8  92.0K   22.5K   61.0K   7.62K    4.09     0.00  deferred free
     1    512     512      1K      1K    1.00     0.00  object directory
     3  1.50K   1.50K   3.00K      1K    1.00     0.00  object array
     1    16K   1.50K   3.00K   3.00K   10.67     0.00  packed nvlist
     -      -       -       -       -       -        -  packed nvlist size
     1    16K      1K   3.00K   3.00K   16.00     0.00      L1 bplist
     1    16K     16K     32K     32K    1.00     0.00      L0 bplist
     2    32K   17.0K   35.0K   17.5K    1.88     0.00  bplist
     -      -       -       -       -       -        -  bplist header
     -      -       -       -       -       -        -  SPA space map header
   140  2.19M    364K   1.06M   7.79K    6.16     0.00      L1 SPA space map
 5.01K  20.1M   15.4M   30.7M   6.13K    1.31     0.01      L0 SPA space map
 5.15K  22.2M   15.7M   31.8M   6.17K    1.42     0.01  SPA space map
     1  28.0K   28.0K   28.0K   28.0K    1.00     0.00  ZIL intent log
     2    32K      2K   6.00K   3.00K   16.00     0.00      L6 DMU dnode
     2    32K      2K   6.00K   3.00K   16.00     0.00      L5 DMU dnode
     2    32K      2K   6.00K   3.00K   16.00     0.00      L4 DMU dnode
     2    32K   2.50K   7.50K   3.75K   12.80     0.00      L3 DMU dnode
    15   240K   50.5K    152K   10.1K    4.75     0.00      L2 DMU dnode
   594  9.28M   3.88M   11.6M   20.1K    2.39     0.00      L1 DMU dnode
 68.7K  1.07G    274M    549M   7.99K    4.00     0.21      L0 DMU dnode
 69.3K  1.08G    278M    561M   8.09K    3.98     0.21  DMU dnode
     3  3.00K   1.50K   4.50K   1.50K    2.00     0.00  DMU objset
     -      -       -       -       -       -        -  DSL directory
     3  1.50K   1.50K   3.00K      1K    1.00     0.00  DSL directory child map
     2     1K      1K      2K      1K    1.00     0.00  DSL dataset snap map
     5  64.5K   7.50K   15.0K   3.00K    8.60     0.00  DSL props
     -      -       -       -       -       -        -  DSL dataset
     -      -       -       -       -       -        -  ZFS znode
     -      -       -       -       -       -        -  ZFS ACL
 2.82K  45.1M   2.93M   5.85M   2.08K   15.41     0.00      L2 ZFS plain file
  564K  8.81G    612M   1.19G   2.17K   14.76     0.47      L1 ZFS plain file
 4.40M   414G    253G    253G   57.5K    1.63    99.21      L0 ZFS plain file
 4.95M   422G    254G    254G   51.4K    1.67    99.68  ZFS plain file
     1    16K      1K   3.00K   3.00K   16.00     0.00      L2 ZFS directory
   261  4.08M    280K    839K   3.21K   14.94     0.00      L1 ZFS directory
  113K   108M   63.2M    126M   1.11K    1.72     0.05      L0 ZFS directory
  114K   113M   63.5M    127M   1.12K    1.77     0.05  ZFS directory
     2     1K      1K      2K      1K    1.00     0.00  ZFS master node
     1    16K   4.50K   13.5K   13.5K    3.56     0.00      L2 ZFS delete queue
    66  1.03M    524K   1.54M   23.8K    2.02     0.00      L1 ZFS delete queue
 8.21K   131M   53.9M    108M   13.1K    2.44     0.04      L0 ZFS delete queue
 8.28K   132M   54.4M    109M   13.2K    2.43     0.04  ZFS delete queue
     -      -       -       -       -       -        -  zvol object
     -      -       -       -       -       -        -  zvol prop
     -      -       -       -       -       -        -  other uint8[]
     -      -       -       -       -       -        -  other uint64[]
     -      -       -       -       -       -        -  other ZAP
     -      -       -       -       -       -        -  persistent error log
     2    32K      2K   6.00K   3.00K   16.00     0.00      L6 Total
     2    32K      2K   6.00K   3.00K   16.00     0.00      L5 Total
     2    32K      2K   6.00K   3.00K   16.00     0.00      L4 Total
     2    32K   2.50K   7.50K   3.75K   12.80     0.00      L3 Total
 2.83K  45.4M   2.98M   6.02M   2.12K   15.22     0.00      L2 Total
  565K  8.83G    617M   1.21G   2.19K   14.66     0.47      L1 Total
 4.59M   415G    253G    254G   55.3K    1.64    99.52      L0 Total
 5.14M   424G    254G    255G   49.6K    1.67   100.00  Total
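If anyone wants to reproduce that kind of report: the leak check and the per-type breakdown above come from zdb's block traversal (a sketch; <pool> is a placeholder, and if I remember rightly the extra b just bumps the verbosity to include the per-type table):

# zdb -bb <pool>

The -b option walks every block pointer and compares the traversal total against the SPA's allocated space, which is where the "leaked" figure comes from.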




Tabriz, how's your fix coming along?


I am sorry that my response is so delayed; I took Friday off. Anyhow, let me work on this today.

I have talked with the ZFS team and we have come to the consensus that if the readonly property is not explicitly given on a remount operation, we will remount rw by default (an explicit readonly remount being, e.g., mount -F zfs -o remount,ro pool/dataset, or zfs mount -o remount,ro pool/dataset). This is more or less what UFS does, except that UFS does NOT allow explicit readonly remounts at all (a UFS remount operation will always cause the filesystem to be remounted rw). ZFS is capable of handling readonly remounts, so we will continue to allow them.
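In other words, the proposed semantics look like this (a sketch; pool/dataset is just a placeholder name):

# mount -F zfs -o remount pool/dataset        (no readonly option given: remounted rw by default)
# mount -F zfs -o remount,ro pool/dataset     (explicit readonly remount: still honoured)
# zfs mount -o remount,ro pool/dataset        (same thing via the zfs command)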
I'll work on this today and let you know as more info becomes available.

Tabriz


James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
