I was wondering if this is a known problem.

I am running stock b118 bits. The system has a UFS root
and a single zpool (with multiple NFS, SMB, and iSCSI
exports).

I powered off my machine last night. When I powered it on this
morning, it hung during boot while reading the zpool disks: it
would read them for a while, then stop reading and hang. I let
it sit for over 4 hours and tried multiple power cycles, etc.

I was able to power off the disks and boot the machine.
I exported the zpool, powered the disks back on, and rebooted.
The machine booted and I tried to import the zpool.
It did import after about 5 minutes (which seemed a lot
longer than it had taken in the past). I saw the following
processes running during this time:

    root   820   368   0 15:14:37 ?           0:00 zfsdle /devices/p...@0,0/pci1043,8...@8/d...@1,0:a
    root   818   368   0 15:14:37 ?           0:00 zfsdle /devices/p...@0,0/pci1043,8...@7/d...@0,0:a
    root   819   368   0 15:14:37 ?           0:00 zfsdle /devices/p...@0,0/pci1043,8...@8/d...@0,0:a

One thing that could be related is that a scrub was running
when I powered off the system. The scrub started up again
after I imported the pool.
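
For what it's worth, I did not stop the scrub before powering off.
I assume I could have cancelled it first with something like:

    # zpool scrub -s tank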

Anyone know if this is a known problem?


Thanks,

MRJ



-bash-3.2# zpool status
  pool: tank
 state: ONLINE
 scrub: scrub in progress for 0h12m, 3.25% done, 6h16m to go
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
-bash-3.2#


-bash-3.2# zdb -C
tank
    version=16
    name='tank'
    state=0
    txg=2866038
    pool_guid=690654980843352264
    hostid=786700041
    hostname='asus-a8n'
    vdev_tree
        type='root'
        id=0
        guid=690654980843352264
        children[0]
                type='raidz'
                id=0
                guid=9034903530721214825
                nparity=1
                metaslab_array=14
                metaslab_shift=33
                ashift=9
                asize=960171343872
                is_log=0
                children[0]
                        type='disk'
                        id=0
                        guid=17813126553843208646
                        path='/dev/dsk/c1t0d0s0'
                        devid='id1,s...@ast3320620as=____________5qf3ysjj/a'
                        phys_path='/p...@0,0/pci1043,8...@8/d...@0,0:a'
                        whole_disk=1
                        DTL=32
                children[1]
                        type='disk'
                        id=1
                        guid=6761028837288241506
                        path='/dev/dsk/c2t0d0s0'
                        devid='id1,s...@ast3320620as=____________5qf3yqxb/a'
                        phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:a'
                        whole_disk=1
                        DTL=31
                children[2]
                        type='disk'
                        id=2
                        guid=15791031942666816527
                        path='/dev/dsk/c1t1d0s0'
                        devid='id1,s...@ast3320620as=____________5qf3ys51/a'
                        phys_path='/p...@0,0/pci1043,8...@8/d...@1,0:a'
                        whole_disk=1
                        DTL=30
-bash-3.2#


