I recently noticed that importing larger pools holding large amounts of data 
can keep zpool import busy for several hours, with zpool iostat showing only 
occasional random reads while iostat -xen shows the disks quite busy. It is 
almost as if the import walks every bit in the pool before it completes.

Somebody said that zpool import got faster in snv_118, but I have no solid 
information on that yet.

Yours
Markus Kovero

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Victor Latushkin
Sent: 29 July 2009 14:05
To: Pavel Kovalenko
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zpool import hungs up forever...

On 29.07.09 14:42, Pavel Kovalenko wrote:
> fortunately, after several hours terminal went back -->
> # zdb -e data1
> Uberblock
> 
>         magic = 0000000000bab10c
>         version = 6
>         txg = 2682808
>         guid_sum = 14250651627001887594
>         timestamp = 1247866318 UTC = Sat Jul 18 01:31:58 2009
> 
> Dataset mos [META], ID 0, cr_txg 4, 27.1M, 3050 objects
> Dataset data1 [ZPL], ID 5, cr_txg 4, 5.74T, 52987 objects
> 
>                             capacity   operations   bandwidth  ---- errors ----
> description                used avail  read write  read write  read write cksum
> data1                     5.74T 6.99T   772     0 96.0M     0     0     0    91
>   /dev/dsk/c14t0d0        5.74T 6.99T   772     0 96.0M     0     0     0   223
> #

So we know that there are some checksum errors there, but at least zdb 
was able to open the pool in read-only mode.

> i've tried to run zdb -e -t 2682807 data1
> and 
> #echo "0t::pid2proc|::walk thread|::findstack -v" | mdb -k

This is wrong: you need to put the PID of the 'zpool import data1' process 
right after '0t'.
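For illustration, a minimal sketch of the corrected invocation (the PID value 1234 here is hypothetical; on the live system you would look it up, e.g. with pgrep):

```shell
# Splice the PID of the hung 'zpool import' in right after '0t'
# (mdb reads 0t<n> as a decimal number). PID 1234 is a placeholder;
# e.g. pid=$(pgrep -f 'zpool import data1')
pid=1234
cmd="0t${pid}::pid2proc|::walk thread|::findstack -v"
echo "$cmd"                 # then: echo "$cmd" | mdb -k
```

This prints the dcmd string so you can check it before piping it into mdb -k.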

> and 
> #fmdump -eV
> shows checksum errors, such as 
> Jul 28 2009 11:17:35.386268381 ereport.fs.zfs.checksum
> nvlist version: 0
>         class = ereport.fs.zfs.checksum
>         ena = 0x1baa23c52ce01c01
>         detector = (embedded nvlist)
>         nvlist version: 0
>                 version = 0x0
>                 scheme = zfs
>                 pool = 0x578154df5f3260c0
>                 vdev = 0x6e4327476e17daaa
>         (end detector)
> 
>         pool = data1
>         pool_guid = 0x578154df5f3260c0
>         pool_context = 2
>         pool_failmode = wait
>         vdev_guid = 0x6e4327476e17daaa
>         vdev_type = disk
>         vdev_path = /dev/dsk/c14t0d0p0
>         vdev_devid = id1,s...@n2661000612646364/q
>         parent_guid = 0x578154df5f3260c0
>         parent_type = root
>         zio_err = 50
>         zio_offset = 0x2313d58000
>         zio_size = 0x4000
>         zio_objset = 0x0
>         zio_object = 0xc
>         zio_level = 0
>         zio_blkid = 0x0
>         __ttl = 0x1
>         __tod = 0x4a6ea60f 0x1705fcdd

This tells us that object 0xc in the meta-objset (objset 0x0) is corrupted.
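When fmdump -eV produces a long stream of these nvlists, one way to sift it down to the fields that locate the damage is a simple grep; a sketch, shown here against a captured sample so it runs anywhere (on the live system you would pipe `fmdump -eV` into the same filter):

```shell
# Keep only the ereport class line and the fields that pinpoint the damaged
# block (objset / object / offset). The printf stands in for `fmdump -eV`.
printf '%s\n' \
  'class = ereport.fs.zfs.checksum' \
  'zio_err = 50' \
  'zio_objset = 0x0' \
  'zio_object = 0xc' \
  'zio_blkid = 0x0' |
  grep -E 'ereport\.fs\.zfs\.checksum|zio_objset|zio_object|zio_offset'
```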

So to get more details you can do the following:

zdb -e -dddd data1

zdb -e -bbcs data1
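A dry-run sketch of what those two invocations do; the `run` wrapper only echoes the commands so this is safe to execute anywhere. Flag meanings are per the OpenSolaris-era zdb; check your local zdb(1M) man page:

```shell
# -e      examine an exported / not-yet-imported pool
# -dddd   dump dataset objects with high verbosity (per-object detail,
#         including object 0xc in objset 0x0)
# -bbcs   traverse all blocks (-b), verify checksums (-c), report stats (-s)
run() { echo "would run: $*"; }   # replace with direct execution on the host
run zdb -e -dddd data1
run zdb -e -bbcs data1
```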

victor
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss