On May 10, 2010, at 4:46 PM, John Balestrini wrote:

Recently I had a similar issue where the pool wouldn't import and attempting to import it would essentially lock the server up. Finally I used pfexec zpool import -F pool1 and simply let it do its thing. After almost 60 hours the import finished and all has been well since (except my backup procedures have improved!).

Hey John,

Thanks a lot for answering. I already let the "zpool import" command run from Friday to Monday and it did not complete. I also made sure to start it under "truss", and literally nothing happened during that time -- the truss output file has nothing new in it.
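
For reference, this is roughly how I started it under truss (the trace file path below is just the one I happened to pick):

# run the import under truss: -f follows child processes,
# -o writes the system call trace to the given file
truss -f -o /var/tmp/zpool-import.truss zpool import backup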

While the "zpool import" command runs, I don't see any CPU or Disk I/O usage. "zpool iostat" shows very little I/O too:

# zpool iostat -v
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
backup        31.4T  19.1T     11      2  29.5K  11.8K
  raidz1      11.9T   741G      2      0  3.74K  3.35K
    c3t102d0      -      -      0      0  23.8K  1.99K
    c3t103d0      -      -      0      0  23.5K  1.99K
    c3t104d0      -      -      0      0  23.0K  1.99K
    c3t105d0      -      -      0      0  21.3K  1.99K
    c3t106d0      -      -      0      0  21.5K  1.98K
    c3t107d0      -      -      0      0  24.2K  1.98K
    c3t108d0      -      -      0      0  23.1K  1.98K
  raidz1      12.2T   454G      3      0  6.89K  3.94K
    c3t109d0      -      -      0      0  43.7K  2.09K
    c3t110d0      -      -      0      0  42.9K  2.11K
    c3t111d0      -      -      0      0  43.9K  2.11K
    c3t112d0      -      -      0      0  43.8K  2.09K
    c3t113d0      -      -      0      0  47.0K  2.08K
    c3t114d0      -      -      0      0  42.9K  2.08K
    c3t115d0      -      -      0      0  44.1K  2.08K
  raidz1      3.69T  8.93T      3      0  9.42K    610
    c3t87d0       -      -      0      0  43.6K  1.50K
    c3t88d0       -      -      0      0  43.9K  1.48K
    c3t89d0       -      -      0      0  44.2K  1.49K
    c3t90d0       -      -      0      0  43.4K  1.49K
    c3t91d0       -      -      0      0  42.5K  1.48K
    c3t92d0       -      -      0      0  44.5K  1.49K
    c3t93d0       -      -      0      0  44.8K  1.49K
  raidz1      3.64T  8.99T      3      0  9.40K  3.94K
    c3t94d0       -      -      0      0  31.9K  2.09K
    c3t95d0       -      -      0      0  31.6K  2.09K
    c3t96d0       -      -      0      0  30.8K  2.08K
    c3t97d0       -      -      0      0  34.2K  2.08K
    c3t98d0       -      -      0      0  34.4K  2.08K
    c3t99d0       -      -      0      0  35.2K  2.09K
    c3t100d0      -      -      0      0  34.9K  2.08K
------------  -----  -----  -----  -----  -----  -----

Also, the third raidz1 vdev shows much lower write bandwidth (610 bytes, versus roughly 3-4K for the others). This is actually the first time I've seen a non-zero value there.
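
To see whether those counters move at all over time, I can also sample them at an interval instead of taking a single cumulative snapshot (the 5-second interval is arbitrary):

# report per-vdev statistics for the "backup" pool every 5 seconds
zpool iostat -v backup 5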

My last attempt to import it was with this command:

zpool import -o failmode=panic -f -R /altmount backup

However, it did not panic. As I mentioned in my first message, it mounts 189 filesystems and then hangs on #190. While the command is hung, I can use "zfs mount" to mount filesystems #191 and above by hand -- only that single filesystem fails to mount, and it is what causes the import to hang. A rough sketch of what I do is below.
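
In case it matters, this is roughly how I go about it while the import sits there (the dataset name on the last line is only a placeholder):

# list datasets in the pool that are still unmounted
zfs list -H -o name,mounted -r backup | awk '$2 == "no" {print $1}'

# then mount them one at a time; everything mounts except the single
# filesystem (#190) that hangs the import
zfs mount backup/some/dataset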

Before trying the command above, I was using plain "zpool import backup", and the "zpool iostat" output was showing zero for the third raidz1 in the list above (not sure if that means anything, but it does look odd).

I'm really at a dead end here; any help is appreciated.

Thanks,
Eduardo Bragatto.
