On Tue, Jul 07, 2009 at 12:56:14PM -0700, Mahlon E. Smith wrote:
> I've got a 9 sata drive raidz1 array, started at version 6, upgraded to
> version 13.  I had an apparent drive failure, and then at some point, a
> kernel panic (unrelated to ZFS.)  The reboot caused the device numbers
> to shuffle, so I did an 'export/import' to re-read the metadata and get
> the array back up.
>
> Once I swapped drives, I issued a 'zpool replace'.
>
> That was 4 days ago now.  The progress in a 'zpool status' looks like
> this, as of right now:
>
>   scrub: resilver in progress for 0h0m, 0.00% done, 2251h0m to go
>
> ... which is a little concerning, since a) it appears to have not moved
> since I started it, and b) I'm in a DEGRADED state until it finishes...
> if it finishes.
>
> So, I reach out to the list!
>
>  - Is the resilver progress notification in a known weird state under
>    FreeBSD?
>
>  - Anything I can do to kick this in the pants?  Tuning params?
>
>  - This was my first drive failure under ZFS -- anything I should have
>    done differently?  Such as NOT doing the export/import?  (Not sure
>    what else I could have done there.)
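For anyone else hitting a stalled resilver, a few commands can help narrow down whether the disks are actually doing work or the progress counter is just lying. The pool name `tank` below is a placeholder, not from the original report, so substitute your own:

```shell
# Show per-vdev state and resilver progress (pool name is a placeholder)
zpool status -v tank

# Check whether the member disks are actually busy; a genuinely stalled
# resilver shows near-idle disks here, while a slow-but-working one shows
# sustained read/write activity on the raidz members
gstat -a

# List the sysctl tunables the FreeBSD ZFS port exposes; which knobs
# exist varies by release, so inspect before attempting to tune anything
sysctl vfs.zfs
```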
I'm seeing essentially the same thing on an 8.0-BETA1 box with an 8-disk
raidz1 pool.  Every once in a while the system makes it to 0.05% done and
gives a vaguely reasonable rebuild time, but it quickly drops back to
reporting 0.00%, and it's basically not making any forward progress.

In my case this is a copy of a mirror, so while losing it would be a bit
annoying, the pool could be rebuilt fairly easily.

One thing I did just notice is that my zpool version is 13, but my file
systems are all v1 rather than the latest (v3).  I don't know if this is
relevant or not.

-- Brooks
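If the pool-vs-filesystem version mismatch turns out to matter, the two versions can be checked and bumped independently. Again, `tank` is just a placeholder pool name:

```shell
# Pool format version (13 in this case)
zpool get version tank

# Per-filesystem versions, recursively (v1 here, v3 being current)
zfs get -r version tank

# Upgrade all filesystems to the newest version the running kernel
# supports; note this is a one-way operation
zfs upgrade -a
```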