Excellent.
I think you are good for now as long as your hardware setup is stable.
You survived a severe hardware failure so say a prayer and make sure
this doesn't happen again. Always have good backups.
Thanks,
Cindy
On 02/01/11 06:56, Mike Tancsa wrote:
On 1/31/2011 4:19 PM, Mike Tancsa wrote:
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
Hi Mike,
Yes, this is looking much better.
Some combination of removing the corrupted files indicated in the zpool
status -v output, running zpool scrub, and then zpool clear should
resolve the corruption, but it depends on how bad the corruption is.
First, I would try the least destructive method: try to remove the
files listed below by using the rm command.
This entry probably means that the metadata is corrupted or some
other file (like a temp file) no longer exists:
tank1/argus-data:<0xc6>
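The sequence described above could be sketched roughly as follows, assuming the pool name tank1 from your status output; the file path is a placeholder for whatever zpool status -v actually lists on your system:

```shell
# Least destructive first: remove each corrupted file reported by
# `zpool status -v` (placeholder path -- substitute the real ones).
rm /tank1/argus-data/some-corrupted-file

# Start a scrub so ZFS revalidates every block against its checksums.
zpool scrub tank1

# Watch progress until the scrub completes.
zpool status -v tank1

# If the scrub finishes with 0 errors, clear the pool's error
# counters and the stale error log entries.
zpool clear tank1
```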
Hi Cindy,
I removed the files that were listed, and now I am left with
errors: Permanent errors have been detected in the following files:
tank1/argus-data:<0xc5>
tank1/argus-data:<0xc6>
tank1/argus-data:<0xc7>
I have started a scrub:
scrub: scrub in progress for 0h48m, 10.90% done, 6h35m to go
Looks like that was it! The scrub finished in the time it estimated, and
that was all I needed to do. I did not have to do zpool clear or any
other commands. Is there anything beyond a scrub to check the integrity
of the pool?
0(offsite)# zpool status -v
pool: tank1
state: ONLINE
scrub: scrub completed after 7h32m with 0 errors on Mon Jan 31 23:00:46 2011
config:
        NAME        STATE     READ WRITE CKSUM
        tank1       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad0     ONLINE       0     0     0
            ad1     ONLINE       0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada8    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
errors: No known data errors
0(offsite)#
---Mike
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss