Weird. I have no idea how you could remove that file (besides destroying
the entire filesystem)...
One other thing I noticed:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     8
          raidz1    ONLINE       0     0     8
            c0t7d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
When you see non-zero CKSUM error counters at the pool or raidz1/raidz2 vdev
level, but none on the individual devices like this, it means that ZFS
couldn't correct the corruption even after multiple attempts at
reconstructing the stripes, each time assuming a different device had
returned bad data. IOW it means that 2+ devices (in a raidz1) or 3+ devices
(in a raidz2) returned corrupted data in the same stripe. Since it is
statistically improbable to get that many silent data corruptions in the
same stripe, this condition most likely indicates a hardware problem. I
suggest running memtest to stress-test your cpu/mem/mobo.
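
To make the combinatorial part concrete, here is a toy sketch of that
recovery loop (NOT the ZFS source: single XOR parity stands in for the
real raidz math, and checksum_ok is a hypothetical verifier; all names
are illustrative only):

    from functools import reduce
    from operator import xor

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        return bytes(reduce(xor, col) for col in zip(*blocks))

    def try_reconstruct(data, parity, checksum_ok):
        """Try to recover a raidz1 stripe whose checksum fails.

        data        -- list of data blocks as read from the disks
        parity      -- parity block as read from the parity disk
        checksum_ok -- predicate: does the reassembled stripe verify?
        Returns (stripe, suspect_index), or (None, None) if unrecoverable.
        """
        if checksum_ok(b"".join(data)):
            return b"".join(data), None
        # Assume each data disk in turn returned bad data and rebuild
        # its block from parity plus the remaining disks.
        for bad in range(len(data)):
            others = [d for i, d in enumerate(data) if i != bad]
            rebuilt = xor_blocks(others + [parity])
            candidate = data[:bad] + [rebuilt] + data[bad + 1:]
            if checksum_ok(b"".join(candidate)):
                return b"".join(candidate), bad  # exactly one disk was bad
        # Every single-disk assumption failed: 2+ disks returned bad data
        # in this stripe, so the error is uncorrectable and the CKSUM
        # counter bumps at the raidz1 vdev level, not on any one disk.
        return None, None

In a raidz1 there is only one parity block per stripe, so as soon as two
devices lie at once no single-disk assumption can ever verify, which is
exactly the pattern in the status output above.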
-marc