In the last two weeks we have had two zpools corrupted.

The pool was still visible via zpool import, but could no longer be imported. 
During the import attempt we got an I/O error.

After a first power cut we lost our jumpstart/nfsroot zpool (another pool was 
still OK). Luckily the jumpstart data was backed up and easily restored; the 
nfsroot filesystems were not, but those were just test machines. We thought the 
metadata corruption was caused by the ZFS no-cache-flush setting we had 
configured in /etc/system (for performance reasons) in combination with a 
non-battery-backed NVRAM cache (Areca RAID controller).
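For reference, this is the standard ZFS no-cache-flush tunable; our /etc/system 
entry looked like this (shown as a sketch, the comment is mine):

   * Tell ZFS not to send cache-flush commands to devices.
   * Dangerous unless the write cache is battery-backed!
   set zfs:zfs_nocacheflush = 1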

The zpool was a raidz of 10 local SATA disks (controller in JBOD mode).
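For clarity, that pool was created roughly like this (pool and device names 
here are examples, not our actual ones):

   # single raidz vdev over 10 SATA disks exposed as JBOD
   zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
       c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0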


Two days ago we had another power cut in our test lab :-(

And again one pool was lost. This system was not configured with the ZFS 
no-cache-flush setting. The pool held roughly 40 zvols used by running VMs 
(iSCSI boot/swap/data disks for Xen and VirtualBox guests).

The first failure was on a b68 system, the second on a b77 system.

The last zpool was using iSCSI disks:

setup:

pool
 mirror:
   iscsidisk1 san1
   iscsidisk1 san2
 mirror:
   iscsidisk2 san1
   iscsidisk2 san2
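In zpool create terms that layout would look roughly like this (device names 
are illustrative; each mirror pairs the corresponding iSCSI LUN from san1 and 
san2):

   # two 2-way mirrors, each half on a different SAN
   zpool create tank \
       mirror c2t1d0 c3t1d0 \
       mirror c2t2d0 c3t2d0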

I thought ZFS was always consistent on disk, but apparently a power cut can 
cause unrecoverable damage.

I can accept the first failure (because of the dangerous setting), but losing 
that second pool was unacceptable to me.

Since no fsck-like utility is available for ZFS, I was wondering if there are 
any plans to create something like metadata repair tools?

Having used ZFS for almost a year now, I was a big fan: in that year I did not 
lose a single zpool until last week.

At this point I'm considering saying that ZFS is not yet production ready.

Any comments welcome...

krdoor
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss