This day went from a usual Thursday to the worst day of my life in the span of about 10 seconds.  Here's the scenario:

Two computers, both Solaris 10u8: one is the primary, one is the backup.  The Primary system is RAIDZ2; the Backup is a 4-drive RAIDZ.  Every night, Primary mirrors to Backup using 'zfs send', and Backup receives the stream with 'zfs recv -vFd'.  This ensures that both machines have an identical set of filesystems/snapshots every night.  (Snapshots are taken on Primary every hour during the workday.)
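The nightly mirror probably looks something like the sketch below (the pool name, snapshot names, and the use of ssh are my assumptions; the post only confirms 'zfs send' piped into 'zfs recv -vFd'):

```shell
# On Primary: take the nightly recursive snapshot (names are hypothetical)
zfs snapshot -r tank@nightly-20091112

# Send an incremental replication stream to Backup.  The receiver uses
# -F (force a rollback so the stream applies cleanly) and -d (derive the
# destination dataset names from the names in the stream).
zfs send -R -i tank@nightly-20091111 tank@nightly-20091112 | \
    ssh backup zfs recv -vFd tank
```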

The trouble began Monday when Primary failed.  After restoring it to operating condition, I began restoring the filesystems from Backup, again using ZFS send/recv.  By midnight, only about half of the data had been recovered, at which point Primary attempted its regularly scheduled mirror operation with Backup.  One of our primary ZFS filesystems had not yet been restored, and since it wasn't on Primary when the mirror operation began, 'zfs recv -F' destroyed it on the Backup system.  AHHHHH.
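For anyone reconstructing the failure: with a recursive replication stream, 'zfs recv -F' makes the receiving pool match the sender exactly, which includes destroying datasets that no longer exist on the source.  A rough sketch of what happened (all names hypothetical):

```shell
# Mid-restore, Primary had only some filesystems back:
#   Primary: tank/fs1, tank/fs2           (tank/fs3 not yet restored)
#   Backup:  tank/fs1, tank/fs2, tank/fs3
#
# When the nightly job fired, -F told the receiver to force Backup to
# match Primary -- destroying tank/fs3 on Backup in the process:
zfs send -R tank@nightly | ssh backup zfs recv -vFd tank

# Safer during a recovery window: drop -F (the receive fails instead of
# destroying), or just disable the nightly job until the restore is done.
zfs send -R tank@nightly | ssh backup zfs recv -vd tank
```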

So, in short: one RAIDZ pool contained 7 ZFS filesystems plus dozens of snapshots.  12 hours ago, some of those filesystems were destroyed, effectively by a 'zfs destroy' command (executed by 'zfs recv').  No data has been written to that pool since then.  Is there any way to revert the pool to the state it was in 12 hours ago?
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
