On Sep 12, 2010, at 7:49 PM, Michael Eskowitz wrote:
> I recently lost all of the data on my single parity raid z array. Each of
> the drives was encrypted with the zfs array built within the encrypted
> volumes.
>
> I am not exactly sure what happened.
Murphy strikes again!
> The files were there and accessible and then they were all gone. The server
> apparently crashed and rebooted and everything was lost. After the crash I
> remounted the encrypted drives and the zpool was still reporting that roughly
> 3TB of the 7TB array were used, but I could not see any of the files through
> the array's mount point. I unmounted the zpool and then remounted it and
> suddenly zpool was reporting 0TB were used.
Were you using zfs send/receive? If so, then this is the behaviour expected when a
session is interrupted. Since the snapshot did not completely arrive at the receiver,
the changes are rolled back. It can take a few minutes for terabytes to be freed.
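For illustration only -- the dataset and host names below are placeholders, since you
haven't said whether you were mid-migration -- the pattern looks like this:

  # on the sending side: snapshot, then stream to the new server
  zfs snapshot Movies/media@migrate1
  zfs send Movies/media@migrate1 | ssh newhost zfs receive -F tank/media

  # if the stream is cut off, the partially received data is discarded
  # on the receiver; check whether the snapshot actually landed with
  zfs list -t snapshot -r tank/media

  # space held by a rolled-back receive is released asynchronously, so
  # "zpool list" can show several TB in use for a few minutes before
  # dropping back to zero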
> I did not remap the virtual device. The only thing of note that I saw was
> that the name of the storage pool had changed. Originally it was "Movies" and
> then it became "Movita". I am guessing that the file system became corrupted
> somehow. (zpool status did not report any errors)
>
> So, my questions are these...
>
> Is there any way to undelete data from a lost raidz array?
It depends entirely on the nature of the loss. In the case I describe above, there is
nothing lost because nothing was there (!)
> If I build a new virtual device on top of the old one and the drive topology
> remains the same, can we scan the drives for files from old arrays?
The short answer is no.
> Also, is there any way to repair a corrupted storage pool?
Yes, but it depends entirely on the nature of the corruption.
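As a concrete sketch (assuming the pool still imports; adjust the pool name to yours):

  # see what ZFS itself thinks is damaged
  zpool status -v Movies

  # a scrub re-reads every block and repairs what it can from the
  # raidz parity
  zpool scrub Movies
  zpool status -v Movies      # check again once the scrub finishes

  # if the pool no longer imports cleanly, newer builds support a
  # recovery-mode import that discards the last few transactions
  zpool import -F Movies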
> Is it possible to back up the file table or whatever partition index zfs
> maintains?
The ZFS configuration data is stored redundantly in the pool and checksummed.
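Each vdev carries four copies of its label, and pool metadata is written with extra
ditto copies, so there is no single "file table" to back up. You can inspect the
labels yourself; the device path below is only an example for an encrypted volume:

  # each vdev stores four copies of its label (two at the start,
  # two at the end of the device); zdb can dump them
  zdb -l /dev/lofi/1          # example path; use your actual device

  # the host also keeps a cached copy of the pool configuration
  ls -l /etc/zfs/zpool.cache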
> I imagine that you all are going to suggest that I scrub the array, but that
> is not an option at this point. I had a backup of all of the lost data, as I am
> moving between file servers, so at a certain point I gave up and decided to
> start fresh. This doesn't give me a warm fuzzy feeling about zfs, though.
AFAICT, ZFS appears to be working as designed. Are you trying to kill the
canary? :-)
-- richard
--
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com
Richard Elling
rich...@nexenta.com +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com