On Thu, Oct 9, 2008 at 7:44 AM, Ahmed Kamal
<[EMAIL PROTECTED]> wrote:
>> In the past year I've lost more ZFS file systems than I have any other
>> type of file system in the past 5 years.  With other file systems I
>> can almost always get some data back.  With ZFS I can't get any back.
>
> That's scary to hear!
>
> I am really scared now! I was the one trying to quantify ZFS reliability,
> and that is surely bad to hear!

The cases where I have lost data have all been ones where ZFS was not
handling a layer of redundancy itself.  That said, I am not terribly
optimistic about the prospects of ZFS on any device that hasn't actually
committed writes that ZFS believes are committed.  Mirrors and raidz
would be just as vulnerable to that kind of failure.
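
For what it's worth, one way to find out whether a given device really
persists writes it has acknowledged is a diskchecker-style test: keep
writing numbered blocks, fsync() after each one, cut the power mid-run,
and then verify that every block acknowledged before the cut is still
there.  A minimal sketch follows; the block size, file layout, and
command-line interface are my own assumptions, not anything ZFS does:

    # Hypothetical diskchecker-style test (not from the original post):
    # write numbered blocks, fsync() after each, and after a power cut
    # verify that every acknowledged block actually survived.
    import os
    import struct
    import sys

    BLOCK = 512   # assumed record size, one record per write

    def write_phase(path):
        # Append numbered blocks forever; the last number printed is the
        # last block the OS claims is on stable storage.  Cut power while
        # this loop is running.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
        seq = 0
        try:
            while True:
                buf = struct.pack("<Q", seq).ljust(BLOCK, b"\0")
                os.write(fd, buf)
                os.fsync(fd)   # device should have this block committed now
                print(seq)
                seq += 1
        finally:
            os.close(fd)

    def verify_phase(path, last_acked):
        # After reboot, confirm that blocks 0..last_acked are all intact.
        with open(path, "rb") as f:
            for seq in range(last_acked + 1):
                buf = f.read(BLOCK)
                if len(buf) < BLOCK or struct.unpack("<Q", buf[:8])[0] != seq:
                    print("block %d acknowledged but not persisted" % seq)
                    return 1
        print("all acknowledged blocks survived")
        return 0

    if __name__ == "__main__":
        if sys.argv[1] == "write":
            write_phase(sys.argv[2])
        else:
            sys.exit(verify_phase(sys.argv[2], int(sys.argv[3])))

Run the write phase against a file on the suspect device, pull the plug,
note the last sequence number printed, then run the verify phase with
that number.  If any acknowledged block is missing, the device (or
something in the stack) is discarding writes it claimed were committed,
and no amount of ZFS-level redundancy can protect against that.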

I have also run into other failures that have gone unanswered on the
lists.  That makes me wary of using ZFS without a support contract
that allows me to escalate to engineering.  Patch-only support
won't help.

http://mail.opensolaris.org/pipermail/zfs-discuss/2007-December/044984.html
   A hang that appeared only after I mirrored the zpool; no response on the list.

http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048255.html
   I think this was fixed around snv_98, but the zfs-discuss list was
   surprisingly silent about acknowledging it as a problem; I had no
   idea it was being worked on until I saw the commit.  The panic
   seemed to be triggered by DTrace, and the core DTrace developers
   were quite interested in the kernel crash dump.

http://mail.opensolaris.org/pipermail/zfs-discuss/2008-September/051109.html
   Panic during an ON build.  The pool was lost; no response from the list.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/