> It seems that maybe there is too large a code path leading to panics --
> maybe a side effect of ZFS being "new" (compared to other filesystems).
> I would hope that as these panic issues come up, the code path leading
> to the panic is evaluated for a specific fix or an alternative behavior.
> Sometimes it does make sense to panic (if there _will_ be data damage
> if you continue).  Other times not.
 
I feel the same way about panics.  So, IMHO, ZFS should not be called "stable".
But you know ... marketing ...  ;)
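
For what it's worth, here is the kind of choice I'd like to see made at each
panic site.  A minimal, purely illustrative C sketch (verify_block and its
arguments are hypothetical, not actual ZFS code): panic only when continuing
would put bad data on disk, and turn everything else into an ordinary I/O
error the caller can handle.

  /* Illustrative only -- not real ZFS code. */
  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  static int
  verify_block(uint64_t expected, uint64_t actual, int write_in_progress)
  {
          if (expected == actual)
                  return (0);             /* block checks out */

          if (write_in_progress) {
                  /* Continuing would commit bad data to disk, so
                   * stopping everything is arguably justified. */
                  fprintf(stderr, "fatal: would corrupt on-disk data\n");
                  abort();
          }

          /* Read-side mismatch: the damage is already there, so
           * fail this one request instead of taking the box down. */
          fprintf(stderr, "checksum mismatch, returning EIO\n");
          return (EIO);
  }

  int
  main(void)
  {
          /* A survivable read-side mismatch. */
          printf("verify_block returned %d\n",
              verify_block(0xdeadbeefULL, 0xfeedfaceULL, 0));
          return (0);
  }

The hard part, of course, is deciding for each of the real call sites which
side of that branch it belongs on.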

> I can understand where you are coming from as far as the need for
> uptime and loss of money on that app server. Two years of testing for
> the app, Sunfire servers for N+1 because the app can't be clustered,
> and you have chosen to run a filesystem that has just been made public?

What? That server is running and will be running on UFS for many years!
Upgrading, patching, cleaning ... even touching it is strictly prohibited :)
We upgraded to S10 because of DTrace (it helped us a lot), and during the
test phase we also evaluated ZFS.
Now we only use ZFS for our central backup servers (for many applications,
systems, customers, ...).
We also manage a lot of other systems and always try to migrate customers to
Solaris because of its stability, resource control, DTrace ... but we have
found ZFS disappointing as of today (tomorrow it will probably be THE
filesystem).

Gino
 
 