Sounds like a bug. What was the panic message?

Jason J. W. Williams wrote:
Just for what it's worth: when we rebooted a controller in our array
(we pre-moved all the LUNs to the other controller), ZFS kernel
panicked despite our using MPxIO. We verified that all the LUNs were
on the correct controller when this occurred. It's not clear why ZFS
thought it lost a LUN, but it did. We have done cable pulls using
ZFS/MPxIO before and that works very well. It may well be
array-related in our case, but I'd hate for anyone to have a false
sense of security.

-J

On 12/22/06, Tim Cook <[EMAIL PROTECTED]> wrote:
This may not be the answer you're looking for, but I don't know if it's
something you've thought of.  If you're pulling a LUN from an expensive
array, with multiple HBAs in the system, why not run mpxio?  If you ARE
running mpxio, there shouldn't be an issue with a path dropping.  I have
that setup in my test lab and pull cables all the time, and I have yet
to see a ZFS kernel panic.  Is this something you've considered?  I
haven't seen the bug in question, but I definitely have not run into it
when running mpxio.

--Tim
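
A rough sketch, for anyone wanting to confirm MPxIO is actually in
play (device and pool names below are placeholders, and stmsboot
requires a reboot, so treat this as an outline rather than a recipe):

  # Enable MPxIO on the FC ports; stmsboot will prompt for a reboot
  stmsboot -e

  # After the reboot, each LUN should show more than one operational path
  mpathadm list lu
  mpathadm show lu /dev/rdsk/c4t0d0s2

  # ZFS then sees a single scsi_vhci device per LUN
  zpool status tank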

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shawn Joy
Sent: Friday, December 22, 2006 7:35 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN
from a SAN

OK,

But let's get back to the original question.

Does ZFS provide you with fewer features than UFS on one LUN from a
SAN (i.e., is it less stable)?

>ZFS, on the contrary, checks every block it reads and is able to find
>the mirror or reconstruct the data in a raidz config. Therefore ZFS
>uses only valid data and is able to repair the data blocks
>automatically. This is not possible in a traditional filesystem/volume
>manager configuration.

The above is fine if I have two LUNs. But my original question was
about having only one LUN.
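
To make the distinction concrete, my understanding is roughly the
following (pool and device names are made up, and the copies property
only exists on fairly recent builds):

  # Two LUNs: ZFS can detect a bad block *and* repair it from the mirror
  zpool create tank mirror c4t0d0 c4t1d0

  # One LUN: ZFS still detects bad blocks, but has nothing to repair from
  zpool create tank c4t0d0

  # Recent builds only: keep two copies of each block on that single LUN
  # (applies to newly written data, not what is already on disk)
  zfs set copies=2 tank

  # Either way, scrub periodically and check the error counters
  zpool scrub tank
  zpool status -v tank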

What about kernel panics from ZFS if, for instance, access to one
controller goes away for a few seconds or minutes? Normally UFS would
just sit there and warn that access to the controller has been lost.
Then, when the controller returns after a short period, the warnings go
away and the LUN continues to operate. The admin can then research
further into why the controller went away. With ZFS, the above will
panic the system and possibly cause corruption on other LUNs due to
this panic? I believe this was discussed in other threads, and I also
believe there is a bug filed against this. If so, when should we expect
this bug to be fixed?
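
Leaving the panic question aside, when the controller does come back
this is roughly the drill I would expect (pool name made up):

  # Which pools are unhealthy, and what errors have accumulated?
  zpool status -x
  zpool status -v tank

  # Once the paths are known good again, clear the error counters...
  zpool clear tank

  # ...and verify that what is on disk is still intact
  zpool scrub tank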


My understanding of ZFS is that it functions better in an environment
where we have JBODs attached to the hosts, so that ZFS takes care of
all of the redundancy. But what about SAN environments where customers
have spent big money on storage? I know of one instance where a
customer has a growing need for more storage space. Their environment
uses many inodes. Due to the UFS inode limitation when creating LUNs
over one TB, they would have to quadruple the amount of storage used in
their SAN in order to hold all of the files (rough arithmetic below). A
possible solution to this inode issue would be ZFS. However, they have
experienced kernel panics in their environment when a controller
dropped offline.
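
For anyone hitting the same wall, the arithmetic is roughly this (the
exact numbers are from memory, so check the newfs(1M) man page): UFS
fixes the inode count at newfs time via nbpi, and on a multi-terabyte
UFS the density cannot go below about one inode per MB, i.e. on the
order of a million files per TB. ZFS has no equivalent limit, because
dnodes are allocated on demand as files are created. Device and pool
names below are placeholders:

  # UFS: inode count fixed forever at creation time (-i = bytes per inode)
  newfs -i 8192 /dev/rdsk/c4t0d0s0

  # ZFS: nothing to preallocate; "inodes" simply appear as files are created
  zpool create bigpool c4t0d0
  zfs create bigpool/files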

Anybody have a solution to this?

Shawn

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
