[zfs-discuss] Host based zfs config with Oracle's Unified Storage 7000 series

2011-01-03 Thread Shawn Joy
My question regarding the 7000 series storage is more from the perspective of 
the HOST-side ZFS config. It is my understanding that the 7000 storage presents 
an FC LUN to the host. Yes, this LUN is backed by ZFS inside the 7000 storage, 
but the host still sees it as only one LUN. If I configure a host-based ZFS 
pool on top of this single LUN I have no host-based ZFS redundancy. So do we 
still need to create a host-based ZFS mirror or raidz device when using a 
7000 series storage array?
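
In other words, is something like the following still the recommended host-side 
layout when both LUNs come from a 7000 series array? (The device names below are 
placeholders, not from a real configuration.)

  # host-side mirror built from two LUNs presented by the array
  zpool create tank mirror c2t0d0 c2t1d0

  # confirm both sides of the mirror are online
  zpool status tank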

Thanks,
Shawn


[zfs-discuss] ZFS znapshot of zone that contains ufs SAN attached file systems

2011-01-03 Thread Shawn Joy
Hi All, 

If a zone root is on ZFS but that zone also contains SAN-attached UFS devices, 
what is recorded in a ZFS snapshot of the zone? 

Does the snapshot only contain the ZFS root info? 

How would one recover this complete zone?
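
To make the question concrete, what I have in mind is roughly the following 
(the dataset name is hypothetical):

  # recursively snapshot the zone root dataset and its children
  zfs snapshot -r rpool/zones/myzone@backup

  # list what the snapshot set actually covers -- ZFS datasets only,
  # not the SAN-attached UFS file systems mounted inside the zone
  zfs list -t snapshot -r rpool/zones/myzone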

Thanks,
Shawn


Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-15 Thread Shawn Joy
Prior to this fix, ZFS would panic the system in order to avoid data corruption 
and loss of the zpool. 

Now the pool goes into a degraded or faulted state, and one can "try" the zpool 
clear command to correct the issue. If this does not succeed, a reboot is 
required.
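
In practice that looks something like this (the pool name is just an example):

  # check whether any pool is in a degraded or faulted state
  zpool status -x

  # once the SAN path is back, try clearing the error counters
  zpool clear tank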


Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-14 Thread Shawn Joy
Just to put closure to this discussion about how CRs 6565042 and 6322646 change 
how ZFS functions in the scenario below:

>ZFS no longer has the issue where loss of a single device (even
>intermittently) causes pool corruption. That's been fixed.
>
>That is, there used to be an issue in this scenario:
>
>(1) zpool constructed from a single LUN on a SAN device
>(2) SAN experiences temporary outage, while ZFS host remains running.
>(3) zpool is permanently corrupted, even if no I/O occurred during outage
>
>This is fixed. (around b101, IIRC)
>
>I went back and dug through some of my email, and the issue showed up as
>CR 6565042.
>
>That was fixed in b77 and s10 update 6." 

After doing further research, and speaking with the CR engineers, the CR 
changes seem to be included in an overall fix for ZFS panic situations. The 
zpool can still go into a degraded or faulted state, which will require manual 
intervention by the user. 

This fix was discussed above in information from infodoc 211349, Solaris[TM] ZFS 
& Write Failure:

 "ZFS will handle the drive failures gracefully as part of the BUG 6322646 fix 
in the case of non-redundant configurations by degrading the pool instead of 
initiating a system panic with the help of Solaris[TM] FMA framework."


Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-13 Thread Shawn Joy
>In life there are many things that we "should do" (but often don't).
>There are always trade-offs. If you need your pool to be able to
>operate with a device missing, then the pool needs to have sufficient
>redundancy to keep working. If you want your pool to survive if a
>disk gets crushed by a wayward fork lift, then you need to have
>redundant storage so that the data continues to be available.
>
>If the devices are on a SAN and you want to be able to continue
>operating while there is a SAN failure, then you need to have
>redundant SAN switches, redundant paths, and redundant storage
>devices, preferably in a different chassis.

Yes, of course. This is part of normal SAN design. 

The ZFS file system is what is different here. If an HBA, a fibre cable, or a 
redundant controller fails, or a firmware issue occurs on an array's redundant 
controller, then SSTM (MPxIO) will see the issue and try to fail things over to 
the other controller. Of course this reaction at the SSTM level takes time. UFS 
simply allows this to happen. It is my understanding that ZFS can have issues 
with this, hence the reason a ZFS mirror or raidz device is required. 

It is still not clear how the above-mentioned bugs change the behavior of ZFS, 
and whether they change the recommendations of the zpool man page.

>
>Bob
>--
>Bob Friesenhahn
>bfriesen at simple dot dallas dot tx dot us, 
>http://www.simplesystems.org/users/bfriesen/
>GraphicsMagick Maintainer, http://www.GraphicsMagick.org/





Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-11 Thread Shawn Joy
>I went back and dug through some of my email, and the issue showed up as
>CR 6565042.
>
>That was fixed in b77 and s10 update 6.
>
>I looked at this CR, forgive me but I am not a ZFS engineer. Can you explain, 
>in simple terms, how ZFS now reacts to this? If it does not panic, how does 
>it ensure data is safe?

I found some conflicting information.

Infodoc 211349, Solaris[TM] ZFS & Write Failure:

"ZFS will handle the drive failures gracefully as part of the BUG 6322646 fix 
in the case of non-redundant configurations by degrading the pool instead of 
initiating a system panic with the help of Solaris[TM] FMA framework."

From Richard's post above:
"NB definitions of the pool states, including "degraded", are in the zpool(1m) 
man page.
-- richard"

From the zpool man page, located below:
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m?l=en&a=view&q=zpool

"Device Failure and Recovery

  ZFS supports a rich set of mechanisms for handling device failure and 
data corruption. All metadata and data is checksummed, and ZFS automatically 
repairs bad data from a good copy when corruption is detected.

  In order to take advantage of these features, a pool must make use of 
some form of redundancy, using either mirrored or raidz groups. While ZFS 
supports running in a non-redundant configuration, where each root vdev is 
simply a disk or file, this is strongly discouraged. A single case of bit 
corruption can render some or all of your data unavailable.

  A pool's health status is described by one of three states: online, 
degraded, or faulted. An online pool has all devices operating normally. A 
degraded pool is one in which one or more devices have failed, but the data is 
still available due to a redundant configuration. A faulted pool has corrupted 
metadata, or one or more faulted devices, and insufficient replicas to continue 
functioning.

  The health of the top-level vdev, such as mirror or raidz device, is 
potentially impacted by the state of its associated vdevs, or component 
devices. A top-level vdev or component device is in one of the following 
states:"

So from the zpool man page it seems that it is not possible to put a single 
device zpool into a degraded state. Is this correct, or do the fixes in bugs 
6565042 and 6322646 change this behavior?


>
>Also, just want to ensure everyone is on the same page here. There seem to be 
>some mixed messages in this thread about how sensitive ZFS is to SAN issues.
>
>Do we all agree that creating a zpool out of one device in a SAN environment 
>is not recommended? One should always construct a ZFS mirror or raidz device 
>out of SAN-attached devices, as posted in the ZFS FAQ.

The zpool man page seems to agree with this. Is this correct?


Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-11 Thread Shawn Joy
>I went back and dug through some of my email, and the issue showed up as
>CR 6565042.
>
>That was fixed in b77 and s10 update 6.

I looked at this CR; forgive me, but I am not a ZFS engineer. Can you explain, 
in simple terms, how ZFS now reacts to this? If it does not panic, how does it 
ensure data is safe? 

Also, just want to ensure everyone is on the same page here. There seem to be 
some mixed messages in this thread about how sensitive ZFS is to SAN issues. 

Do we all agree that creating a zpool out of one device in a SAN environment is 
not recommended? One should always construct a ZFS mirror or raidz device out 
of SAN-attached devices, as posted in the ZFS FAQ.
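
That is, something along these lines rather than a single-LUN pool (the device 
names are placeholders):

  # host-side mirror of two SAN LUNs
  zpool create tank mirror c3t0d0 c3t1d0

  # or a raidz group across three or more LUNs
  zpool create tank raidz c3t0d0 c3t1d0 c3t2d0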


Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-10 Thread Shawn Joy
>If you don't give ZFS any redundancy, you risk losing your pool if 
>there is data corruption.


Is this the same risk of data corruption as UFS on hardware-based LUNs?

If we present one LUN to ZFS and choose not to ZFS-mirror or build a raidz 
pool out of that LUN, is ZFS able to handle disk or RAID controller failures 
on the hardware array?


Does ZFS handle intermittent outages of the RAID controllers the same way 
UFS would?


Thanks,
Shawn

Ian Collins wrote:

Shawn Joy wrote:

Hi All,
It's been a while since I touched ZFS. Is the below still the case 
with ZFS and a hardware RAID array? Do we still need to provide two 
LUNs from the hardware RAID and then ZFS-mirror those two LUNs?


http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid

  

Need, no.  Should, yes.

The last two points on that page are key:

"Overall, ZFS functions as designed with SAN-attached devices, but if 
you expose simpler devices to ZFS, you can better leverage all 
available features.


In summary, if you use ZFS with SAN-attached devices, you can take 
advantage of the self-healing features of ZFS by configuring 
redundancy in your ZFS storage pools even though redundancy is 
available at a lower hardware level."


If you don't give ZFS any redundancy, you risk losing your pool if 
there is data corruption.





Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-09 Thread Shawn Joy
>If you don't give ZFS any redundancy, you risk losing your pool if there is 
>data corruption.

Is this the same risk of data corruption as UFS on hardware-based LUNs?

If we present one LUN to ZFS and choose not to ZFS-mirror or build a raidz pool of 
that LUN, is ZFS able to handle disk or RAID controller failures on the 
hardware array?

Does ZFS handle intermittent outages of the RAID controllers the same way UFS 
would?

Thanks,
Shawn


[zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-09 Thread Shawn Joy
Hi All, 

It's been a while since I touched ZFS. Is the below still the case with ZFS and 
a hardware RAID array? Do we still need to provide two LUNs from the hardware 
RAID and then ZFS-mirror those two LUNs?

http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid

Thanks, 
Shawn


Re: [zfs-discuss] "Poor Man's Cluster" using zpool export and zpool import

2009-07-08 Thread Shawn Joy
Thanks Cindy and Darren


[zfs-discuss] "Poor Man's Cluster" using zpool export and zpool import

2009-07-08 Thread Shawn Joy
Is it supported to use zpool export and zpool import to manage disk access 
between two nodes that have access to the same storage device? 

What issues exist if the host currently owning the zpool goes down? In this 
case, will using zpool import -f work? Are there possible data corruption issues? 
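
The procedure I have in mind is roughly the following (the pool name is an 
example):

  # on the node giving up the pool
  zpool export tank

  # on the node taking over
  zpool import tank

  # if the original host died without exporting, force the import
  zpool import -f tank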

Thanks,
Shawn


[zfs-discuss] ZFS boot and data on same disk - is this supported?

2008-12-18 Thread Shawn Joy

I have read the ZFS Best Practices Guide located at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

However, I have questions about whether we support using slices for data on the 
same disk we use for ZFS boot. What issues does this create if we 
have a disk failure in a mirrored environment? Does anyone have examples 
of customers doing this in production environments?

I have a customer looking to use ZFS boot, but they only have two disks 
in their server and it is not connected to a SAN. They also need space 
for data; what is the best recommendation?

Please respond to me directly as I am not on this alias.

Shawn


Re: [zfs-discuss] zfs boot Solaris 10/08 whole disk or slice

2008-12-18 Thread Shawn joy
If one chooses to do this, what happens if you have a disk failure? 

From the ZFS Best Practices Guide:
"The recovery process of replacing a failed disk is more complex when disks 
contain both ZFS and UFS file systems on slices."


Shawn


[zfs-discuss] zfs boot Solaris 10/08 whole disk or slice

2008-12-18 Thread Shawn joy
Hi All,

I see from the ZFS Best Practices Guide:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

 ZFS Root Pool Considerations

* A root pool must be created with disk slices rather than whole disks. 
Allocate the entire disk capacity for the root pool to slice 0, for example, 
rather than partition the disk that is used for booting for many different 
uses. 

What issues are there if one would like to use other slices on the same disk 
for data? Is this even supported in Solaris 10/08?
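
The layout I am asking about would look roughly like this (the slice numbers 
are only an example):

  # mirrored root pool on slice 0 of each disk
  zpool create rpool mirror c1t0d0s0 c1t1d0s0

  # separate mirrored data pool on another slice of the same disks
  zpool create datapool mirror c1t0d0s7 c1t1d0s7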


Re: [zfs-discuss] mirror a slice

2007-12-13 Thread Shawn Joy
What are the commands? Everything I see uses c1t0d0, c1t1d0: no 
slice, just the whole disk.
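
I am guessing it is something like the following, but please confirm (the 
device and slice numbers are just placeholders):

  # mirror only slice 4 of each disk; the other slices can stay UFS
  zpool create tank mirror c1t0d0s4 c1t1d0s4

  # or attach a second slice to an existing single-slice pool
  zpool attach tank c1t0d0s4 c1t1d0s4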



Robert Milkowski wrote:
> Hello Shawn,
> 
> Thursday, December 13, 2007, 3:46:09 PM, you wrote:
> 
> SJ> Is it possible to bring one slice of a disk under zfs controller and 
> SJ> leave the others as ufs?
> 
> SJ> A customer is tryng to mirror one slice using zfs.
> 
> 
> Yes, it is - it just works.
> 
> 


[zfs-discuss] mirror a slice

2007-12-13 Thread Shawn Joy
Is it possible to bring one slice of a disk under ZFS control and 
leave the others as UFS?

A customer is trying to mirror one slice using ZFS.

Please respond to me directly and to the alias.

Thanks,
Shawn



Re: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN froma SAN

2006-12-22 Thread Shawn Joy

No,

I have not played with this, as I do not have access to my customer's 
site. They have tested this themselves. It is unclear whether they 
implemented this on an MPxIO/SSTM device. I will ask this question.


Thanks,
Shawn

Tim Cook wrote:

This may not be the answer you're looking for, but I don't know if it's
something you've thought of.  If you're pulling a LUN from an expensive
array, with multiple HBAs in the system, why not run MPxIO?  If you ARE
running MPxIO, there shouldn't be an issue with a path dropping.  I have
the setup above in my test lab and pull cables all the time and have yet
to see a ZFS kernel panic.  Is this something you've considered?  I
haven't seen the bug in question, but I definitely have not run into it
when running MPxIO.

--Tim

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shawn Joy
Sent: Friday, December 22, 2006 7:35 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN
froma SAN

OK,

But let's get back to the original question.

Does ZFS provide you with fewer features than UFS does on one LUN from a
SAN (i.e., is it less stable)?


ZFS on the contrary checks every block it reads and is able to find the
mirror or reconstruct the data in a raidz config.
Therefore ZFS uses only valid data and is able to repair the data blocks
automatically.
This is not possible in a traditional filesystem/volume manager
configuration.


The above is fine if I have two LUNs. But my original question was about 
having only one LUN. 


What about kernel panics from ZFS if, for instance, access to one
controller goes away for a few seconds or minutes? Normally UFS would
just sit there and warn that I have lost access to the controller. Then when
the controller returns, after a short period, the warnings go away and
the LUN continues to operate. The admin can then research further into
why the controller went away. With ZFS, the above will panic the system
and possibly cause other corruption on other LUNs due to this panic. I
believe this was discussed in other threads, and I also believe there is a
bug filed against this. If so, when should we expect this bug to be
fixed?


My understanding of ZFS is that it functions better in an environment
where we have JBODs attached to the hosts; this way ZFS takes care of
all of the redundancy. But what about SAN environments where customers
have spent big money to invest in storage? I know of one instance where
a customer has a growing need for more storage space. Their environment
uses many inodes. Due to the UFS inode limitation, when creating LUNs
over one TB, they would have to quadruple the amount of storage used in
their SAN in order to hold all of the files. A possible solution to this
inode issue would be ZFS. However, they have experienced kernel panics in
their environment when a controller dropped offline.

Anybody have a solution to this?
Shawn
 
 


--
Shawn Joy
Systems Support Specialist

Sun Microsystems, Inc.
1550 Bedford Highway, Suite 302
Bedford, Nova Scotia B4A 1E6 CA
Phone 902-832-6213
Fax 902-835-6321
Email [EMAIL PROTECTED]



[zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Shawn Joy
OK,

But let's get back to the original question.

Does ZFS provide you with fewer features than UFS does on one LUN from a SAN 
(i.e., is it less stable)?

>ZFS on the contrary checks every block it reads and is able to find the
>mirror
>or reconstruct the data in a raidz config.
>Therefore ZFS uses only valid data and is able to repair the data blocks
>automatically.
>This is not possible in a traditional filesystem/volume manager
>configuration.

The above is fine if I have two LUNs. But my original question was about having 
only one LUN. 

What about kernel panics from ZFS if, for instance, access to one controller goes 
away for a few seconds or minutes? Normally UFS would just sit there and warn that I 
have lost access to the controller. Then when the controller returns, after a 
short period, the warnings go away and the LUN continues to operate. The admin 
can then research further into why the controller went away. With ZFS, the 
above will panic the system and possibly cause other corruption on other LUNs 
due to this panic. I believe this was discussed in other threads, and I also 
believe there is a bug filed against this. If so, when should we expect this bug 
to be fixed?


My understanding of ZFS is that it functions better in an environment where we 
have JBODs attached to the hosts; this way ZFS takes care of all of the 
redundancy. But what about SAN environments where customers have spent big money 
to invest in storage? I know of one instance where a customer has a growing 
need for more storage space. Their environment uses many inodes. Due to the UFS 
inode limitation, when creating LUNs over one TB, they would have to quadruple 
the amount of storage used in their SAN in order to hold all of the files. A 
possible solution to this inode issue would be ZFS. However, they have 
experienced kernel panics in their environment when a controller dropped 
offline.

Anybody have a solution to this?

Shawn
 
 


[zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-21 Thread Shawn Joy
All,

I understand that ZFS gives you more error correction when using two LUNs from 
a SAN. But does it provide you with fewer features than UFS does on one LUN 
from a SAN (i.e., is it less stable)?

Thanks,
Shawn
 
 