Re: [zfs-discuss] ZFS and powerpath

2007-07-23 Thread Moore, Joe
Brian Wilson wrote:
> On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:
> > Darren Dunham wrote:
> >> My previous experience with powerpath was that it rode below the  
> >> Solaris
> >> device layer.  So you couldn't cause trespass by using the "wrong"
> >> device.  It would just go to powerpath which would choose the link
to
> >> use on its own.
> >>
> >> Is this not true or has it changed over time?
> > I haven't looked at power path for some time but it used to be the
> > opposite. The powerpath node sat on top of the actual device paths.

> > One of the selling points of mpxio is that it doesn't have that  
> > problem. (At least for devices it supports.) Most of the multipath
software had  
> > that same limitation
> >
> 
> I agree, it's not true.  I don't know how long it hasn't been true,  
> but the last year and a half I've been implementing PowerPath on  
> Solaris 8, 9, 10, the way to make it work is to point whatever disk  
> tool you're using to the emcpower device.  The other paths are there  
> because leadville finds them and creates them (if you're using  
> leadville), but PowerPath isn't doing anything to make them  
> redundant, it's giving you the emcpower device and the emcp, etc.  
> drivers to front end them and give you a multipathed device (the  
> emcpower device).  It DOES choose which one to use, for all 
> I/O going  
> through the emcpower device.  In a situation where you lose 
> paths and  
> I/O is moving, you'll see scsi errors down one path, then the next,  
> then the next, as PowerPath gets fed the scsi error and tries the  
> next device path.  If you use those actual device paths, you're not  
> actually getting a device that PowerPath is multipathing for you  
> (i.e. it does not dig in beneath the scsi driver)

I'm afraid I have to disagree with you: I'm using the
/dev/dsk/c2t$WWNdXs2 devices quite happily with powerpath handling
failover for my clariion.

# powermt version
EMC powermt for PowerPath (c) Version 4.4.0 (build 274)
# powermt display dev=58
Pseudo name=emcpower58a
CLARiiON ID=APM00051704678 [uscicsap1]
Logical device ID=6006016067E51400565259A15331DB11 [saperqdb1:
/oracle/Q02/saparch]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A

==
 Host ---   - Stor -   -- I/O Path -  -- Stats ---
### HW Path                          I/O Paths                 Interf.  Mode    State  Q-IOs  Errors
==
3073 [EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]  c2t5006016130202E48d58s0  SP A1  active  alive  0  0
3073 [EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]  c2t5006016930202E48d58s0  SP B1  active  alive  0  0
# fsck /dev/dsk/c2t5006016130202E48d58s0
** /dev/dsk/c2t5006016130202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups

FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0%
fragmentation)
# fsck /dev/dsk/c2t5006016930202E48d58s0
** /dev/dsk/c2t5006016930202E48d58s0
** Last Mounted on /zones/saperqdb1/root/oracle/Q02/saparch
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups

FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? n

144 files, 189504 used, 33832172 free (420 frags, 4228969 blocks, 0.0%
fragmentation)

### So at this point, I can look down either path and get to my data.
Now I kill one of the two paths via SAN zoning, run cfgadm -c configure c2,
and powermt check reports that the path to SP A is now dead.  I'm still
able to fsck the device on the dead path:
# cfgadm -c configure c2
# powermt check
Warning: CLARiiON device path c2t5006016130202E48d58s0 is currently
dead.
Do you want to remove it (y/n/a/q)? n
# powermt display dev=58
Pseudo name=emcpower58a
CLARiiON ID=APM00051704678 [uscicsap1]
Logical device ID=6006016067E51400565259A15331DB11 [saperqdb1:
/oracle/Q02/saparch]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP B

==
 Host ---   - Stor -   -- I/O Path -  -- Stats ---
### HW Path                          I/O Paths                 Interf.  Mode    State  Q-IOs  Errors
==
3073 [EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]  c2t5006016130202E48d58s0  SP A1  active  dead   0  1
3073 [EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]  c2t5006016930202E48d58s0  SP B1  active  alive  0  0
# fsck /dev/dsk/c2t5006016130202E48d58s0
** /dev/dsk/c2t5006016130202E48d58s0
** Last M

Re: [zfs-discuss] ZFS and powerpath

2007-07-18 Thread Todd Moore
There is an open issue/bug with ZFS and EMC PowerPath for Solaris 10 in x86/x64 
space.  My customer encountered the issue back in April 2007 and is awaiting 
the patch.  We're expecting an update (hopefully a patch) by the end of July 
2007.

As I recall, it did involve CX arrays and "trespass" functionality.
 
 



Re: [zfs-discuss] ZFS and powerpath

2007-07-17 Thread John Martinez

On Jul 15, 2007, at 12:59 PM, Peter Tribble wrote:

> On 7/13/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
>> ZFS needs to use the top level multipath device or bad things will
>> probably happen in a failover or in initial zpool creation. For
>> example: You'll try to use the device on two paths and cause a lun
>> failover to occur.
>>
>> Mpxio fixes a lot of these issues. I strongly suggest using mpxio
>> instead of powerpath but sometimes it's all you can use if the  
>> array is
>> new and mpxio doesn't have the hooks for it ... yet.
>
> Hm. This is pretty old stuff, and what is irritating is that I had  
> it all
> working under mpxio. Then I was told the system had to be reconfigured
> to use powerpath, and I've not seen my data since.
>
> (I follow the logic that it's the datacenter standard, although I'm no
> longer sure I agree with it based on my experience so far; nor does
> my own experience match the alleged technical superiority of
> powerpath over mpxio. Ho hum.)


I would definitely offer to work with your SAN team on getting MPxIO  
"certified" with your arrays. Where I work, we use Veritas DMP with  
our Hitachi arrays. When we go to ZFS, we will, of course, go to  
MPxIO instead of Hitachi's HDLM. The only difference from this thread  
is that we use Sun-branded Qlogic HBAs instead of Emulex on both our  
SPARC and x64 Sun servers.

It's been slow going for me, but I've had great success working with  
my SAN team (who in turn work with HDS), on getting new technologies*  
working in our environment.

-john

*Our SAN team runs things very conservatively. They consider "new"  
technologies to be things introduced one to two years ago.


Re: [zfs-discuss] ZFS and powerpath

2007-07-17 Thread Brian Wilson



On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:


Darren Dunham wrote:

If it helps at all.  We're having a similar problem.  Any LUN's
configured with their default owner to be SP B, don't get along  
with
ZFS.   We're running on a T2000, With Emulex cards and the ssd  
driver.

MPXIO seems to work well for most cases, but the SAN guys are not
comfortable with it.


Are you using the top level powerpath device? Is the clariion in an
auto-trespass mode where any i/o going down the alt path will  
cause the

LUNs to move?



My previous experience with powerpath was that it rode below the  
Solaris

device layer.  So you couldn't cause trespass by using the "wrong"
device.  It would just go to powerpath which would choose the link to
use on its own.

Is this not true or has it changed over time?




I haven't looked at power path for some time but it used to be the
opposite. The powerpath node sat on top of the actual device paths.  
One
of the selling points of mpxio is that it doesn't have that  
problem. (At
least for devices it supports.) Most of the multipath software had  
that

same limitation



I agree, it's not true.  I don't know how long it hasn't been true,  
but the last year and a half I've been implementing PowerPath on  
Solaris 8, 9, 10, the way to make it work is to point whatever disk  
tool you're using to the emcpower device.  The other paths are there  
because leadville finds them and creates them (if you're using  
leadville), but PowerPath isn't doing anything to make them  
redundant, it's giving you the emcpower device and the emcp, etc.  
drivers to front end them and give you a multipathed device (the  
emcpower device).  It DOES choose which one to use, for all I/O going  
through the emcpower device.  In a situation where you lose paths and  
I/O is moving, you'll see scsi errors down one path, then the next,  
then the next, as PowerPath gets fed the scsi error and tries the  
next device path.  If you use those actual device paths, you're not  
actually getting a device that PowerPath is multipathing for you  
(i.e. it does not dig in beneath the scsi driver)
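
For example, building a pool against the pseudo-device rather than one of
the native paths looks roughly like this (a sketch only; the pool name and
the emcpower slice are placeholders, and the right slice depends on how the
LUN is labelled):

# zpool create sanpool /dev/dsk/emcpower0c
# zpool status sanpool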


I haven't had any problem making Veritas, DiskSuite, or in a very few  
cases so far ZFS work by pointing them at the emcpower devices. (note  
that 'not having any problem' included reading the PowerPath manuals  
and docs before implementing it to make sure it's being done 'right'  
according to EMC's procedures, not just Sun's or Veritas')  I haven't  
dissected


cheers,
Brian




However, I'm not an expert on powerpath by any stretch of the
imagination. I just took a quick look at the powerpath manual (4.0
version.) and it says you can now use both types which seems a little
confusing. Again, I'd be interested to see if using the pseudo-device
works better... not to mention how it works using the direct path disk
entry.







Re: [zfs-discuss] ZFS and powerpath

2007-07-16 Thread Torrey McMahon
Darren Dunham wrote:
>>> If it helps at all.  We're having a similar problem.  Any LUN's 
>>> configured with their default owner to be SP B, don't get along with 
>>> ZFS.   We're running on a T2000, With Emulex cards and the ssd driver.  
>>> MPXIO seems to work well for most cases, but the SAN guys are not 
>>> comfortable with it.
>>>   
>> Are you using the top level powerpath device? Is the clariion in an 
>> auto-trespass mode where any i/o going down the alt path will cause the 
>> LUNs to move?
>> 
>
> My previous experience with powerpath was that it rode below the Solaris
> device layer.  So you couldn't cause trespass by using the "wrong"
> device.  It would just go to powerpath which would choose the link to
> use on its own.
>
> Is this not true or has it changed over time?
>   

I haven't looked at power path for some time but it used to be the 
opposite. The powerpath node sat on top of the actual device paths. One 
of the selling points of mpxio is that it doesn't have that problem. (At 
least for devices it supports.) Most of the multipath software had that 
same limitation

However, I'm not an expert on powerpath by any stretch of the 
imagination. I just took a quick look at the powerpath manual (4.0 
version.) and it says you can now use both types which seems a little 
confusing. Again, I'd be interested to see if using the pseudo-device 
works better... not to mention how it works using the direct path disk 
entry.





Re: [zfs-discuss] ZFS and powerpath

2007-07-16 Thread Darren Dunham
> > If it helps at all.  We're having a similar problem.  Any LUN's 
> > configured with their default owner to be SP B, don't get along with 
> > ZFS.   We're running on a T2000, With Emulex cards and the ssd driver.  
> > MPXIO seems to work well for most cases, but the SAN guys are not 
> > comfortable with it.
> 
> Are you using the top level powerpath device? Is the clariion in an 
> auto-trespass mode where any i/o going down the alt path will cause the 
> LUNs to move?

My previous experience with powerpath was that it rode below the Solaris
device layer.  So you couldn't cause trespass by using the "wrong"
device.  It would just go to powerpath which would choose the link to
use on its own.

Is this not true or has it changed over time?

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >


Re: [zfs-discuss] ZFS and powerpath

2007-07-16 Thread Torrey McMahon
Carisdad wrote:
> Peter Tribble wrote:
>   
>> # powermt display dev=all
>> Pseudo name=emcpower0a
>> CLARiiON ID=APM00043600837 []
>> Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46]
>> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
>> Owner: default=SP B, current=SP B
>> ==
>>  Host ---   - Stor -   -- I/O Path -  -- Stats 
>> ---
>> ###  HW PathI/O PathsInterf.   ModeState  Q-IOs 
>> Errors
>> ==
>> 3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
>> c2t500601613060099Cd1s0 SP A1
>> active  alive  0  0
>> 3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
>> c2t500601693060099Cd1s0 SP B1
>> active  alive  0  0
>> 3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
>> c3t500601603060099Cd1s0 SP A0
>> active  alive  0  0
>> 3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
>> c3t500601683060099Cd1s0 SP B0
>> active  alive  0  0
>>   
>>
>> 
> If it helps at all.  We're having a similar problem.  Any LUN's 
> configured with their default owner to be SP B, don't get along with 
> ZFS.   We're running on a T2000, With Emulex cards and the ssd driver.  
> MPXIO seems to work well for most cases, but the SAN guys are not 
> comfortable with it.

Are you using the top level powerpath device? Is the clariion in an 
auto-trespass mode where any i/o going down the alt path will cause the 
LUNs to move?




Re: [zfs-discuss] ZFS and powerpath

2007-07-15 Thread Peter Tribble
On 7/13/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
> ZFS needs to use the top level multipath device or bad things will
> probably happen in a failover or in initial zpool creation. For
> example: You'll try to use the device on two paths and cause a lun
> failover to occur.
>
> Mpxio fixes a lot of these issues. I strongly suggest using mpxio
> instead of powerpath but sometimes it's all you can use if the array is
> new and mpxio doesn't have the hooks for it ... yet.

Hm. This is pretty old stuff, and what is irritating is that I had it all
working under mpxio. Then I was told the system had to be reconfigured
to use powerpath, and I've not seen my data since.

(I follow the logic that it's the datacenter standard, although I'm no
longer sure I agree with it based on my experience so far; nor does
my own experience match the alleged technical superiority of
powerpath over mpxio. Ho hum.)

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ZFS and powerpath

2007-07-15 Thread Peter Tribble
On 7/15/07, JS <[EMAIL PROTECTED]> wrote:
>
>  I run zfs (v2 and v3) on Emulex and Sun Branded emulex on SPARC with 
> Powerpath 4.5.0 (and MPxIO in other cases) and Clariion arrays and have never 
> seen this problem. In fact I'm trying to get rid of my PowerPath instances 
> and standardizing on MPxIO and when I've destroyed PP devices that lead to a 
> zpool and rediscover the devices the pools show up healthy with the new MPxIO 
> names. This is all using Update 3 with 118833-36.

That's what I would have expected to happen. We're going the other way
but all I thought was going to happen was that the paths would change
but everything else would be fine.

Unfortunately not :-(

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ZFS and powerpath

2007-07-15 Thread JS
> Shows up as lpfc (is that Emulex?)

lpfc  (or fibre-channel) is an Emulex branded emulex card device - sun branded 
emulex uses the emlxs driver. 

 I run zfs (v2 and v3) on Emulex and Sun Branded emulex on SPARC with Powerpath 
4.5.0 (and MPxIO in other cases) and Clariion arrays and have never seen this 
problem. In fact I'm trying to get rid of my PowerPath instances and 
standardizing on MPxIO and when I've destroyed PP devices that lead to a zpool 
and rediscover the devices the pools show up healthy with the new MPxIO names. 
This is all using Update 3 with 118833-36.
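
For anyone doing the same PowerPath-to-MPxIO move, the rough sequence is
something like the following (a sketch only; the pool name is a placeholder,
and stmsboot -e wants a reboot before the new paths show up):

# zpool export mypool
# stmsboot -e
# init 6
# zpool import mypool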

 As an HBA note, I have a pair of Emulex LP9802s (lpfc devices) with proper 
firmware for the CX-600 on a v890 using zpools and a ridiculous number of 
device errors (esp Page 83 errors). Other systems using Sun Branded Emulex 
Cards (SG-XPCI1FC-EM2) don't show these errors and I'm swapping the cards later 
this month to get rid of the errors.
 
 


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Carisdad
Peter Tribble wrote:
> # powermt display dev=all
> Pseudo name=emcpower0a
> CLARiiON ID=APM00043600837 []
> Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46]
> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
> Owner: default=SP B, current=SP B
> ==
>  Host ---   - Stor -   -- I/O Path -  -- Stats ---
> ###  HW PathI/O PathsInterf.   ModeState  Q-IOs Errors
> ==
> 3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
> c2t500601613060099Cd1s0 SP A1
> active  alive  0  0
> 3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
> c2t500601693060099Cd1s0 SP B1
> active  alive  0  0
> 3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
> c3t500601603060099Cd1s0 SP A0
> active  alive  0  0
> 3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
> c3t500601683060099Cd1s0 SP B0
> active  alive  0  0
>   
>
If it helps at all, we're having a similar problem.  Any LUNs 
configured with their default owner set to SP B don't get along with 
ZFS.  We're running on a T2000, with Emulex cards and the ssd driver.  
MPXIO seems to work well for most cases, but the SAN guys are not 
comfortable with it.

-Andy


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Darren Dunham
> Doesn't that then create dependence on the cXtXdXsX device name to be
> available?  
> 
> /dev/dsk/c2t500601601020813Ed0s0 = path1
> /dev/dsk/c2t500601681020813Ed0s0 = path2
> /dev/dsk/emcpower0a = pseudo device pointing to both paths.
> 
> So if you've got a zpool on /dev/dsk/c2t500601601020813Ed0s0 and that
> path becomes unavailable (perhaps due to device renumber or failure),
> won't the zpool be unavailable?  Whereas a zpool on /dev/dsk/emcpower0a
> will automagically handle the situation (assuming the 2nd path is
> available)?

How would the path become unavailable?  While it looks like a raw path,
it's still being managed by powerpath.  So during a boot, it should not
suddenly disappear, even if the path goes away. 

If it were to go away between boots, then I see it as the same situation
for a disk being renumbered/repathed on "normal" single-pathed storage.
ZFS needs to be able to scan all paths until it finds the disk again.
The pool should not be tied to the path.
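
Put another way, if a path does get renamed between boots the fix should
just be a rescan rather than surgery (a sketch; "tank" is a placeholder
pool name):

# zpool export tank
# zpool import
# zpool import tank

zpool import with no arguments lists whatever pools it finds by scanning
/dev/dsk; naming the pool then re-imports it under whichever paths answered.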

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Torrey McMahon
[EMAIL PROTECTED] wrote:
>
>
>
> [EMAIL PROTECTED] wrote on 07/13/2007 02:21:52 PM:
>
>   
>> Peter Tribble wrote:
>>
>> 
>>> I've not got that far. During an import, ZFS just pokes around - there
>>> doesn't seem to be an explicit way to tell it which particular devices
>>> or SAN paths to use.
>>>   
>> You can't tell it which devices to use in a straightforward manner. But
>> you can tell it which directories to scan.
>>
>> zpool import [-d dir]
>>
>> By default, it scans /dev/dsk.
>>
>> Does truss of zfs import show the powerpath devices being opened and
>> read from?
>> 
>
>
> AFAIK powerpath does not really need to use the powerpath pseudo devices --
> they are just there for convenience.  I would expect the drives to be
> readable from either the c1 devices or emc*.

ZFS needs to use the top level multipath device or bad things will 
probably happen in a failover or in initial zpool creation. For 
example: You'll try to use the device on two paths and cause a lun 
failover to occur.

Mpxio fixes a lot of these issues. I strongly suggest using mpxio 
instead of powerpath but sometimes it's all you can use if the array is 
new and mpxio doesn't have the hooks for it ... yet.
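
If you're not sure whether mpxio has the hooks for a given array, a quick
check (a sketch; assumes the Solaris 10 mpathadm and stmsboot utilities are
installed) is:

# mpathadm list lu
# stmsboot -L

mpathadm list lu shows the logical units scsi_vhci has claimed, and
stmsboot -L shows the mapping from the per-path device names to the MPxIO
names.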



Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Torrey McMahon
Peter Tribble wrote:
> On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
>   
>> I wonder what kind of card Peter's using and if there is a potential
>> linkage there.  We've got the Sun branded Emulex cards in our sparcs.  I
>> also wonder if Peter were able to allocate an additional LUN to his
>> system whether or not he'd be able to create a pool on that new LUN.
>> 
>
> On a different continent and I didn't buy it. Shows up as lpfc (is
> that Emulex?). I'm not sure that's related - I can see the LUNs
> and devices, it's just that zfs isn't happy.

Those, lpfc, are native Emulex drivers.


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Alderman, Sean
Doesn't that then create dependence on the cXtXdXsX device name to be
available?  

/dev/dsk/c2t500601601020813Ed0s0 = path1
/dev/dsk/c2t500601681020813Ed0s0 = path2
/dev/dsk/emcpower0a = pseudo device pointing to both paths.

So if you've got a zpool on /dev/dsk/c2t500601601020813Ed0s0 and that
path becomes unavailable (perhaps due to device renumber or failure),
won't the zpool be unavailable?  Whereas a zpool on /dev/dsk/emcpower0a
will automagically handle the situation (assuming the 2nd path is
available)?

My thought would be that perhaps zpool import is confused seeing the
same zpool information on all three devices, but I concede this is a
relatively uneducated guess.
--
Sean

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Friday, July 13, 2007 4:11 PM
To: Manoj Joseph
Cc: [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS and powerpath






[EMAIL PROTECTED] wrote on 07/13/2007 02:21:52 PM:

> Peter Tribble wrote:
>
> > I've not got that far. During an import, ZFS just pokes around - 
> > there doesn't seem to be an explicit way to tell it which particular

> > devices or SAN paths to use.
>
> You can't tell it which devices to use in a straightforward manner. 
> But you can tell it which directories to scan.
>
> zpool import [-d dir]
>
> By default, it scans /dev/dsk.
>
> Does truss of zfs import show the powerpath devices being opened and 
> read from?


AFAIK powerpath does not really need to use the powerpath pseudo devices
-- they are just there for convenience.  I would expect the drives to be
readable from either the c1 devices or emc*.




Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Wade . Stuart





[EMAIL PROTECTED] wrote on 07/13/2007 02:21:52 PM:

> Peter Tribble wrote:
>
> > I've not got that far. During an import, ZFS just pokes around - there
> > doesn't seem to be an explicit way to tell it which particular devices
> > or SAN paths to use.
>
> You can't tell it which devices to use in a straightforward manner. But
> you can tell it which directories to scan.
>
> zpool import [-d dir]
>
> By default, it scans /dev/dsk.
>
> Does truss of zfs import show the powerpath devices being opened and
> read from?


AFAIK powerpath does not really need to use the powerpath pseudo devices --
they are just there for convenience.  I would expect the drives to be
readable from either the c1 devices or emc*.




Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Manoj Joseph
Peter Tribble wrote:

> I've not got that far. During an import, ZFS just pokes around - there
> doesn't seem to be an explicit way to tell it which particular devices
> or SAN paths to use.

You can't tell it which devices to use in a straightforward manner. But 
you can tell it which directories to scan.

zpool import [-d dir]

By default, it scans /dev/dsk.
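
One way to steer the import at just the PowerPath pseudo-devices (a sketch;
the directory name is arbitrary and the emcpower names are the ones shown
by format elsewhere in this thread) is to point -d at a directory that
contains only links to them:

# mkdir /tmp/emcdev
# ln -s /dev/dsk/emcpower0a /tmp/emcdev/emcpower0a
# ln -s /dev/dsk/emcpower1a /tmp/emcdev/emcpower1a
# zpool import -d /tmp/emcdev

That keeps the import from poking at the per-path cXtXdX nodes at all.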

Does truss of zfs import show the powerpath devices being opened and 
read from?

Regards,
Manoj


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
> I wonder what kind of card Peter's using and if there is a potential
> linkage there.  We've got the Sun branded Emulex cards in our sparcs.  I
> also wonder if Peter were able to allocate an additional LUN to his
> system whether or not he'd be able to create a pool on that new LUN.

On a different continent and I didn't buy it. Shows up as lpfc (is
that Emulex?). I'm not sure that's related - I can see the LUNs
and devices, it's just that zfs isn't happy.

(I still have this feeling that powerpath is doing something slightly
differently that zfs doesn't expect.)

Thanks,

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
On 7/13/07, Brian Wilson <[EMAIL PROTECTED]> wrote:
> Hm.  How many devices/LUNS can the server see?  I don't know how
> import finds the pools on the disk, but it sounds like it's not happy
> somehow.  Is there any possibility it's seeing a Clariion mirror copy
> of the disks in the pool as well?

I don't think it's that. As far as I can tell it can see exactly the LUNs
(2 of them - via 2 controllers and 2 HBAs, hence 8 native devices)
it's supposed to. And they appear to have exactly the right data on
them. And it can even detect part of the pool on each LUN - it just
fails to open one of them.

Thanks,

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Brian Wilson



On Jul 13, 2007, at 1:15 PM, Alderman, Sean wrote:


I wonder what kind of card Peter's using and if there is a potential
linkage there.  We've got the Sun branded Emulex cards in our  
sparcs.  I

also wonder if Peter were able to allocate an additional LUN to his
system whether or not he'd be able to create a pool on that new LUN.

I'm not sure why exactly they were chosen over the qlogic, some of our
admins swear by the qlogic cards, others have had bad experiences
with the qlogic cards not allowing for persistent binding on some
configurations, but from my perspective being mostly a SAN noob  
it's all

hearsay.


--
Sean M. Alderman
513.204.2704

-Original Message-
From: Brian Wilson [mailto:[EMAIL PROTECTED]
Sent: Friday, July 13, 2007 1:58 PM
To: Alderman, Sean
Cc: Peter Tribble; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS and powerpath

Hmm.  Odd.  I've got PowerPath working fine with ZFS with both  
Symmetrix

and Clariion back ends.
PowerPath Version is 4.5.0, running on leadville qlogic drivers.
Sparc hardware.  (if it matters)

I ran one of our test databases on ZFS on the DMX via PowerPath for a
couple months until we switched off of it because of the 'bogus memory
usage' statistics problem.  We still use it on a server we use for  
logs

processing and retention that uses the Clariion as a back end.

cheers,
Brian


On Jul 13, 2007, at 11:08 AM, Alderman, Sean wrote:


There was a Sun Forums post that I referenced in that other thread
that mentioned something about mpxio working but powerpath not
working.  Of course I don't know how valid those statements are/were,
and I don't recall much detail given.


--
Sean

-Original Message-
From: Peter Tribble [mailto:[EMAIL PROTECTED]
Sent: Friday, July 13, 2007 11:53 AM
To: Alderman, Sean
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS and powerpath

On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:

You wouldn't happen to be running this on a SPARC would you?


That I would.

I started a thread last week regarding CLARiiON+ZFS+SPARC = core  
dump



when creating a zpool.  I filed a bug report, though it doesn't
appear



to be in the database (not sure if that means it was rejected or I
didn't submit it correctly).


I'm not seeing it quite like that. ZFS+mpxio works, ZFS+powerpath
seems to have "issues".


Also, I was using the powerpath pseudo device not the WWN though.


I've not got that far. During an import, ZFS just pokes around -  
there


doesn't seem to be an explicit way to tell it which particular  
devices



or SAN paths to use.



Hm.  How many devices/LUNS can the server see?  I don't know how  
import finds the pools on the disk, but it sounds like it's not happy  
somehow.  Is there any possibility it's seeing a Clariion mirror copy  
of the disks in the pool as well?


Just a couple thoughts.
Brian
 
---

Brian Wilson, Sun SE, UW-Madison DoIT
Room 3162 CS&S   608-263-8047
bfwilson(a)doit.wisc.edu
'I try to save a life a day. Usually it's my own.' - John Crichton
 
---




--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/






Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread eric kustarz

On Jul 13, 2007, at 10:57 AM, Brian Wilson wrote:

> Hmm.  Odd.  I've got PowerPath working fine with ZFS with both  
> Symmetrix and Clariion back ends.
> PowerPath Version is 4.5.0, running on leadville qlogic drivers.   
> Sparc hardware.  (if it matters)
>
> I ran one of our test databases on ZFS on the DMX via PowerPath for a  
> couple months until we switched off of it because of the 'bogus  
> memory usage' statistics problem.  We still use it on a server we  
> use for logs processing and retention that uses the Clariion as a  
> back end.
>

hey Brian,

Out of curiosity, what does 'bogus memory usage' refer to?

eric



Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Alderman, Sean
I wonder what kind of card Peter's using and if there is a potential
linkage there.  We've got the Sun branded Emulex cards in our sparcs.  I
also wonder if Peter were able to allocate an additional LUN to his
system whether or not he'd be able to create a pool on that new LUN.

I'm not sure exactly why they were chosen over the qlogic; some of our
admins swear by the qlogic cards, while others have had bad experiences
with the qlogic cards not allowing persistent binding on some
configurations. From my perspective, being mostly a SAN noob, it's all
hearsay. 


--
Sean M. Alderman
513.204.2704

-Original Message-
From: Brian Wilson [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 13, 2007 1:58 PM
To: Alderman, Sean
Cc: Peter Tribble; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS and powerpath

Hmm.  Odd.  I've got PowerPath working fine with ZFS with both Symmetrix
and Clariion back ends.
PowerPath Version is 4.5.0, running on leadville qlogic drivers.   
Sparc hardware.  (if it matters)

I ran one of our test databases on ZFS on the DMX via PowerPath for a
couple months until we switched off of it because of the 'bogus memory
usage' statistics problem.  We still use it on a server we use for logs
processing and retention that uses the Clariion as a back end.

cheers,
Brian


On Jul 13, 2007, at 11:08 AM, Alderman, Sean wrote:

> There was a Sun Forums post that I referenced in that other thread 
> that mentioned something about mpxio working but powerpath not 
> working.  Of course I don't know how valid those statements are/were, 
> and I don't recall much detail given.
>
>
> --
> Sean
>
> -Original Message-
> From: Peter Tribble [mailto:[EMAIL PROTECTED]
> Sent: Friday, July 13, 2007 11:53 AM
> To: Alderman, Sean
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] ZFS and powerpath
>
> On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
>> You wouldn't happen to be running this on a SPARC would you?
>
> That I would.
>
>> I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump

>> when creating a zpool.  I filed a bug report, though it doesn't 
>> appear
>
>> to be in the database (not sure if that means it was rejected or I 
>> didn't submit it correctly).
>
> I'm not seeing it quite like that. ZFS+mpxio works, ZFS+powerpath 
> seems to have "issues".
>
>> Also, I was using the powerpath pseudo device not the WWN though.
>
> I've not got that far. During an import, ZFS just pokes around - there

> doesn't seem to be an explicit way to tell it which particular devices

> or SAN paths to use.
>
> --
> -Peter Tribble
> http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/ 


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Brian Wilson
Hmm.  Odd.  I've got PowerPath working fine with ZFS with both  
Symmetrix and Clariion back ends.
PowerPath Version is 4.5.0, running on leadville qlogic drivers.   
Sparc hardware.  (if it matters)


I ran one of our test databases on ZFS on the DMX via PowerPath for a  
couple months until we switched off of it because of the 'bogus  
memory usage' statistics problem.  We still use it on a server we use  
for logs processing and retention that uses the Clariion as a back end.


cheers,
Brian


On Jul 13, 2007, at 11:08 AM, Alderman, Sean wrote:

There was a Sun Forums post that I referenced in that other thread  
that

mentioned something about mpxio working but powerpath not working.  Of
course I don't know how valid those statements are/were, and I don't
recall much detail given.


--
Sean

-Original Message-
From: Peter Tribble [mailto:[EMAIL PROTECTED]
Sent: Friday, July 13, 2007 11:53 AM
To: Alderman, Sean
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS and powerpath

On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:

You wouldn't happen to be running this on a SPARC would you?


That I would.


I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump
when creating a zpool.  I filed a bug report, though it doesn't  
appear



to be in the database (not sure if that means it was rejected or I
didn't submit it correctly).


I'm not seeing it quite like that. ZFS+mpxio works, ZFS+powerpath  
seems

to have "issues".


Also, I was using the powerpath pseudo device not the WWN though.


I've not got that far. During an import, ZFS just pokes around - there
doesn't seem to be an explicit way to tell it which particular devices
or SAN paths to use.

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Alderman, Sean
There was a Sun Forums post that I referenced in that other thread that
mentioned something about mpxio working but powerpath not working.  Of
course I don't know how valid those statements are/were, and I don't
recall much detail given.


--
Sean

-Original Message-
From: Peter Tribble [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 13, 2007 11:53 AM
To: Alderman, Sean
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS and powerpath

On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
> You wouldn't happen to be running this on a SPARC would you?

That I would.

> I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump 
> when creating a zpool.  I filed a bug report, though it doesn't appear

> to be in the database (not sure if that means it was rejected or I 
> didn't submit it correctly).

I'm not seeing it quite like that. ZFS+mpxio works, ZFS+powerpath seems
to have "issues".

> Also, I was using the powerpath pseudo device not the WWN though.

I've not got that far. During an import, ZFS just pokes around - there
doesn't seem to be an explicit way to tell it which particular devices
or SAN paths to use.

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
> You wouldn't happen to be running this on a SPARC would you?

That I would.

> I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump
> when creating a zpool.  I filed a bug report, though it doesn't appear
> to be in the database (not sure if that means it was rejected or I
> didn't submit it correctly).

I'm not seeing it quite like that. ZFS+mpxio works, ZFS+powerpath
seems to have "issues".

> Also, I was using the powerpath pseudo device not the WWN though.

I've not got that far. During an import, ZFS just pokes around - there
doesn't seem to be an explicit way to tell it which particular devices
or SAN paths to use.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Alderman, Sean
You wouldn't happen to be running this on a SPARC would you? 

I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump
when creating a zpool.  I filed a bug report, though it doesn't appear
to be in the database (not sure if that means it was rejected or I
didn't submit it correctly).  

Also, I was using the powerpath pseudo device, not the WWN, though.  We
had planned on opening a ticket with Sun, but our DBAs sufficiently put
the kibosh on using ZFS on their systems when they caught wind of my
problem, so basically I can no longer use that server to investigate the
issue, and unfortunately I do not have any other available sparcs with
SAN connectivity.

--
Sean

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter Tribble
Sent: Friday, July 13, 2007 11:18 AM
To: [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org; [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] ZFS and powerpath

On 7/13/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Can you post a "powermt display dev=all", a zpool status and format 
> command?

Sure.

There are no pools to give status on because I can't import them.
For the others:

# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00043600837 []
Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46] state=alive;
policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B

==
 Host ---   - Stor -   -- I/O Path -  --
Stats ---
###  HW PathI/O PathsInterf.   ModeState  Q-IOs
Errors

==
3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
c2t500601613060099Cd1s0 SP A1
active  alive  0  0
3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
c2t500601693060099Cd1s0 SP B1
active  alive  0  0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
c3t500601603060099Cd1s0 SP A0
active  alive  0  0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
c3t500601683060099Cd1s0 SP B0
active  alive  0  0

Pseudo name=emcpower1a
CLARiiON ID=APM00043600837 []
Logical device ID=600601600C4912004C5CFDFFB62BDA11 [LUN 0] state=alive;
policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A

==
 Host ---   - Stor -   -- I/O Path -  --
Stats ---
###  HW PathI/O PathsInterf.   ModeState  Q-IOs
Errors

==
3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
c2t500601613060099Cd0s0 SP A1
active  alive  0  0
3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
c2t500601693060099Cd0s0 SP B1
active  alive  0  0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
c3t500601603060099Cd0s0 SP A0
active  alive  0  0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
c3t500601683060099Cd0s0 SP B0
active  alive  0  0



AVAILABLE DISK SELECTIONS:
   0. c1t0d0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c2t500601613060099Cd0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],0
   3. c2t500601693060099Cd0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],0
   4. c2t500601613060099Cd1 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1
   5. c2t500601693060099Cd1 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1
   6. c3t500601683060099Cd0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],0
   7. c3t500601603060099Cd0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],0
   8. c3t500601683060099Cd1 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1
   9. c3t500601603060099Cd1 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1
  10. emcpower0a 
  /pseudo/[EMAIL PROTECTED]
  11. emcpower1a 
  /pseudo/[EMAIL PROTECTED]

>
> [EMAIL PROTECTED] wrote on 07/13/2007 09:38:01 AM:
>
> > How much fun can you have with a simple thing like powerpath?
> >
> > Here's the story: I have a (remote) system with access to a couple 
> > of EMC LUNs. Originally, I set it up with mpxio and created a simple

> &

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
On 7/13/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Can you post a "powermt display dev=all", a zpool status and format
> command?

Sure.

There are no pools to give status on because I can't import them.
For the others:

# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00043600837 []
Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host ---   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                                              I/O Paths                Interf.  Mode    State  Q-IOs  Errors
==
3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  c2t500601613060099Cd1s0  SP A1  active  alive  0  0
3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  c2t500601693060099Cd1s0  SP B1  active  alive  0  0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  c3t500601603060099Cd1s0  SP A0  active  alive  0  0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  c3t500601683060099Cd1s0  SP B0  active  alive  0  0

Pseudo name=emcpower1a
CLARiiON ID=APM00043600837 []
Logical device ID=600601600C4912004C5CFDFFB62BDA11 [LUN 0]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==
 Host ---   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                                              I/O Paths                Interf.  Mode    State  Q-IOs  Errors
==
3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  c2t500601613060099Cd0s0  SP A1  active  alive  0  0
3073 [EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  c2t500601693060099Cd0s0  SP B1  active  alive  0  0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  c3t500601603060099Cd0s0  SP A0  active  alive  0  0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  c3t500601683060099Cd0s0  SP B0  active  alive  0  0



AVAILABLE DISK SELECTIONS:
   0. c1t0d0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c2t500601613060099Cd0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],0
   3. c2t500601693060099Cd0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],0
   4. c2t500601613060099Cd1 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1
   5. c2t500601693060099Cd1 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1
   6. c3t500601683060099Cd0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],0
   7. c3t500601603060099Cd0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],0
   8. c3t500601683060099Cd1 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1
   9. c3t500601603060099Cd1 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1
  10. emcpower0a 
  /pseudo/[EMAIL PROTECTED]
  11. emcpower1a 
  /pseudo/[EMAIL PROTECTED]

>
> [EMAIL PROTECTED] wrote on 07/13/2007 09:38:01 AM:
>
> > How much fun can you have with a simple thing like powerpath?
> >
> > Here's the story: I have a (remote) system with access to a couple
> > of EMC LUNs. Originally, I set it up with mpxio and created a simple
> > zpool containing the two LUNs.
> >
> > It's now been reconfigured to use powerpath instead of mpxio.
> >
> > My problem is that I can't import the pool. I get:
> >
> >   pool: ##
> > id: ###
> >  state: FAULTED
> > status: One or more devices are missing from the system.
> > action: The pool cannot be imported. Attach the missing
> > devices and try again.
> >see: http://www.sun.com/msg/ZFS-8000-3C
> > config:
> >
> > disk00   UNAVAIL   insufficient replicas
> >   c3t50060xxCd1  ONLINE
> >   c3t50060xxCd0  UNAVAIL   cannot open
> >
> > Now, it's working up to the point at which it's worked out that
> > the bits of the pool are in the right places. It just can't open
> > all the bits. Why is that?
> >
> > I notice that it's using the underlying cXtXdX device names
> > rather than the virtual emcpower{0,1} names. However, rather
> > more worrying is that if I try to create a new pool, then it correctly
> > fails if I use the cXtXdX device (wa

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Wade . Stuart





Can you post a "powermt display dev=all", a zpool status and format
command?



[EMAIL PROTECTED] wrote on 07/13/2007 09:38:01 AM:

> How much fun can you have with a simple thing like powerpath?
>
> Here's the story: I have a (remote) system with access to a couple
> of EMC LUNs. Originally, I set it up with mpxio and created a simple
> zpool containing the two LUNs.
>
> It's now been reconfigured to use powerpath instead of mpxio.
>
> My problem is that I can't import the pool. I get:
>
>   pool: ##
> id: ###
>  state: FAULTED
> status: One or more devices are missing from the system.
> action: The pool cannot be imported. Attach the missing
> devices and try again.
>see: http://www.sun.com/msg/ZFS-8000-3C
> config:
>
> disk00   UNAVAIL   insufficient replicas
>   c3t50060xxCd1  ONLINE
>   c3t50060xxCd0  UNAVAIL   cannot open
>
> Now, it's working up to the point at which it's worked out that
> the bits of the pool are in the right places. It just can't open
> all the bits. Why is that?
>
> I notice that it's using the underlying cXtXdX device names
> rather than the virtual emcpower{0,1} names. However, rather
> more worrying is that if I try to create a new pool, then it correctly
> fails if I use the cXtXdX device (warning me that it contains
> part of a pool) but if I go through the emcpower devices
> then I don't get a warning.
>
> (One other snippet - the cXtXdX device nodes look
> slightly odd, in that some of them look like the traditional
> SMI labelled nodes, while some are more in an EFI style
> with a device node for the disk.)
>
> Is there any way to fix this or are we going to have to
> start over?
>
> If we do start over, is powerpath going to behave itself
> or might this sort of issue bite us again in the future?
>
> Thanks for any help or suggestions from any
> powerpath experts.
>
> --
> -Peter Tribble
> http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


[zfs-discuss] ZFS and powerpath

2007-07-13 Thread Peter Tribble
How much fun can you have with a simple thing like powerpath?

Here's the story: I have a (remote) system with access to a couple
of EMC LUNs. Originally, I set it up with mpxio and created a simple
zpool containing the two LUNs.

It's now been reconfigured to use powerpath instead of mpxio.

My problem is that I can't import the pool. I get:

  pool: ##
id: ###
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

disk00   UNAVAIL   insufficient replicas
  c3t50060xxCd1  ONLINE
  c3t50060xxCd0  UNAVAIL   cannot open

Now, it's working up to the point at which it's worked out that
the bits of the pool are in the right places. It just can't open
all the bits. Why is that?

I notice that it's using the underlying cXtXdX device names
rather than the virtual emcpower{0,1} names. However, rather
more worrying is that if I try to create a new pool, then it correctly
fails if I use the cXtXdX device (warning me that it contains
part of a pool) but if I go through the emcpower devices
then I don't get a warning.
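
One thing I may try is dumping the ZFS labels through each node to see what
is actually visible where (a sketch; the slices are guesses and the xx'd
name is the same obfuscated device as in the output above):

# zdb -l /dev/dsk/c3t50060xxCd0s0
# zdb -l /dev/dsk/emcpower0a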

(One other snippet - the cXtXdX device nodes look
slightly odd, in that some of them look like the traditional
SMI labelled nodes, while some are more in an EFI style
with a device node for the disk.)

Is there any way to fix this or are we going to have to
start over?

If we do start over, is powerpath going to behave itself
or might this sort of issue bite us again in the future?

Thanks for any help or suggestions from any
powerpath experts.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/