You wouldn't happen to be running this on a SPARC, would you?

I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump
when creating a zpool.  I filed a bug report, though it doesn't appear
to be in the database (not sure if that means it was rejected or I
didn't submit it correctly).  

Also, I was using the PowerPath pseudo device rather than the WWN-based
device. We had planned on opening a ticket with Sun, but our DBAs put
the kibosh on using ZFS on their systems when they caught wind of my
problem, so I can no longer use that server to investigate the issue,
and unfortunately I do not have any other SPARCs with SAN connectivity
available.
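
For what it's worth, the dump hit during a plain pool create against the
pseudo device; roughly like the following (pool name and exact device
name are illustrative, from memory, not an exact transcript):

  # dumped core for me on SPARC when run against the PowerPath
  # pseudo device:
  zpool create testpool emcpower0

  # the WWN-based cXtXdX path, which I never got to try on that box:
  # zpool create testpool c2t500601613060099Cd1

So it may or may not be the same thing you're hitting on import.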

--
Sean

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter Tribble
Sent: Friday, July 13, 2007 11:18 AM
To: [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org; [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] ZFS and powerpath

On 7/13/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Can you post a "powermt display dev=all", a zpool status and format 
> command?

Sure.

There are no pools to give status on because I can't import them.
For the others:

# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00043600837 [########]
Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46] state=alive;
policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
3073 [EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c2t500601613060099Cd1s0 SP A1     active  alive      0      0
3073 [EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c2t500601693060099Cd1s0 SP B1     active  alive      0      0
3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c3t500601603060099Cd1s0 SP A0     active  alive      0      0
3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c3t500601683060099Cd1s0 SP B0     active  alive      0      0

Pseudo name=emcpower1a
CLARiiON ID=APM00043600837 [########]
Logical device ID=600601600C4912004C5CFDFFB62BDA11 [LUN 0] state=alive;
policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
3073 [EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c2t500601613060099Cd0s0 SP A1     active  alive      0      0
3073 [EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c2t500601693060099Cd0s0 SP B1     active  alive      0      0
3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c3t500601603060099Cd0s0 SP A0     active  alive      0      0
3072 [EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 c3t500601683060099Cd0s0 SP B0     active  alive      0      0



AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       2. c2t500601613060099Cd0 <DGC-RAID 5-0219-500.00GB>
          /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
       3. c2t500601693060099Cd0 <DGC-RAID 5-0219-500.00GB>
          /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
       4. c2t500601613060099Cd1 <DGC-RAID 5-0219-500.00GB>
          /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],1
       5. c2t500601693060099Cd1 <DGC-RAID 5-0219-500.00GB>
          /[EMAIL PROTECTED],600000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],1
       6. c3t500601683060099Cd0 <DGC-RAID 5-0219-500.00GB>
          /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
       7. c3t500601603060099Cd0 <DGC-RAID 5-0219-500.00GB>
          /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
       8. c3t500601683060099Cd1 <DGC-RAID 5-0219-500.00GB>
          /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],1
       9. c3t500601603060099Cd1 <DGC-RAID 5-0219-500.00GB>
          /[EMAIL PROTECTED],700000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],1
      10. emcpower0a <DGC-RAID 5-0219-500.00GB>
          /pseudo/[EMAIL PROTECTED]
      11. emcpower1a <DGC-RAID 5-0219-500.00GB>
          /pseudo/[EMAIL PROTECTED]
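
One thing that might be worth a try before giving up on the pool (an
untested sketch; the emcpower names are taken from the format output
above, and /dev/dsk is just where I'd expect the pseudo-device nodes to
live) is limiting the import scan to the PowerPath pseudo devices so
ZFS doesn't bind to the underlying cXtXdX paths:

  # expose only the pseudo devices to the import scan
  mkdir /tmp/emc
  ln -s /dev/dsk/emcpower0a /tmp/emc/
  ln -s /dev/dsk/emcpower1a /tmp/emc/
  zpool import -d /tmp/emc

  # and/or check which paths still carry a readable ZFS label:
  zdb -l /dev/dsk/c3t500601603060099Cd0s0

No guarantees that either helps, but both are cheap to test.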

>
> [EMAIL PROTECTED] wrote on 07/13/2007 09:38:01 AM:
>
> > How much fun can you have with a simple thing like powerpath?
> >
> > Here's the story: I have a (remote) system with access to a couple 
> > of EMC LUNs. Originally, I set it up with mpxio and created a simple
> > zpool containing the two LUNs.
> >
> > It's now been reconfigured to use powerpath instead of mpxio.
> >
> > My problem is that I can't import the pool. I get:
> >
> >   pool: ######
> >     id: ###################
> >  state: FAULTED
> > status: One or more devices are missing from the system.
> > action: The pool cannot be imported. Attach the missing
> >         devices and try again.
> >    see: http://www.sun.com/msg/ZFS-8000-3C
> > config:
> >
> >         disk00                   UNAVAIL   insufficient replicas
> >           c3t50060xxxxxxxxxxCd1  ONLINE
> >           c3t50060xxxxxxxxxxCd0  UNAVAIL   cannot open
> >
> > Now, it's working up to the point at which it's worked out that the 
> > bits of the pool are in the right places. It just can't open all the
> > bits. Why is that?
> >
> > I notice that it's using the underlying cXtXdX device names rather 
> > than the virtual emcpower{0,1} names. However, rather more worrying 
> > is that if I try to create a new pool, then it correctly fails if I 
> > use the cXtXdX device (warning me that it contains part of a pool) 
> > but if I go through the emcpower devices then I don't get a warning.
> >
> > (One other snippet - the cXtXdX device nodes look slightly odd, in 
> > that some of them look like the traditional SMI labelled nodes, 
> > while some are more in an EFI style with a device node for the 
> > disk.)
> >
> > Is there any way to fix this or are we going to have to start over?
> >
> > If we do start over, is powerpath going to behave itself or might 
> > this sort of issue bite us again in the future?
> >
> > Thanks for any help or suggestions from any powerpath experts.
> >
> > --
> > -Peter Tribble
> > http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/ 
>
>


--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss