Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-02 Thread Andrew Gabriel

James C. McPherson wrote:
On 2/06/10 03:11 PM, Fred Liu wrote:

Fix some typos.
#


In fact, there is no technical problem with MPxIO names.

It is just that storage admins have to remember them.

  
  
You are correct.

I think there is no way to give short aliases to these long, tedious MPxIO names.

You are correct that we don't have aliases. However, I do not
agree that the naming is tedious. It gives you certainty about
the actual device that you are dealing with, without having
to worry about whether you've cabled it right.
Might want to add a call record to

    CR 6901193 Need a command to list current usage of disks,
partitions, and slices

which includes a request for vanity naming for disks.

(Actually, vanity naming for disks should probably be brought out into
a separate RFE.)

-- 

Andrew Gabriel |
Solaris Systems Architect
Email: andrew.gabr...@oracle.com
Mobile: +44 7720 598213
Oracle Pre-Sales
Guillemont Park | Minley Road | Camberley | GU17 9QG | United Kingdom

ORACLE Corporation UK Ltd is a
company incorporated in England & Wales | Company Reg. No. 1782505
| Reg. office: Oracle Parkway, Thames Valley Park, Reading RG6 1RA


Oracle is committed to developing practices and products that
help protect the environment




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-02 Thread James C. McPherson

On  2/06/10 03:11 PM, Fred Liu wrote:

Fix some typos.

#

In fact, there is no technical problem with MPxIO names.
It is just that storage admins have to remember them.


You are correct.


I think there is no way to give short aliases to these long, tedious MPxIO names.


You are correct that we don't have aliases. However, I do not
agree that the naming is tedious. It gives you certainty about
the actual device that you are dealing with, without having
to worry about whether you've cabled it right.
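As an aside, that certainty comes from the WWN being embedded in the name itself. A hypothetical helper (not any Solaris tool, just an illustration; it assumes the common c&lt;ctrl&gt;t&lt;WWN&gt;d&lt;lun&gt; form with an upper-case hex WWN) shows how the target's world-wide name can be pulled straight out of such a device name:

```shell
# Hypothetical helper: extract the WWN embedded in an MPxIO-style
# device name such as c5t5000CCA00510A7CCd0 (a name taken from the
# cfgadm listing later in this thread).
wwn_of() {
  # expected form: c<N>t<WWN>d<N>; the target field is the hex WWN
  expr "$1" : 'c[0-9]*t\([0-9A-F]*\)d[0-9]*'
}

wwn_of c5t5000CCA00510A7CCd0   # prints 5000CCA00510A7CC
```

Because the WWN travels with the device, the name stays the same however the drive is cabled.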



And I have only one HBA card, so I don't really need multipathing.


For SAS and FC-attached devices, we are moving (however slowly) towards
having MPxIO on all the time.

Please don't assume that turning on MPxIO requires you to have
multiple ports and/or HBAs - for the addressing scheme at least,
it does not. Failover is another matter.



The simple cXtXdX name would be much easier.


That naming system is rooted in the parallel SCSI era. It is not
appropriate for SAS and FC environments.


Furthermore, my ultimate goal is to map the disk in an MPxIO path to the
actual physical slot position. And if an HDD breaks, I can easily tell
which one to replace.

BTW, "luxadm led_blink" may not work on commodity hardware and only
works with Sun's proprietary disk arrays.


I think it is a common situation for storage admins.

**How do you replace the broken HDDs in your best practice?**


If you are running build 126 or later, then you can take advantage
of the behaviour that was added to cfgadm(1M):



$ cfgadm -lav c3 c4
Ap_Id                 Receptacle   Occupant     Condition  Information
When         Type         Busy     Phys_Id
c3                    connected    configured   unknown
unavailable  scsi-sas     n        /devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi
c3::0,0               connected    configured   unknown    Client Device: /dev/dsk/c5t5000CCA00510A7CCd0s0(sd37)
unavailable  disk-path    n        /devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::0,0
c3::dsk/c3t2d0        connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::dsk/c3t2d0
c3::dsk/c3t3d0        connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::dsk/c3t3d0
c3::dsk/c3t4d0        connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::dsk/c3t4d0
c3::dsk/c3t6d0        connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::dsk/c3t6d0

c4                    connected    configured   unknown
unavailable  scsi-sas     n        /devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi
c4::5,0               connected    configured   unknown    Client Device: /dev/dsk/c5t5F001BB01248d0s0(sd38)
unavailable  disk-path    n        /devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::5,0
c4::6,0               connected    configured   unknown    Client Device: /dev/dsk/c5t50014EE1007EE473d0s0(sd39)
unavailable  disk-path    n        /devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::6,0
c4::dsk/c4t3d0        connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::dsk/c4t3d0
c4::dsk/c4t7d0        connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::dsk/c4t7d0




While the above is a bit unwieldy to read in an email, it does
show you the following things:

(0) I have SAS and SATA disks
(1) I have MPxIO turned on
(2) the MPxIO-capable devices are listed with both their "client"
or scsi_vhci path, and their traditional cXtYdZ name



$ cfgadm -lav c3::0,0 c4::5,0 c4::6,0
Ap_Id                 Receptacle   Occupant     Condition  Information
When         Type         Busy     Phys_Id
c3::0,0               connected    configured   unknown    Client Device: /dev/dsk/c5t5000CCA00510A7CCd0s0(sd37)
unavailable  disk-path    n        /devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::0,0
c4::5,0               connected    configured   unknown    Client Device: /dev/dsk/c5t5F001BB01248d0s0(sd38)
unavailable  disk-path    n        /devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::5,0
c4::6,0               connected    configured   unknown    Client Device: /dev/dsk/c5t50014EE1007EE473d0s0(sd39)
unavailable  disk-path    n        /devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::6,0



No need to use luxadm.
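For scripting, the attachment-point-to-client-device pairing in (2) can be pulled out mechanically. A hypothetical sketch (the awk assumes the one-line "Ap_Id ... Client Device:" layout of unwrapped cfgadm -lav output; verify the field layout against your own build before relying on it):

```shell
# Hypothetical post-processing of `cfgadm -lav` output: pair each
# MPxIO attachment point (cN::target,lun) with its client /dev/dsk path.
# In real use you would pipe `cfgadm -lav c3 c4` into map_clients;
# here two sample lines from the listing above stand in for it.
map_clients() {
  awk '
    # remember the attachment point that starts the line, e.g. c3::0,0
    $1 ~ /^c[0-9]+::[0-9]+,[0-9]+$/ { ap = $1 }
    # the Information column carries "Client Device: /dev/dsk/...(sdNN)"
    /Client Device:/ {
      dev = $NF
      sub(/\(sd[0-9]+\)$/, "", dev)   # strip the "(sdNN)" instance suffix
      printf "%s -> %s\n", ap, dev
    }'
}

map_clients <<'EOF'
c3::0,0    connected    configured    unknown    Client Device: /dev/dsk/c5t5000CCA00510A7CCd0s0(sd37)
c4::5,0    connected    configured    unknown    Client Device: /dev/dsk/c5t5F001BB01248d0s0(sd38)
EOF
```

The sample prints one "attachment point -> client device" line per MPxIO-capable disk, which is enough to script a slot lookup.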



James C. McPherson
--
Senior Software Engineer, Solaris


Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-01 Thread Fred Liu
Fix some typos.

#

In fact, there is no technical problem with MPxIO names.
It is just that storage admins have to remember them.
I think there is no way to give short aliases to these long, tedious MPxIO names.
And I have only one HBA card, so I don't really need multipathing.
The simple cXtXdX name would be much easier.
Furthermore, my ultimate goal is to map the disk in an MPxIO path to the actual
physical slot position. And if an HDD breaks, I can easily tell which one to replace.
BTW, "luxadm led_blink" may not work on commodity hardware and only works with
Sun's proprietary disk arrays.

I think it is a common situation for storage admins.

**How do you replace the broken HDDs in your best practice?**

Thanks.

Fred

-Original Message-
From: James C. McPherson [mailto:j...@opensolaris.org] 
Sent: Wednesday, June 02, 2010 10:27
To: Fred Liu
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris 
installation?

On  2/06/10 12:01 PM, Fred Liu wrote:
> Yes. But the output of zpool commands still uses MPxIO naming convention
> and format command cannot find any disks.

_But_ ?

What is the problem with ZFS using the device naming system
that the system provides it with?


Do you mean that you cannot see any plain old targets, or
that no disk devices of any sort show up in your host when
you are installing?

What is your actual problem, and why do you think that
turning off MPxIO will solve it?


James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-01 Thread Fred Liu
In fact, there is no technical problem with MPxIO names.
It is just that storage admins have to remember them.
I think there is no way to give short aliases to these long, tedious MPxIO names.
And I have only one HBA card, so I don't really need multipathing.
The simple cXtXdX name would be much easier.
Furthermore, my ultimate goal is to map the disk in an MPxIO path to the actual
physical slot position. And if an HDD breaks, I can easily tell which one to replace.
BTW, "luxadm led_blink" may not work on commodity hardware and only works with
Sun's proprietary disk arrays.

I think it is a common situation for storage admins.
How do you replace the broken HDDs in your best practice?

Thanks.

Fred

-Original Message-
From: James C. McPherson [mailto:j...@opensolaris.org] 
Sent: Wednesday, June 02, 2010 10:27
To: Fred Liu
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris 
installation?

On  2/06/10 12:01 PM, Fred Liu wrote:
> Yes. But the output of zpool commands still uses MPxIO naming convention
> and format command cannot find any disks.

_But_ ?

What is the problem with ZFS using the device naming system
that the system provides it with?


Do you mean that you cannot see any plain old targets, or
that no disk devices of any sort show up in your host when
you are installing?

What is your actual problem, and why do you think that
turning off MPxIO will solve it?


James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-01 Thread James C. McPherson

On 2/06/10 12:01 PM, Fred Liu wrote:

> Yes. But the output of zpool commands still uses MPxIO naming convention
> and format command cannot find any disks.

_But_ ?

What is the problem with ZFS using the device naming system
that the system provides it with?


Do you mean that you cannot see any plain old targets, or
that no disk devices of any sort show up in your host when
you are installing?

What is your actual problem, and why do you think that
turning off MPxIO will solve it?


James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-01 Thread Fred Liu
Yes. But the output of zpool commands still uses MPxIO naming convention and 
format command cannot find any disks.

Thanks.

Fred

-Original Message-
From: James C. McPherson [mailto:j...@opensolaris.org] 
Sent: Wednesday, June 02, 2010 9:58
To: Fred Liu
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris 
installation?

On  2/06/10 11:39 AM, Fred Liu wrote:
> Thanks.

No.


If you must disable MPxIO, then you do so after installation,
using the stmsboot command.



James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-01 Thread James C. McPherson

On  2/06/10 11:39 AM, Fred Liu wrote:

Thanks.


No.


If you must disable MPxIO, then you do so after installation,
using the stmsboot command.
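A sketch of that post-install procedure, as a command fragment (based on the documented stmsboot(1M) options; run as root or via pfexec, and expect a reboot before the new device names take effect):

```shell
# Disable MPxIO on all supported controllers; stmsboot updates
# /etc/vfstab and the boot archive, then offers to reboot.
pfexec stmsboot -d

# After the reboot, list the mapping between non-MPxIO and MPxIO
# device names to confirm what changed:
pfexec stmsboot -L
```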



James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-01 Thread Fred Liu
Thanks.

Fred
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss