Hi Karl,

Welcome to Solaris/ZFS land ...

ZFS administration is pretty easy, but our device administration
is more difficult.

I'll probably bungle this response because I don't have similar
hardware, so I hope some expert will correct me.

I think you will have to experiment with the various forms of cfgadm.
Also see the cfgadm_fp(1M) man page.

See the examples below from a V210 with a 3510 array; a sketch of the
unconfigure/configure cycle follows the listings.

Cindy

# cfgadm -al | grep 226000c0ffa001ab
c1::226000c0ffa001ab     disk     connected    configured   unknown

# cfgadm -al -o show_SCSI_LUN
Ap_Id                     Type       Receptacle   Occupant     Condition
c1                        fc-fabric  connected    configured   unknown
c1::210000e08b1ad8c8      unknown    connected    unconfigured unknown
c1::210100e08b3fbb64      unknown    connected    unconfigured unknown
c1::226000c0ffa001ab,0    disk       connected    configured   unknown
c1::226000c0ffa001ab,1    disk       connected    configured   unknown
c1::226000c0ffa001ab,2    disk       connected    configured   unknown

# cfgadm -o show_FCP_dev -al
Ap_Id                    Type        Receptacle   Occupant     Condition
c1                      fc-fabric    connected    configured   unknown
c1::210000e08b1ad8c8    unknown      connected    unconfigured unknown
c1::210100e08b3fbb64    unknown      connected    unconfigured unknown
c1::226000c0ffa001ab,0  disk         connected    configured   unknown
c1::226000c0ffa001ab,1  disk         connected    configured   unknown
c1::226000c0ffa001ab,2  disk         connected    configured   unknown
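
If the disk shows up as an attachment point like the ones above, the
replacement itself is the usual unconfigure/configure cycle. A minimal
sketch, reusing the Ap_Id from my listings (your controller number and
WWN will differ, and I haven't tested this against SAS/MPxIO hardware):

# cfgadm -c unconfigure c1::226000c0ffa001ab
(physically replace the disk)
# cfgadm -c configure c1::226000c0ffa001ab
# cfgadm -al | grep 226000c0ffa001ab
c1::226000c0ffa001ab     disk     connected    configured   unknown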

On 11/03/09 14:10, Karl Katzke wrote:
I am a bit of a Solaris newbie. I have a brand spankin' new Solaris 10u8 machine (x4250)
that is running an attached J4400 and some internal drives. We're using multipathed SAS
I/O (enabled via stmsboot), so the device nodes have moved from their "normal" names
like c0t5d0 to long WWN-based strings -- in the case of c0t5d0, it's now
/dev/rdsk/c6t5000CCA00A274EDCd0. (I can see the cross-referenced devices with
stmsboot -L.)
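
(For anyone following along, stmsboot -L prints a two-column mapping
between the original device names and the MPxIO ones -- something like
the following for the pair named above; formatting approximate:)

bash# stmsboot -L
non-STMS device name                    STMS device name
------------------------------------------------------------------
/dev/rdsk/c0t5d0                        /dev/rdsk/c6t5000CCA00A274EDCd0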

Normally, when replacing a disk on a Solaris system, I would run cfgadm -c 
unconfigure c0::dsk/c0t5d0. However, cfgadm -l does not list c6, nor does it 
list any disks. In fact, running cfgadm against the places where I think things 
are supposed to live gets me the following:

bash# cfgadm -l /dev/rdsk/c0t5d0
Ap_Id                          Type         Receptacle   Occupant     Condition
/dev/rdsk/c0t5d0: No matching library found

bash# cfgadm -l /dev/rdsk/c6t5000CCA00A274EDCd0
cfgadm: Attachment point not found

bash# cfgadm -l /dev/dsk/c6t5000CCA00A274EDCd0
Ap_Id                          Type         Receptacle   Occupant     Condition
/dev/dsk/c6t5000CCA00A274EDCd0: No matching library found

bash# cfgadm -l c6t5000CCA00A274EDCd0
Ap_Id                          Type         Receptacle   Occupant     Condition
c6t5000CCA00A274EDCd0: No matching library found

I ran devfsadm -C -v and it removed all of the old attachment points for the
/dev/dsk/c0t5d0 devices and created some for the c6 devices. Running cfgadm -al
shows a c0, c4, and c5 -- these correspond to the actual controllers, but no
devices are attached to the controllers.

I found an old email on this list about MPxIO that said the solution was
basically to yank the physical device after making sure that no I/O was
happening to it. While this worked and allowed us to return the device to
service as a spare in the zpool it inhabits, more concerning was what happened
when we ran mpathadm list lu after yanking the device and returning it to
service:
bash# mpathadm list lu
        /dev/rdsk/c6t5000CCA00A2A9398d0s2
                Total Path Count: 1
                Operational Path Count: 1
        /dev/rdsk/c6t5000CCA00A29EE2Cd0s2
                Total Path Count: 1
                Operational Path Count: 1
        /dev/rdsk/c6t5000CCA00A2BDBFCd0s2
                Total Path Count: 1
                Operational Path Count: 1
        /dev/rdsk/c6t5000CCA00A2A8E68d0s2
                Total Path Count: 1
                Operational Path Count: 1
        /dev/rdsk/c6t5000CCA00A0537ECd0s2
                Total Path Count: 1
                Operational Path Count: 1
mpathadm: Error: Unable to get configuration information.
mpathadm: Unable to complete operation

(Side note: Some of the disks are single-path via an internal controller, and
some of them are multi-path in the J4400 via two external controllers.)

A reboot fixed the 'issue' with mpathadm, and it now outputs complete data.

So -- how do I administer and remove physical devices that sit behind
multipath-managed controllers on Solaris 10u8 without breaking multipath and
causing configuration changes that interfere with the services and devices
attached via mpathadm and the other voodoo and black magic inside? I can't
seem to find this documented anywhere, even though the instructions for
enabling multipathing with stmsboot -e were quite complete and worked well!
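
(For completeness, the ZFS half of the swap was the standard
offline/replace dance -- roughly the following, with 'tank' standing in
for our real pool name; it's the cfgadm/mpathadm layer underneath that I
can't figure out.)

bash# zpool offline tank c6t5000CCA00A274EDCd0
(physically swap the disk)
bash# zpool replace tank c6t5000CCA00A274EDCd0
bash# zpool status tank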
Thanks,
Karl Katzke
