I thought others might find this useful. It turns out this was caused by a new EMC array being attached to the fibre network, and is specifically related to the following:
LUNZ has been implemented on CLARiiON arrays to make arrays visible to
the host OS and PowerPath when no LUNs are bound on that array. When
using a direct connect configuration and there is no Navisphere
Management station to talk directly to the array over IP, the LUNZ can
be used as a pathway for Navisphere CLI to send bind commands to the
array. LUNZ also makes arrays visible to the host OS and PowerPath when
the host's initiators have not yet 'logged in' to the Storage Group
created for the host. Without LUNZ, there would be no device on the host
for Navisphere Agent to push the initiator record through to the array.
This is mandatory for the host to log in to the Storage Group. Once this
initiator push is done, the host will be displayed as an available host
to add to the Storage Group in Navisphere Manager (Navisphere Express).
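As a side note, the LUNZ is fairly easy to spot from the Linux side: I believe CLARiiON presents it with vendor DGC and model LUNZ, so something like the following should identify the phantom device (sg_inq is part of the sg3_utils package; /dev/sdc here is just the stray entry from the thread below):
# grep -B 1 LUNZ /proc/scsi/scsi
# sg_inq /dev/sdc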
In summary, this occurs because an arraycommpath setting of 1 creates a virtual LUN 0 for communication with the storage system.
One way to get rid of LUNZ is to set arraycommpath to 0. The arraycommpath option enables or disables a communication path from the server to the storage system.
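For anyone who needs to make that change, it can be done with Navisphere CLI. A rough sketch only; the SP address below is a placeholder, and depending on which CLI generation is installed the binary may be navicli or naviseccli (naviseccli also wants credentials), so check the arraycommpath syntax for your version before touching a production array:
# navicli -h <SP-IP-address> arraycommpath 0
I'd expect any stale device node on the host to linger after the change until it is removed or the host is rebooted.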
Aaron
On 3/16/2010 12:50 PM, Aaron Bliss wrote:
Hi all,
Working on an updated Red Hat 5.4 64-bit server, I'm trying to figure out how/why a block device was renamed from /dev/sdc to /dev/sdd. The device is SAN storage connected via a QLogic fibre HBA.
# ls -l /dev/sd*
brw-r----- 1 root disk 8, 0 Mar 16 12:38 /dev/sda
brw-r----- 1 root disk 8, 1 Mar 16 12:38 /dev/sda1
brw-r----- 1 root disk 8, 2 Mar 16 12:38 /dev/sda2
brw-r----- 1 root disk 8, 3 Mar 16 12:38 /dev/sda3
brw-r----- 1 root disk 8, 4 Mar 16 12:38 /dev/sda4
brw-r----- 1 root disk 8, 5 Mar 16 12:38 /dev/sda5
brw-r----- 1 root disk 8, 6 Mar 16 12:38 /dev/sda6
brw-r----- 1 root disk 8, 7 Mar 16 12:38 /dev/sda7
brw-r----- 1 root disk 8, 8 Mar 16 12:38 /dev/sda8
brw-r----- 1 root disk 8, 16 Mar 16 12:38 /dev/sdb
brw-r----- 1 root disk 8, 17 Mar 16 12:38 /dev/sdb1
brw-r----- 1 root disk 8, 32 Mar 16 12:38 /dev/sdc
brw-r----- 1 root disk 8, 48 Mar 16 12:38 /dev/sdd
brw-r----- 1 root disk 8, 49 Mar 16 12:38 /dev/sdd1
When this happened, the server also logged the following error, which additionally appears several times on bootup:
kernel: end_request: I/O error, dev sdc, sector 0
The LVM tools now show /dev/sdd1 as part of the volume group that /dev/sdc1 used to belong to. I can browse the partition where the logical volume is mounted, and all data seems to be intact. Running the following after the system is up removes the /dev/sdc block device; however, it reappears on each successive startup:
echo 1 > /sys/block/sdc/device/delete
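As a side note, the device can also be rediscovered without a reboot by rescanning the SCSI host that the QLogic HBA registered; the host number below is a placeholder, so check /sys/class/scsi_host for the right one:
echo "- - -" > /sys/class/scsi_host/host1/scan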
I booted the box into the rescue environment from CD 1; the rescue environment does not show any reference to /dev/sdc, but it does show /dev/sdd.
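For what it's worth, since the sdX names can clearly shift when the array adds or removes devices, the persistent names under /dev/disk/by-id (or /dev/disk/by-path) are safer to reference than /dev/sdX; LVM itself scans physical volumes by UUID, which would explain why the volume group simply followed the disk to /dev/sdd1 with the data intact. For example:
# ls -l /dev/disk/by-id/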
The LUN was never unpresented and then re-presented or anything like that, as I'm the only one in our environment who manages the SAN.
Aaron