[CentOS] multipath recipe for an enclosure ?

2018-06-28 Thread lejeczek via CentOS

hi guys,

In the hope that some experts roam around here, I post this one question - how
do you multipath all the disks that sit in one specific SAS enclosure, and
blacklist everything else?


And I'm hoping for something like "globbing", so you do not have to go
through it on a per-disk/WWID basis.
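
For illustration only, a rough sketch of the usual pattern - blacklist everything, then whitelist the enclosure's disks in blacklist_exceptions; the WWID prefix and the vendor/product strings below are placeholders, not values from a real enclosure:

blacklist {
        wwid ".*"
}

blacklist_exceptions {
        # either by a shared WWID prefix ...
        wwid "^36000d310.*"
        # ... or by what the enclosure's disks report as vendor/product
        device {
                vendor  "SEAGATE"
                product "ST4000NM.*"
        }
}

multipath -v3 output should then show everything outside the exception being rejected as blacklisted.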


Any experts? (Or maybe even not - it could be that I do not get it.)

many thanks, L.



[CentOS] multipath

2017-10-03 Thread Tony Schreiner
I have inherited a system set up with multipath, which is not something I
have seen before, so I could use some advice.


The system is a Dell R420 with 2 LSI SAS2008 HBAs, 4 internal disks, and an
MD3200 storage array attached via SAS cables. Oh, and CentOS 6.

lsblk shows the following:
NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdd                     8:48   0 931.5G  0 disk
├─sdd1                  8:49   0 931.5G  0 part
└─mpathi (dm-0)       253:0    0 931.5G  0 mpath
sdc                     8:32   0 931.5G  0 disk
├─sdc1                  8:33   0  1000M  0 part  /boot
├─sdc2                  8:34   0  31.3G  0 part  /
├─sdc3                  8:35   0  31.3G  0 part  [SWAP]
├─sdc4                  8:36   0     1K  0 part
├─sdc5                  8:37   0 184.5G  0 part  /home
└─sdc6                  8:38   0 683.6G  0 part  /data01
sde                     8:64   0 931.5G  0 disk
├─sde1                  8:65   0 931.5G  0 part
└─mpathj (dm-1)       253:1    0 931.5G  0 mpath
  └─mpathjp1 (dm-8)   253:8    0 931.5G  0 part  /data02
sdf                     8:80   0 931.5G  0 disk
├─sdf1                  8:81   0 931.5G  0 part
└─mpathk (dm-2)       253:2    0 931.5G  0 mpath
sda                     8:0    0    51T  0 disk
├─sda1                  8:1    0  12.8T  0 part
├─sda2                  8:2    0  12.8T  0 part
├─sda3                  8:3    0  12.8T  0 part
├─sda4                  8:4    0  12.8T  0 part
└─mpathe (dm-3)       253:3    0    51T  0 mpath
  ├─mpathep1 (dm-4)   253:4    0  12.8T  0 part  /SAN101
  ├─mpathep2 (dm-5)   253:5    0  12.8T  0 part  /SAN102
  ├─mpathep3 (dm-6)   253:6    0  12.8T  0 part  /SAN103
  └─mpathep4 (dm-7)   253:7    0  12.8T  0 part  /SAN104
sdb                     8:16   0    51T  0 disk
├─sdb1                  8:17   0  12.8T  0 part
├─sdb2                  8:18   0  12.8T  0 part
├─sdb3                  8:19   0  12.8T  0 part
├─sdb4                  8:20   0  12.8T  0 part
└─mpathe (dm-3)       253:3    0    51T  0 mpath
  ├─mpathep1 (dm-4)   253:4    0  12.8T  0 part  /SAN101
  ├─mpathep2 (dm-5)   253:5    0  12.8T  0 part  /SAN102
  ├─mpathep3 (dm-6)   253:6    0  12.8T  0 part  /SAN103
  └─mpathep4 (dm-7)   253:7    0  12.8T  0 part  /SAN104

sda and sdb are two views of the unit on the MD; sdc is the boot and root
disk; one of the disks, sde (mpathj), has a mounted file system; the
remaining two do not.
here is df:
Filesystem           Type   Size  Used Avail Use% Mounted on
/dev/sdc2            ext4    31G   26G  4.0G  87% /
tmpfs                tmpfs   16G   92K   16G   1% /dev/shm
/dev/sdc1            ext4   969M  127M  793M  14% /boot
/dev/sdc6            ext4   673G  242G  398G  38% /data01
/dev/mapper/mpathjp1 ext4   917G  196G  676G  23% /data02
/dev/sdc5            ext4   182G  169G  3.9G  98% /home
/dev/mapper/mpathep1 ext4    13T   11T 1005G  92% /SAN101
/dev/mapper/mpathep2 ext4    13T  5.0T  7.0T  42% /SAN102
/dev/mapper/mpathep3 ext4    13T  4.9T  7.1T  42% /SAN103
/dev/mapper/mpathep4 ext4    13T  8.2T  3.8T  69% /SAN104


So a few questions:

Is there any value in using multipath on the 4 internal drives; do they
actually have multiple data paths?

Why does multipath not create an mpath device for the system disk sdc? There
is no blacklist in the multipath.conf file.
If I do a dry-run multipath -d command, it sets up mpathh on sdc, but
apparently not on boot. Why not?
I should say that the system has not been rebooted since well before I took
it over; it has been up for over 400 days.

The sdd and sdf disks have a single partition, but apparently no mpath
device for the partition. How do I create one?
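
For that last one, a minimal sketch of the usual approach, assuming the partitions should be addressed through the multipath maps shown above (mpathi for sdd, mpathk for sdf):

kpartx -a /dev/mapper/mpathi    # read the partition table on the map and create something like mpathip1
kpartx -a /dev/mapper/mpathk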


Thanks in advance


Re: [CentOS] multipath show config different in CentOS 7?

2017-01-31 Thread Alexander Dalloz

On 31.01.2017 at 22:51, Gianluca Cecchi wrote:

Hello,
suppose I want to use a special configuration for my IBM/1814 storage array
luns, then I put something like this in multipath.conf

devices {
device {
vendor "IBM"
product "^1814"
product_blacklist "Universal Xport"
path_grouping_policy "group_by_prio"
path_checker "rdac"
features "0"
hardware_handler "1 rdac"
prio "rdac"
failback immediate
rr_weight "uniform"
no_path_retry "12"
}
}

In CentOS 6.x when you restart multipathd or restart server, using

multipathd -k
multipathd> show config
multipathd> exit


That prints out the built-in defaults. See

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/pdf/DM_Multipath/Red_Hat_Enterprise_Linux-7-DM_Multipath-en-US.pdf

chapter 4.


in the output you see your particular configuration instead of the default
one for this particular device.
In CentOS 7.3, instead, I see it twice: first with the default built-in
values (e.g. "no_path_retry   fail") and at the end the customized one.
So it is not clear what the actual configuration is that device-mapper
multipath is using...
Does the last one win?
Is this expected behaviour? In that case, what command can I use to
cross-check it (apart from real testing, which is anyway necessary to verify
too)?
E.g. in CentOS 7.3 I'm using device-mapper-multipath-0.4.9-99.el7_3.1.x86_64,
while on CentOS 6.8 I'm using device-mapper-multipath-0.4.9-93.el6.x86_64.

Thanks in advance,
Gianluca


To configure a custom setup and enable it please see chapter 3 of the 
RHEL documentation about device-mapper multipath.


Regards

Alexander






[CentOS] multipath show config different in CentOS 7?

2017-01-31 Thread Gianluca Cecchi
Hello,
suppose I want to use a special configuration for my IBM/1814 storage array
luns, then I put something like this in multipath.conf

devices {
device {
vendor "IBM"
product "^1814"
product_blacklist "Universal Xport"
path_grouping_policy "group_by_prio"
path_checker "rdac"
features "0"
hardware_handler "1 rdac"
prio "rdac"
failback immediate
rr_weight "uniform"
no_path_retry "12"
}
}

In CentOS 6.x when you restart multipathd or restart server, using

multipathd -k
multipathd> show config
multipathd> exit

in the output you see your particular configuration instead of the default
one for this particular device.
In CentOS 7.3, instead, I see it twice: first with the default built-in
values (e.g. "no_path_retry   fail") and at the end the customized one.
So it is not clear what the actual configuration is that device-mapper
multipath is using...
Does the last one win?
Is this expected behaviour? In that case, what command can I use to
cross-check it (apart from real testing, which is anyway necessary to verify
too)?
E.g. in CentOS 7.3 I'm using device-mapper-multipath-0.4.9-99.el7_3.1.x86_64,
while on CentOS 6.8 I'm using device-mapper-multipath-0.4.9-93.el6.x86_64.
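
One crude way to put the two entries side by side, as a sketch (the grep context sizes are arbitrary):

multipathd -k"show config" | grep -B 2 -A 16 '"IBM"'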

Thanks in advance,
Gianluca


[CentOS] Multipath w/ iscsi

2011-08-21 Thread Joseph L. Casale
I have several CentOS 6 boxes that mount iSCSI-based LUNs and use mpath.
They all had problems shutting down as a result of unused maps not getting
flushed as the system halted.

After examining the init scripts, netfs, iscsi and multipathd all had the
correct order, but mpath failed to flush these maps and the system waited
indefinitely.

In the meantime I hacked this by adding a `/sbin/multipath -F` at the end of the
stop clause in the init script.
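
Roughly like this - a sketch, not the literal script, with the daemon's own stop logic elided:

stop() {
        # ... original stop logic for the daemon ...
        /sbin/multipath -F    # flush all unused multipath maps before shutdown continues
}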

I seriously doubt this problem exists without being the result of an error in
my configuration.
Anyone know what the required mpath config might be in this scenario where
the block devices all disappear once netfs unmounts and iscsi stops?

Thanks!
jlc


Re: [CentOS] Multipath w/ iscsi

2011-08-21 Thread Alexander Dalloz
On 21.08.2011 at 21:49, Joseph L. Casale wrote:
 I have several CentOS 6 boxes that mount iscsi based luns and use mpath.
 They all had problems shutting down as a result of unused maps not getting
 flushed as the system halted.
 
 After examining the init scripts, netfs, iscsi and multipathd all had the 
 correct
 order but mpath failed to flush these maps and the system waited indefinitely.

That sounds as if the paths (SCSI block devices) were removed before
multipath had a chance to flush its map(s).

 In the meantime I hacked this by adding a `/sbin/multipath -F` at the end of 
 the
 stop clause in the init script.
 
 I seriously doubt this problems exists w/o being the result of my error in 
 configuration.
 Anyone know what the required mpath config might be in this scenario where
 the block devices all disappear once netfs unmounts and iscsi stops?

Are you sure about the order of the service stops? If you stop iscsi and
remove the devices before multipath flushes the maps, you will end up in
the situation described.

1) umount
2) vgchange -an if LVM is used on LUNs
3) flush multipaths
4) stop iscsi
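
In shell terms that order would look roughly like this (mount point and VG name are placeholders):

umount /data                 # 1) unmount filesystems living on the LUNs
vgchange -an vg_iscsi        # 2) deactivate LVM, if used on the LUNs
multipath -F                 # 3) flush the now-unused multipath maps
service iscsi stop           # 4) log out of the targets last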

 Thanks!
 jlc

Alexander



Re: [CentOS] Multipath w/ iscsi

2011-08-21 Thread Joseph L. Casale
3) flush multipaths
4) stop iscsi

I guess that's the point; it seems the init script does not flush them out, so
the module and any dependent dm modules stay active.

jlc


Re: [CentOS] multipath troubleshoot

2010-09-21 Thread Paras pradhan
Rebooting into the latest kernel fixed this issue.

Paras.

On Fri, Sep 17, 2010 at 1:19 PM, Paras pradhan pradhanpa...@gmail.com wrote:
 Hi,
 My storage admin just assigned a LUN (fibre) to my server. Then rescanned
 using

 echo 1 > /sys/class/fc_host/host5/issue_lip

 echo 1 > /sys/class/fc_host/host6/issue_lip

 I can see the scsi device using dmesg

 But an mpath device is not created for this LUN


 Please see below. The last 4 should be active, and I think this is the problem

 Kernel: 2.6.18-164.11.1.el5xen , EL 5.5
 --

 [r...@cvprd3 lvm]# multipathd -k
 multipathd show paths
 hcil        dev dev_t  pri dm_st   chk_st   next_check
 5:0:0:0     sdb 8:16   1   [active][ready]  XXX... 7/20
 5:0:0:1     sdc 8:32   1   [active][ready]  XXX... 7/20
 5:0:0:16384 sdd 8:48   1   [active][ready]  XXX... 7/20
 5:0:0:16385 sde 8:64   1   [active][ready]  XXX... 7/20
 5:0:0:32768 sdf 8:80   1   [active][ready]  XXX... 7/20
 5:0:0:32769 sdg 8:96   1   [active][ready]  XXX... 7/20
 5:0:0:49152 sdh 8:112  1   [active][ready]  XXX... 7/20
 5:0:0:49153 sdi 8:128  1   [active][ready]  XXX... 7/20
 5:0:0:2     sdj 8:144  1   [active][ready]  XXX... 7/20
 5:0:0:16386 sdk 8:160  1   [active][ready]  XXX... 7/20
 5:0:0:32770 sdl 8:176  1   [active][ready]  XXX... 7/20
 5:0:0:49154 sdm 8:192  1   [active][ready]  XXX... 7/20
 5:0:0:3     sdn 8:208  1   [active][ready]  XXX... 7/20
 5:0:0:16387 sdo 8:224  1   [active][ready]  XXX... 7/20
 5:0:0:32771 sdp 8:240  1   [active][ready]  XXX... 7/20
 5:0:0:49155 sdq 65:0   1   [active][ready]  XXX... 7/20
 5:0:0:4     sdr 65:16  1   [active][ready]  XXX... 7/20
 5:0:0:16388 sds 65:32  1   [active][ready]  XXX... 7/20
 5:0:0:32772 sdt 65:48  1   [active][ready]  XXX... 7/20
 5:0:0:49156 sdu 65:64  1   [active][ready]  XXX... 7/20
 5:0:0:5     sdv 65:80  0   [undef] [faulty] [orphan]
 5:0:0:16389 sdw 65:96  0   [undef] [faulty] [orphan]
 5:0:0:32773 sdx 65:112 0   [undef] [faulty] [orphan]
 5:0:0:49157 sdy 65:128 0   [undef] [faulty] [orphan]
 multipathd

 Thanks in Adv
 Paras.



[CentOS] multipath troubleshoot

2010-09-17 Thread Paras pradhan
Hi,
My storage admin just assigned a LUN (fibre) to my server. Then rescanned using

echo 1 > /sys/class/fc_host/host5/issue_lip

echo 1 > /sys/class/fc_host/host6/issue_lip

I can see the scsi device using dmesg

But an mpath device is not created for this LUN


Please see below. The last 4 should be active, and I think this is the problem

Kernel: 2.6.18-164.11.1.el5xen , EL 5.5
--

[r...@cvprd3 lvm]# multipathd -k
multipathd> show paths
hcil        dev dev_t  pri dm_st   chk_st   next_check
5:0:0:0     sdb 8:16   1   [active][ready]  XXX... 7/20
5:0:0:1     sdc 8:32   1   [active][ready]  XXX... 7/20
5:0:0:16384 sdd 8:48   1   [active][ready]  XXX... 7/20
5:0:0:16385 sde 8:64   1   [active][ready]  XXX... 7/20
5:0:0:32768 sdf 8:80   1   [active][ready]  XXX... 7/20
5:0:0:32769 sdg 8:96   1   [active][ready]  XXX... 7/20
5:0:0:49152 sdh 8:112  1   [active][ready]  XXX... 7/20
5:0:0:49153 sdi 8:128  1   [active][ready]  XXX... 7/20
5:0:0:2     sdj 8:144  1   [active][ready]  XXX... 7/20
5:0:0:16386 sdk 8:160  1   [active][ready]  XXX... 7/20
5:0:0:32770 sdl 8:176  1   [active][ready]  XXX... 7/20
5:0:0:49154 sdm 8:192  1   [active][ready]  XXX... 7/20
5:0:0:3     sdn 8:208  1   [active][ready]  XXX... 7/20
5:0:0:16387 sdo 8:224  1   [active][ready]  XXX... 7/20
5:0:0:32771 sdp 8:240  1   [active][ready]  XXX... 7/20
5:0:0:49155 sdq 65:0   1   [active][ready]  XXX... 7/20
5:0:0:4     sdr 65:16  1   [active][ready]  XXX... 7/20
5:0:0:16388 sds 65:32  1   [active][ready]  XXX... 7/20
5:0:0:32772 sdt 65:48  1   [active][ready]  XXX... 7/20
5:0:0:49156 sdu 65:64  1   [active][ready]  XXX... 7/20
5:0:0:5     sdv 65:80  0   [undef] [faulty] [orphan]
5:0:0:16389 sdw 65:96  0   [undef] [faulty] [orphan]
5:0:0:32773 sdx 65:112 0   [undef] [faulty] [orphan]
5:0:0:49157 sdy 65:128 0   [undef] [faulty] [orphan]
multipathd>
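
A hedged follow-up sketch - a full SCSI bus rescan plus asking multipath to rebuild its maps (host numbers taken from this post):

echo "- - -" > /sys/class/scsi_host/host5/scan    # wildcard rescan of channel/target/LUN
echo "- - -" > /sys/class/scsi_host/host6/scan
multipath -v2                                     # create maps for any newly found paths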

Thanks in Adv
Paras.


Re: [CentOS] Multipath and iSCSI Targets

2010-03-20 Thread Drew
 Depends on the target and the setup, ideally if you have 4 NICs you
 should be using at least two different VLANs, and since you have 4
 NICs (I assume for iSCSI only) you should use jumbo frames.

 Jumbo frames should only be used if your CPU can't keep up with the
 load of 4 NICs otherwise it does add some latency to iSCSI.

How much latency are we talking about?

I'm looking into gigabit networks for both home & work, and while
home isn't so much an issue, work could be, as I'm in the planning
stages of setting up a gigabit network to allow some iSCSI NAS/SAN
devices to back a bunch of VMware servers that we're due to get for
our next refresh.


-- 
Drew

Nothing in life is to be feared. It is only to be understood.
--Marie Curie


Re: [CentOS] Multipath and iSCSI Targets

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 4:52 PM, Drew drew@gmail.com wrote:

 Depends on the target and the setup, ideally if you have 4 NICs you
 should be using at least two different VLANs, and since you have 4
 NICs (I assume for iSCSI only) you should use jumbo frames.

 Jumbo frames should only be used if your CPU can't keep up with the
 load of 4 NICs otherwise it does add some latency to iSCSI.

 How much latency are we talking about?

That often depends on the NICs and switches, but in the realm of  
10-15% off throughput.

 I'm looking into gigabit networks at for both home  work and while
 home isn't so much an issue, work could be as I'm in the planning
 stages of setting up a gigabit network to allow some iSCSI NAS/SAN
 devices to back a bunch of VMware servers that we're due to get for
 our next refresh.

Set it up jumbo frame capable, but start out with standard frames; if
you see the CPU getting pegged, try interrupt coalescing, and if that
isn't cutting it, use jumbo frames.
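
A rough sketch of both knobs on CentOS; eth1 and the values are placeholders, and the switch ports and the target must allow MTU 9000 as well:

ethtool -C eth1 rx-usecs 100                                   # interrupt coalescing, the first thing to try
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth1
ifdown eth1 && ifup eth1                                       # jumbo frames, if coalescing isn't enough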

-Ross



[CentOS] Multipath and iSCSI Targets

2010-03-19 Thread Joseph L. Casale
Just started messing with multipath against an iSCSI target with 4 nics.
What should one expect as behavior when paths start failing? My lab setup
was copying some data on a mounted block device when I dropped 3 of 4 paths
and the responsiveness of the server completely tanked for several minutes.

Is that still expected?

Thanks,
jlc


Re: [CentOS] Multipath and iSCSI Targets

2010-03-19 Thread nate
Joseph L. Casale wrote:
 Just started messing with multipath against an iSCSI target with 4 nics.
 What should one expect as behavior when paths start failing? My lab setup
 was copying some data on a mounted block device when I dropped 3 of 4 paths
 and the responsiveness of the server completely tanked for several minutes.

 Is that still expected?

Depends on the target and the setup, ideally if you have 4 NICs you
should be using at least two different VLANs, and since you have 4 NICs
(I assume for iSCSI only) you should use jumbo frames.

With my current 3PAR storage arrays and my iSCSI targets, each system
has 4 targets but usually 1 NIC; at my last company (same kind of storage)
I had 4 targets and 2 dedicated NICs (each on its own VLAN for routing
purposes and jumbo frames).

In all cases MPIO was configured for round robin, and failed over
in a matter of seconds.

Failing BACK can take some time depending on how long the path was
down for. At least on CentOS 4.x (not sure about 5.x) there were some
hard-coded timeouts in the iSCSI system that could delay path
restoration for a minute or more, because there was a somewhat
exponential back-off timer for retries. This caused me a big
headache at one point doing a software upgrade on our storage array,
which will automatically roll itself back if all of the hosts do
not re-login to the array within ~60 seconds of the controller coming
back online.

If your iSCSI storage system is using active/passive controllers,
that may increase failover and failback times and complicate
things; my arrays are all active-active.

nate




Re: [CentOS] Multipath and iSCSI Targets

2010-03-19 Thread Ross Walker
On Mar 19, 2010, at 11:12 AM, nate cen...@linuxpowered.net wrote:

 Joseph L. Casale wrote:
 Just started messing with multipath against an iSCSI target with 4  
 nics.
 What should one expect as behavior when paths start failing? My lab  
 setup
 was copying some data on a mounted block device when I dropped 3 of  
 4 paths
 and the responsiveness of the server completely tanked for several  
 minutes.

 Is that still expected?

 Depends on the target and the setup, ideally if you have 4 NICs you
 should be using at least two different VLANs, and since you have 4  
 NICs
 (I assume for iSCSI only) you should use jumbo frames.

Jumbo frames should only be used if your CPU can't keep up with the  
load of 4 NICs otherwise it does add some latency to iSCSI.


 With my current 3PAR storage arrays and my iSCSI targets each system
 has 4 targets but usually 1 NIC, my last company(same kind of storage)
 I had 4 targets and 2 dedicated NICs(each on it's own VLAN for routing
 purposes and jumbo frames).

 In all cases MPIO was configured for round robin, and failed over
 in a matter of seconds.

 Failing BACK can take some time depending on how long the path was
 down for, at least on CentOS 4.x (not sure on 5.x) there was some
 hard coded timeouts in the iSCSI system that could delay path
 restoration for a  minute or more because there was a somewhat
 exponential back off timer for retries, this caused me a big
 headache at one point doing a software upgrade on our storage array
 which will automatically roll itself back if all of the hosts do
 not re-login to the array within ~60 seconds of the controller coming
 back online.

 If your iSCSI storage system is using active/passive controllers
 that may increase fail over and fail back times and complicate
 stuff, my arrays are all active-active.

I would check the dm-multipath config for how it handles errors; it
might retry multiple times before marking a path bad.

That will slow things to a crawl.
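
The knobs that control that are roughly these, as a sketch (values are illustrative, not recommendations):

defaults {
        polling_interval  5       # seconds between path health checks
        no_path_retry     fail    # fail I/O as soon as all paths in a map are gone
}

With no_path_retry set to "queue" (or the queue_if_no_path feature), I/O stalls instead of failing.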

-Ross



Re: [CentOS] multipath

2010-01-12 Thread Paras pradhan
Upgrading the kernel to 2.6.18-164.10.1.el5xen brought back /dev/dm-* and now
multipath -ll has output.

I am curious about what caused the loss of /dev/dm-* in my previous
kernel.

Any ideas?

Paras.


On Mon, Jan 11, 2010 at 8:09 PM, Paras pradhan pradhanpa...@gmail.com wrote:

 Yes every thing's loaded.

 Here is the output:

 [r...@cvprd1 ~]# lsmod | grep dm
 dm_round_robin         36801  0
 rdma_cm                68565  1 ib_iser
 ib_cm                  73449  1 rdma_cm
 iw_cm                  43465  1 rdma_cm
 ib_sa                  75209  2 rdma_cm,ib_cm
 ib_core               105157  6 ib_iser,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
 ib_addr                41929  1 rdma_cm
 dm_multipath           56153  1 dm_round_robin
 scsi_dh                41665  1 dm_multipath
 dm_raid45              99401  0
 dm_message             36289  1 dm_raid45
 dm_region_hash         46273  1 dm_raid45
 dm_mem_cache           39489  1 dm_raid45
 dm_snapshot            52105  0
 dm_zero                35265  0
 dm_mirror              54737  0
 dm_log                 44993  3 dm_raid45,dm_region_hash,dm_mirror
 dm_mod                101521  11 dm_multipath,dm_raid45,dm_snapshot,dm_zero,dm_mirror,dm_log



 Thanks
 Paras.




 On Mon, Jan 11, 2010 at 6:33 PM, nate cen...@linuxpowered.net wrote:

 Paras pradhan wrote:
  Hi.
 
  Somehow I do not see any out put using multipath -l or multipath -ll .
 But I
  can see using dry run ie multipath -d.
 
  Also I do not see /dev/dm-*
 
  It was there before. How do I re claim it.

 Are the modules loaded?

 [r...@dc1-mysql001a:~]# lsmod | grep dm
 dm_zero    35265  0
 dm_mirror  60617  0
 dm_round_robin 36801  1
 dm_multipath   52945  2 dm_round_robin
 dm_mod 99737  17 dm_zero,dm_mirror,dm_multipath
 [r...@dc1-mysql001a:~]# multipath -l
 350002ac0006a0714 dm-1 3PARdata,VV
 [size=1.0T][features=0][hwhandler=0]
 \_ round-robin 0 [prio=0][active]
  \_ 1:0:0:3 sdc 8:32  [active][undef]
  \_ 1:0:1:3 sde 8:64  [active][undef]
  \_ 2:0:0:3 sdg 8:96  [active][undef]
  \_ 2:0:1:3 sdi 8:128 [active][undef]
 350002ac000790714 dm-0 3PARdata,VV
 [size=2.0T][features=0][hwhandler=0]
 \_ round-robin 0 [prio=0][active]
  \_ 1:0:0:2 sdb 8:16  [active][undef]
  \_ 1:0:1:2 sdd 8:48  [active][undef]
  \_ 2:0:0:2 sdf 8:80  [active][undef]
  \_ 2:0:1:2 sdh 8:112 [active][undef]



 nate







Re: [CentOS] multipath

2010-01-12 Thread Paras pradhan
Can anybody tell me if my multipath o/p is fine?

I have an active/active setup, and when I unplug the FC SAN cable on one of
the ports of my HBA my host becomes unresponsive and I need to reboot it.

When I do

multipath -ll

I can see:

[r...@cvprd2 etc]# multipath -ll
mpath2 (360060e8004770d00770d018c) dm-1 HITACHI,OPEN-V*10
[size=335G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
 \_ 6:0:0:1 sdc 8:32  [active][ready]
 \_ 6:0:0:16385 sde 8:64  [active][ready]
 \_ 6:0:0:32769 sdg 8:96  [active][ready]
 \_ 6:0:0:49153 sdi 8:128 [active][ready]
 \_ 5:0:0:1 sdk 8:160 [active][ready]
 \_ 5:0:0:16385 sdm 8:192 [active][ready]
 \_ 5:0:0:32769 sdo 8:224 [active][ready]
 \_ 5:0:0:49153 sdq 65:0  [active][ready]
mpath1 (360060e8004770d00770d0154) dm-0 HITACHI,OPEN-V*9
[size=301G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
 \_ 6:0:0:0 sdb 8:16  [active][ready]
 \_ 6:0:0:16384 sdd 8:48  [active][ready]
 \_ 6:0:0:32768 sdf 8:80  [active][ready]
 \_ 6:0:0:49152 sdh 8:112 [active][ready]
 \_ 5:0:0:0 sdj 8:144 [active][ready]
 \_ 5:0:0:16384 sdl 8:176 [active][ready]
 \_ 5:0:0:32768 sdn 8:208 [active][ready]
 \_ 5:0:0:49152 sdp 8:240 [active][ready]
[r...@cvprd2 etc]#


--

But when I do multipath -v2 I do not see anything...


[r...@cvprd2 etc]# multipath -v2
[r...@cvprd2 etc]#


--

My multipath.conf is as below:

[r...@cvprd1 etc]# more multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated


# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
blacklist {
devnode ^sda
}

## Use user friendly names, instead of using WWIDs as names.
defaults {
user_friendly_names yes
polling_interval 5
no_path_retry 3
failover immediate
path_grouping_policy multibus
rr_weight priorities
path_checker readsector0
}
devices {
device {
vendor  HITACHI
product OPEN-V
path_grouping_policy    multibus
path_checker            readsector0
getuid_callout          /sbin/scsi_id -g -u -p0x80 -s /block/%n
}
}
[r...@cvprd1 etc]#
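
One hedged observation: the -ll output above shows features=1 queue_if_no_path on both maps, and if multipath ever decides all paths are down, that feature makes I/O hang rather than fail. It can be switched off on a live map for testing (map name taken from the output above):

dmsetup message mpath2 0 "fail_if_no_path"    # stop queueing; outstanding I/O fails instead of hanging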


Thanks !
Paras




On Tue, Jan 12, 2010 at 10:37 AM, Paras pradhan pradhanpa...@gmail.com wrote:

 Upgrding the kernel to 2.6.18-164.10.1.el5xen brought back /dev/dm-* and
 now multipath -ll has the ouput.

 I am curious about what has caused the lost of /dev/dm* in my previous
 kernel.

 Any ideas?

 Paras.


 On Mon, Jan 11, 2010 at 8:09 PM, Paras pradhan pradhanpa...@gmail.com wrote:

 Yes every thing's loaded.

 Here is the output:

 [r...@cvprd1 ~]# lsmod | grep dm
 dm_round_robin         36801  0
 rdma_cm                68565  1 ib_iser
 ib_cm                  73449  1 rdma_cm
 iw_cm                  43465  1 rdma_cm
 ib_sa                  75209  2 rdma_cm,ib_cm
 ib_core               105157  6 ib_iser,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
 ib_addr                41929  1 rdma_cm
 dm_multipath           56153  1 dm_round_robin
 scsi_dh                41665  1 dm_multipath
 dm_raid45              99401  0
 dm_message             36289  1 dm_raid45
 dm_region_hash         46273  1 dm_raid45
 dm_mem_cache           39489  1 dm_raid45
 dm_snapshot            52105  0
 dm_zero                35265  0
 dm_mirror              54737  0
 dm_log                 44993  3 dm_raid45,dm_region_hash,dm_mirror
 dm_mod                101521  11 dm_multipath,dm_raid45,dm_snapshot,dm_zero,dm_mirror,dm_log



 Thanks
 Paras.




 On Mon, Jan 11, 2010 at 6:33 PM, nate cen...@linuxpowered.net wrote:

 Paras pradhan wrote:
  Hi.
 
  Somehow I do not see any out put using multipath -l or multipath -ll .
 But I
  can see using dry run ie multipath -d.
 
  Also I do not see /dev/dm-*
 
  It was there before. How do I re claim it.

 Are the modules loaded?

 [r...@dc1-mysql001a:~]# lsmod | grep dm
 dm_zero    35265  0
 dm_mirror  60617  0
 dm_round_robin 36801  1
 dm_multipath   52945  2 dm_round_robin
 dm_mod 99737  17 dm_zero,dm_mirror,dm_multipath
 [r...@dc1-mysql001a:~]# multipath -l
 350002ac0006a0714 dm-1 3PARdata,VV
 [size=1.0T][features=0][hwhandler=0]
 \_ round-robin 0 [prio=0][active]
  \_ 1:0:0:3 sdc 8:32  [active][undef]
  \_ 1:0:1:3 sde 8:64  [active][undef]
  \_ 2:0:0:3 sdg 8:96  [active][undef]
  \_ 2:0:1:3 sdi 8:128 [active][undef]
 350002ac000790714 dm-0 3PARdata,VV
 [size=2.0T][features=0][hwhandler=0]
 \_ round-robin 0 [prio=0][active]
  \_ 1:0:0:2 sdb 8:16  [active][undef]
  \_ 1:0:1:2 sdd 8:48  [active][undef]
  \_ 2:0:0:2 sdf 8:80  [active][undef]
  \_ 2:0:1:2 sdh 8:112 [active][undef]



Re: [CentOS] multipath

2010-01-12 Thread nate
Paras pradhan wrote:
 Can anybody tell me if my multipath o/p is fine?

Hitachi support should be able to

 I have a active/active setup and when I unplug the FC SAN cable on one of
 the ports of my HBA my host is being un responsive and need to reboot it.

Sounds like it is misconfigured. Doing some searches I found several
references to the /sbin/pp_hds_modular command, which you may need.
In this situation it's best to deal with your vendor support; they
should have instructions for supported MPIO configurations (whether or
not Linux device-mapper is supported, I don't know).

Also double/triple check that your array is truly active-active. From
what I recall you had a really old HDS system, and I don't recall many of
them being active-active (in the sense that you can access the same
volume over multiple controllers simultaneously vs. having different
volumes exported over different controllers).

For modular active-active: I was in a presentation last year by HDS
for their new AMS2k line-up and they claimed it was the first
modular active-active array (at least from them). Others have had
modular active-active for a few years now; they tried to claim they
were the first despite that not being true.

http://www.hds.com/products/storage-systems/adaptable-modular-storage-2000-family/index.html

For me, if I yank a cable the system recovers within a couple of
seconds. My vendor provides extremely specific step-by-step
instructions for configuring it, including modifications to the
multipath init script for proper operation (7 pages in a PDF just
for MPIO configuration and examples).

nate




[CentOS] multipath

2010-01-11 Thread Paras pradhan
Hi.

Somehow I do not see any output using multipath -l or multipath -ll. But I
can see it using a dry run, i.e. multipath -d.

Also I do not see /dev/dm-*

It was there before. How do I reclaim it?


Thanks!
Paras.


Re: [CentOS] multipath

2010-01-11 Thread nate
Paras pradhan wrote:
 Hi.

 Somehow I do not see any out put using multipath -l or multipath -ll . But I
 can see using dry run ie multipath -d.

 Also I do not see /dev/dm-*

 It was there before. How do I re claim it.

Are the modules loaded?

[r...@dc1-mysql001a:~]# lsmod | grep dm
dm_zero    35265  0
dm_mirror  60617  0
dm_round_robin 36801  1
dm_multipath   52945  2 dm_round_robin
dm_mod 99737  17 dm_zero,dm_mirror,dm_multipath
[r...@dc1-mysql001a:~]# multipath -l
350002ac0006a0714 dm-1 3PARdata,VV
[size=1.0T][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:3 sdc 8:32  [active][undef]
 \_ 1:0:1:3 sde 8:64  [active][undef]
 \_ 2:0:0:3 sdg 8:96  [active][undef]
 \_ 2:0:1:3 sdi 8:128 [active][undef]
350002ac000790714 dm-0 3PARdata,VV
[size=2.0T][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:2 sdb 8:16  [active][undef]
 \_ 1:0:1:2 sdd 8:48  [active][undef]
 \_ 2:0:0:2 sdf 8:80  [active][undef]
 \_ 2:0:1:2 sdh 8:112 [active][undef]
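
If they were not loaded, a quick sketch of getting things back (CentOS 5-style service name assumed):

modprobe dm_multipath
modprobe dm_round_robin
service multipathd restart
multipath -v2        # rebuild maps; /dev/dm-* and /dev/mapper/* should reappear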



nate




Re: [CentOS] multipath

2010-01-11 Thread Paras pradhan
Yes every thing's loaded.

Here is the output:

[r...@cvprd1 ~]# lsmod | grep dm
dm_round_robin         36801  0
rdma_cm                68565  1 ib_iser
ib_cm                  73449  1 rdma_cm
iw_cm                  43465  1 rdma_cm
ib_sa                  75209  2 rdma_cm,ib_cm
ib_core               105157  6 ib_iser,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
ib_addr                41929  1 rdma_cm
dm_multipath           56153  1 dm_round_robin
scsi_dh                41665  1 dm_multipath
dm_raid45              99401  0
dm_message             36289  1 dm_raid45
dm_region_hash         46273  1 dm_raid45
dm_mem_cache           39489  1 dm_raid45
dm_snapshot            52105  0
dm_zero                35265  0
dm_mirror              54737  0
dm_log                 44993  3 dm_raid45,dm_region_hash,dm_mirror
dm_mod                101521  11 dm_multipath,dm_raid45,dm_snapshot,dm_zero,dm_mirror,dm_log



Thanks
Paras.



On Mon, Jan 11, 2010 at 6:33 PM, nate cen...@linuxpowered.net wrote:

 Paras pradhan wrote:
  Hi.
 
  Somehow I do not see any out put using multipath -l or multipath -ll .
 But I
  can see using dry run ie multipath -d.
 
  Also I do not see /dev/dm-*
 
  It was there before. How do I re claim it.

 Are the modules loaded?

 [r...@dc1-mysql001a:~]# lsmod | grep dm
 dm_zero    35265  0
 dm_mirror  60617  0
 dm_round_robin 36801  1
 dm_multipath   52945  2 dm_round_robin
 dm_mod 99737  17 dm_zero,dm_mirror,dm_multipath
 [r...@dc1-mysql001a:~]# multipath -l
 350002ac0006a0714dm-1 3PARdata,VV
 [size=1.0T][features=0][hwhandler=0]
 \_ round-robin 0 [prio=0][active]
  \_ 1:0:0:3 sdc 8:32  [active][undef]
  \_ 1:0:1:3 sde 8:64  [active][undef]
  \_ 2:0:0:3 sdg 8:96  [active][undef]
  \_ 2:0:1:3 sdi 8:128 [active][undef]
 350002ac000790714 dm-0 3PARdata,VV
 [size=2.0T][features=0][hwhandler=0]
 \_ round-robin 0 [prio=0][active]
  \_ 1:0:0:2 sdb 8:16  [active][undef]
  \_ 1:0:1:2 sdd 8:48  [active][undef]
  \_ 2:0:0:2 sdf 8:80  [active][undef]
  \_ 2:0:1:2 sdh 8:112 [active][undef]



 nate





[CentOS] multipath using defaults rather than multipath.conf contents for some devices (?) - why ?

2009-09-16 Thread McCulloch, Alan
hi all

We have a rh linux server connected to two HP SAN controllers, one an HSV200 
(on the way out),
the other an HSV400 (on the way in). (Via a Qlogic HBA).

/etc/multipath.conf contains this :

device
{
vendor  (COMPAQ|HP)
product HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]0
getuid_callout  /sbin/scsi_id -g -u -s /block/%n
prio_callout            /sbin/mpath_prio_alua /dev/%n
hardware_handler        0
path_selector           round-robin 0
path_grouping_policy    group_by_prio
failback                immediate
rr_weight               uniform
no_path_retry           18
rr_min_io               100
path_checker            tur
}

- but our actual multipathing, as shown by multipath -ll and multipath -ll -v 3,
looks as though for the HSV400 it is using the defaults rather than these
settings. The defaults are

#defaults {
#   udev_dir                /dev
#   polling_interval        10
#   selector                round-robin 0
#   path_grouping_policy    multibus
#   getuid_callout          /sbin/scsi_id -g -u -s /block/%n
#   prio_callout            /bin/true
#   path_checker            readsector0
#   rr_min_io               100
#   rr_weight               priorities
#   failback                immediate
#   no_path_retry           fail
#   user_friendly_name      yes


and multipath -ll reports :

.
.
[snip other HSV400 paths - all similar]
mpath12 (3600508b40007518f9052) dm-1 HP,HSV400
[size=150G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][active]
 \_ 0:0:5:9  sdab 65:176 [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 0:0:3:9  sdn  8:208  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 0:0:4:9  sdu  65:64  [active][ready]
mpath11 (3600508b40007518f7037) dm-6 HP,HSV200
[size=200G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=50][active]
 \_ 0:0:1:7  sdd  8:48   [active][ready]
\_ round-robin 0 [prio=10][enabled]
 \_ 0:0:2:7  sdh  8:112  [active][ready]
.
.
[snip other HSV200 paths - all similar]



multipath -ll -v 3 includes explicit statements that defaults are being used 
for the HSV400

(long output snipped...)

sdaa: path checker = readsector0 (config file default)

versus

sda: path checker = tur (controller setting)

sdx: getprio = NULL (internal default)

versus

sdd: getprio = /sbin/mpath_prio_alua %n (controller setting)



- furthermore, we see in the log file messages from both readsector0 *and* tur,
rather than just tur as we would if the correct settings were used, which also
backs that up.

My questions are basically - why is it happening, and how do we fix it?

The vendor and product regexps definitely do match HP and both the HSV200
and HSV400 respectively, so it doesn't seem that fiddling with the patterns
will work, and I'm sure this config has been tested.

It's not due to this server having to deal with two controllers - we have a
second server that only mounts from the HSV400, and its multipath settings
appear to be entirely the defaults, and not what we have set.

(And conversely, it's not due to the conf file not being read at all - since
the server with two controllers is using the correct config for one of them,
but not the other.)

thanks for any tips and I will summarise.

Cheers

AMcC





Re: [CentOS] multipath using defaults rather than multipath.conf contents for some devices (?) - why ?

2009-09-16 Thread nate
McCulloch, Alan wrote:
 hi all

 We have a rh linux server connected to two HP SAN controllers, one an HSV200
 (on the way out),
 the other an HSV400 (on the way in). (Via a Qlogic HBA).

 /etc/multipath.conf contains this :

 device
 {
 vendor  (COMPAQ|HP)
 product HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]0

What does /proc/scsi/scsi say? Perhaps your config doesn't
match what the system is being presented.

For my MPIO my vendor suggested setting the defaults manually
in the config file so my config file is:

defaults {
    udev_dir                        /dev
    polling_interval                10
    default_selector                round-robin 0
    default_path_grouping_policy    multibus
    default_getuid_callout          /sbin/scsi_id -g -u -s /block/%n
    default_prio_callout            /bin/true
    rr_min_io                       100
    rr_weight                       priorities
    failback                        immediate
}

blacklist {
wwid 26353900f02796769
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*
devnode ^hd[a-z][0-9]*
devnode ^cciss*
}

devices {

device {
vendor 3PARdata
product VV 
path_grouping_policy multibus
path_checker tur
no_path_retry 60
}
}

multipath -l
350002ac0006a0714 dm-1 3PARdata,VV
[size=1.0T][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:3 sdc 8:32  [active][undef]
 \_ 1:0:1:3 sde 8:64  [active][undef]
 \_ 2:0:0:3 sdg 8:96  [active][undef]
 \_ 2:0:1:3 sdi 8:128 [active][undef]
350002ac000790714 dm-0 3PARdata,VV
[size=2.0T][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:2 sdb 8:16  [active][undef]
 \_ 1:0:1:2 sdd 8:48  [active][undef]
 \_ 2:0:0:2 sdf 8:80  [active][undef]
 \_ 2:0:1:2 sdh 8:112 [active][undef]

/proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 32 Lun: 00
  Vendor: DP   Model: BACKPLANERev: 1.05
  Type:   EnclosureANSI SCSI revision: 05
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: DELL Model: PERC 6/i Rev: 1.11
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 02
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 03
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 01 Lun: 02
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 01 Lun: 03
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 02
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 03
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 01 Lun: 02
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 01 Lun: 03
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05

nate




Re: [CentOS] multipath using defaults rather than multipath.conf contents for some devices (?) - why ?

2009-09-16 Thread McCulloch, Alan
thanks

what's being presented, as reported by /proc/scsi/scsi,
seems to match the keys in /etc/multipath.conf:


[r...@illuminati scsi]# cat scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HP   Model: Ultrium 3-SCSI   Rev: M23Z
  Type:   Sequential-AccessANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: HP   Model: HSV200   Rev: 6220
  Type:   RAID ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 02 Lun: 00
  Vendor: HP   Model: HSV200   Rev: 6220
  Type:   RAID ANSI SCSI revision: 05
.
.
.
Host: scsi0 Channel: 00 Id: 03 Lun: 04
  Vendor: HP   Model: HSV400   Rev: 0952
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 03 Lun: 05
  Vendor: HP   Model: HSV400   Rev: 0952
  Type:   Direct-AccessANSI SCSI revision: 05
.
.
.

Thanks for the suggestion about setting the defaults - I'll check
with the product vendor.

(Re the trailing whitespace after the product name - so some devices
present with trailing whitespace in the name?)

Still curious about why the defaults are being used.
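
One more thing that may be worth trying, as a sketch - it assumes the running daemon simply never re-read the edited multipath.conf:

multipathd -k"reconfigure"                         # force the daemon to re-read multipath.conf
multipathd -k"show config" | grep -A 15 HSV400     # then check which settings it resolved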

cheers

Alan McCulloch



-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of 
nate
Sent: Thursday, 17 September 2009 1:08 p.m.
To: centos@centos.org
Subject: Re: [CentOS] multipath using defaults rather than multipath.conf 
contents for some devices (?) - why ?

McCulloch, Alan wrote:
 hi all

 We have a rh linux server connected to two HP SAN controllers, one an HSV200
 (on the way out),
 the other an HSV400 (on the way in). (Via a Qlogic HBA).

 /etc/multipath.conf contains this :

 device
 {
 vendor  (COMPAQ|HP)
 product HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]0

What does /proc/scsi/scsi say? Perhaps your config doesn't
match what the system is being presented.

For my MPIO my vendor suggested setting the defaults manually
in the config file so my config file is:

defaults {
    udev_dir                        /dev
    polling_interval                10
    default_selector                round-robin 0
    default_path_grouping_policy    multibus
    default_getuid_callout          /sbin/scsi_id -g -u -s /block/%n
    default_prio_callout            /bin/true
    rr_min_io                       100
    rr_weight                       priorities
    failback                        immediate
}

blacklist {
wwid 26353900f02796769
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*
devnode ^hd[a-z][0-9]*
devnode ^cciss*
}

devices {

device {
vendor 3PARdata
product VV 
path_grouping_policy multibus
path_checker tur
no_path_retry 60
}
}

multipath -l
350002ac0006a0714 dm-1 3PARdata,VV
[size=1.0T][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:3 sdc 8:32  [active][undef]
 \_ 1:0:1:3 sde 8:64  [active][undef]
 \_ 2:0:0:3 sdg 8:96  [active][undef]
 \_ 2:0:1:3 sdi 8:128 [active][undef]
350002ac000790714 dm-0 3PARdata,VV
[size=2.0T][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:2 sdb 8:16  [active][undef]
 \_ 1:0:1:2 sdd 8:48  [active][undef]
 \_ 2:0:0:2 sdf 8:80  [active][undef]
 \_ 2:0:1:2 sdh 8:112 [active][undef]

/proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 32 Lun: 00
  Vendor: DP   Model: BACKPLANERev: 1.05
  Type:   EnclosureANSI SCSI revision: 05
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: DELL Model: PERC 6/i Rev: 1.11
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 02
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 03
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 01 Lun: 02
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 01 Lun: 03
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 02
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 03
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 01 Lun: 02
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 01 Lun: 03
  Vendor: 3PARdata Model: VV   Rev: 
  Type:   Direct-AccessANSI SCSI revision: 05

nate

[CentOS] multipath using 2 NICs (or HBA?)

2007-10-23 Thread Scott Moseman
I'm told that we cannot do Multipath I/O on our iSCSI SAN on RHEL
with 2 network cards.  I could use 1 network card, but need an HBA.
Is this true?  Do I need an HBA, or can I do Multipath using 2 NICs?
We're running RHEL 4 and CentOS 4 and 5 servers on this network.

I have been reading through the device-mapper documentation, but I
have not found anything (unless I'm not clear on what they're saying).
Eventually I will give it a try but I haven't had the time to fiddle so far.
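
For what it's worth, a hedged sketch of how software iSCSI over two plain NICs is usually wired up with open-iscsi so that dm-multipath sees two paths; the interface names and the portal address are placeholders:

iscsiadm -m iface -I iface0 --op new
iscsiadm -m iface -I iface0 --op update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 --op new
iscsiadm -m iface -I iface1 --op update -n iface.net_ifacename -v eth1
iscsiadm -m discovery -t st -p 192.168.1.100 -I iface0 -I iface1
iscsiadm -m node -L all      # one session per iface; dm-multipath then coalesces the two paths into one map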

Thanks,
Scott