Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Pasi Kärkkäinen
On Wed, Dec 30, 2009 at 11:48:31AM -0600, Kyle Schmitt wrote:
 On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie micha...@cs.wisc.edu wrote:
  So far single connections work: If I set up the box to use one NIC, I
  get one connection and can use it just fine.
  Could you send the /var/log/messages for when you run the login command
  so I can see the disk info?
 
 Sorry for the delay.  In the meantime I tore down the server and
 re-configured it using ethernet bonding.  It worked and, according to
 iozone, provided moderately better throughput than the single
 connection I got before.  Moderately.  Measurably.  Not significantly.


If you have just a single iSCSI connection/login from the initiator to the
target, then you'll have only one TCP connection, and that means bonding
won't help you at all - you'll only be able to utilize one link of the
bond.

Bonding needs multiple TCP/IP connections to be able to deliver more
bandwidth.
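
For reference, the usual way to get multiple TCP connections with open-iscsi
is to create one iface per NIC, which gives you one session (and one TCP
connection) per NIC for dm-multipath to spread the load over. Roughly like
this - the iface names, the ethX devices and the portal address are just
placeholders for your setup:

iscsiadm -m iface -I iface_eth2 -o new
iscsiadm -m iface -I iface_eth3 -o new
iscsiadm -m iface -I iface_eth2 -o update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I iface_eth3 -o update -n iface.net_ifacename -v eth3
iscsiadm -m discovery -t sendtargets -p 192.168.0.10:3260 -I iface_eth2 -I iface_eth3
iscsiadm -m node -L all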

 I tore it down after that and reconfigured again using MPIO, and funny
 enough, this time it worked.  I can access the lun now using two
 devices (sdb and sdd), and both ethernet devices that connect to iscsi
 show traffic.
 
 The weird thing is that, aside from writes, bonding was measurably
 faster than MPIO.  Does that seem right?
 

That seems a bit weird.

How did you configure multipath? Please paste your multipath settings. 
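It would also help to see the output of "multipath -ll" and
"iscsiadm -m session -P 3", so we can see how the paths ended up grouped.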

-- Pasi

 
 Here's the dmesg, if that lends any clues.  Thanks for any input!
 
 --Kyle
 
  156 lines of dmesg follows 
 
 cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
 iscsi: registered transport (cxgb3i)
 device-mapper: table: 253:6: multipath: error getting device
 device-mapper: ioctl: error adding target to table
 device-mapper: table: 253:6: multipath: error getting device
 device-mapper: ioctl: error adding target to table
 Broadcom NetXtreme II CNIC Driver cnic v2.0.0 (March 21, 2009)
 cnic: Added CNIC device: eth0
 cnic: Added CNIC device: eth1
 cnic: Added CNIC device: eth2
 cnic: Added CNIC device: eth3
 Broadcom NetXtreme II iSCSI Driver bnx2i v2.0.1e (June 22, 2009)
 iscsi: registered transport (bnx2i)
 scsi3 : Broadcom Offload iSCSI Initiator
 scsi4 : Broadcom Offload iSCSI Initiator
 scsi5 : Broadcom Offload iSCSI Initiator
 scsi6 : Broadcom Offload iSCSI Initiator
 iscsi: registered transport (tcp)
 iscsi: registered transport (iser)
 bnx2: eth0: using MSIX
 ADDRCONF(NETDEV_UP): eth0: link is not ready
 bnx2i: iSCSI not supported, dev=eth0
 bnx2i: iSCSI not supported, dev=eth0
 bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive &
 transmit flow control ON
 ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
 bnx2: eth2: using MSIX
 ADDRCONF(NETDEV_UP): eth2: link is not ready
 bnx2i: iSCSI not supported, dev=eth2
 bnx2i: iSCSI not supported, dev=eth2
 bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
 ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
 bnx2: eth3: using MSIX
 ADDRCONF(NETDEV_UP): eth3: link is not ready
 bnx2i: iSCSI not supported, dev=eth3
 bnx2i: iSCSI not supported, dev=eth3
 bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
 ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
 eth0: no IPv6 routers present
 eth2: no IPv6 routers present
 scsi7 : iSCSI Initiator over TCP/IP
 scsi8 : iSCSI Initiator over TCP/IP
 scsi9 : iSCSI Initiator over TCP/IP
 scsi10 : iSCSI Initiator over TCP/IP
   Vendor: DGC   Model: RAID 5Rev: 0429
   Type:   Direct-Access  ANSI SCSI revision: 04
 sdb : very big device. try to use READ CAPACITY(16).
 SCSI device sdb: 7693604864 512-byte hdwr sectors (3939126 MB)
 sdb: Write Protect is off
 sdb: Mode Sense: 7d 00 00 08
 SCSI device sdb: drive cache: write through
 sdb : very big device. try to use READ CAPACITY(16).
 SCSI device sdb: 7693604864 512-byte hdwr sectors (3939126 MB)
 sdb: Write Protect is off
 sdb: Mode Sense: 7d 00 00 08
 SCSI device sdb: drive cache: write through
  sdb:5  Vendor: DGC   Model: RAID 5Rev: 0429
   Type:   Direct-Access  ANSI SCSI revision: 04
   Vendor: DGC   Model: RAID 5Rev: 0429
   Type:   Direct-Access  ANSI SCSI revision: 04
 sdc : very big device. try to use READ CAPACITY(16).
 SCSI device sdc: 7693604864 512-byte hdwr sectors (3939126 MB)
 sdc: test WP failed, assume Write Enabled
 sdc: asking for cache data failed
 sdc: assuming drive cache: write through
   Vendor: DGC   Model: RAID 5Rev: 0429
   Type:   Direct-Access  ANSI SCSI revision: 04
 sdc : very big device. try to use READ CAPACITY(16).
 SCSI device sdc: 7693604864 512-byte hdwr sectors (3939126 MB)
 sdc: test WP failed, assume Write Enabled
 sde : very big device. try to use READ CAPACITY(16).
 sdc: asking for cache data failed
 sdc: assuming drive cache: write through
  sdc:5SCSI device sde: 7693604864 512-byte hdwr sectors (3939126 MB)
 sd 8:0:0:0: Device not ready: 6: Current: sense key: Not Ready
 Add. Sense: Logical 

Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Kyle Schmitt
On Thu, Dec 31, 2009 at 8:23 AM, Pasi Kärkkäinen pa...@iki.fi wrote:
 If you have just a single iSCSI connection/login from the initiator to the
 target, then you'll have only one TCP connection, and that means bonding
 won't help you at all - you'll only be able to utilize one link of the
 bond.

 Bonding needs multiple TCP/IP connections to be able to deliver more
 bandwidth.

That's what I thought, but I figured it was one of the following three
possibilities:
MPIO was (mis)configured and adding more overhead than bonding,
OR the initiator was firing multiple concurrent requests (which you
say it doesn't, so I'll believe you),
OR the SAN was under massively different load between the test runs
(not too likely, but possible - only one other LUN is in use).
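
If it's the first one, are there particular knobs worth checking - e.g.
rr_min_io or the path_selector on the DGC entry in the multipath.conf below?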

 That seems a bit weird.
That's what I thought, otherwise I would have just gone with it.

 How did you configure multipath? Please paste your multipath settings.

 -- Pasi

Here's the /etc/multipath.conf.  Were there other config options that
you'd need to see?

devnode_blacklist {
        devnode "^sda[0-9]*"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
                vendor                  "EMC"
                product                 "SYMMETRIX"
                path_grouping_policy    multibus
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                path_selector           "round-robin 0"
                features                "0"
                hardware_handler        "0"
                failback                immediate
        }
        device {
                vendor                  "DGC"
                product                 "*"
                path_grouping_policy    group_by_prio
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler        "1 emc"
                features                "1 queue_if_no_path"
                no_path_retry           300
                path_checker            emc_clariion
                failback                immediate
        }
}
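
I'll also grab the output of "multipath -ll" and "iscsiadm -m session -P 3"
next time I'm on the box and send that along.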





Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Kyle Schmitt
Note: the EMC-specific bits of that multipath.conf were just copied from
boxes that connect to the SAN over FC and use MPIO successfully.
