Debian [Lenny|Squeeze], bnx2i and open-iscsi
Hello, I'd like to ask what the situation is nowadays with the offloaded bnx2i driver, open-iscsi, and Debian. I'm currently running Lenny with kernel 2.6.32, which AFAIK should support the bnx2i driver, but Lenny ships open-iscsi 2.0.870~rc3-0.4.1 while Squeeze has 2.0.871.3-2. I read in past messages (http://groups.google.com/group/open-iscsi/browse_thread/thread/c05d537c589b9806/2bfc91cd5df8e6e0?lnk=gst&q=debian+bnx2i#2bfc91cd5df8e6e0) that you needed an external Broadcom daemon, since not all of the features were included in open-iscsi, but that was more than a year ago. What's the situation now? How does 2.0.871 behave with offloaded BNX devices? TIA
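(For context, a minimal sketch of how an offload interface is usually inspected and bound with iscsiadm's iface mode; this assumes bnx2i is loaded and the Broadcom userspace daemon is running. The iface name, MAC, and addresses below are placeholders, not values from this thread.)

    # list the interfaces open-iscsi knows about; bnx2i ports show up
    # with transport "bnx2i" once the driver has registered
    iscsiadm -m iface

    # offload ifaces carry their own IP, set on the iface record
    # (placeholder iface name and address)
    iscsiadm -m iface -I bnx2i.00:10:18:aa:bb:cc -o update \
        -n iface.ipaddress -v 172.16.0.2

    # discover and log in through that iface instead of the default tcp one
    iscsiadm -m discovery -t st -p 172.16.0.10:3260 -I bnx2i.00:10:18:aa:bb:cc
    iscsiadm -m node -p 172.16.0.10:3260 -I bnx2i.00:10:18:aa:bb:cc --login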
Re: open-iscsi, Debian Etch and network disconnections
On 26 sep, 20:06, [EMAIL PROTECTED] wrote:
> I do not think this is enough info. Do you just see the above and then see
> write errors on the FS? Or do you see an error message about the other
> path failing?

First I get a network disconnection (this is the source of all my problems, I think), then I get the multipath message (it's in round-robin, so it switches to using only one NIC), and then the write errors. At that point I stopped the test, because the other NIC would go down as well and, at least with 2.6.18, there was a risk of a kernel panic when writing to a remote volume without a network connection.

> The above errors are expected and ok because of the nic port going up
> and down, and they only indicate that on one path we got errors and
> that multipath handled it by failing a path.

Exactly :) But the problem is that the network is going up and down; it isn't supposed to act like this. I'm just testing the remote storage speed with tiobench, not disconnecting the cable to test how multipath handles failures. So I was asking for advice because I don't know whether the problem is on the Linux side or on the Infortrend side (as said, the two are directly connected, with no switches at all between them).

> If you run
>
> iscsiadm -m session -P 3

Here it is (sorry for the word wrap):

logs1:~# iscsiadm -m session -P 3
Target: iqn.2002-10.com.infortrend:raid.sn7612961.012
    Current Portal: 172.16.10.10:3260,1
    Persistent Portal: 172.16.10.10:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01.5133b441938d
        Iface IPaddress: 172.16.10.1
        Iface HWaddress: default
        Iface Netdev: default
        SID: 1
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: Unknown
        Internal iscsid Session State: NO CHANGE
        Negotiated iSCSI params:
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 65536
        MaxXmitDataSegmentLength: 65536
        FirstBurstLength: 65536
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: No
        MaxOutstandingR2T: 1
        Attached SCSI devices:
        Host Number: 1  State: running
        scsi1 Channel 00 Id 0 Lun: 0
            Attached scsi disk sdb  State: running
        scsi1 Channel 00 Id 0 Lun: 1
            Attached scsi disk sdc  State: running
Target: iqn.2002-10.com.infortrend:raid.sn7612996.001
    Current Portal: 172.16.0.10:3260,1
    Persistent Portal: 172.16.0.10:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01.5133b441938d
        Iface IPaddress: 172.16.0.1
        Iface HWaddress: default
        Iface Netdev: default
        SID: 2
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: Unknown
        Internal iscsid Session State: NO CHANGE
        Negotiated iSCSI params:
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 65536
        MaxXmitDataSegmentLength: 65536
        FirstBurstLength: 65536
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: No
        MaxOutstandingR2T: 1
        Attached SCSI devices:
        Host Number: 2  State: running
        scsi2 Channel 00 Id 0 Lun: 0
            Attached scsi disk sdd  State: running
        scsi2 Channel 00 Id 0 Lun: 1
            Attached scsi disk sde  State: running

> You should see at least two sessions. One session would have sdb and
> the other session should have some other sdX that we did not see
> errors for in the log snippets.

Exactly, but as I said, sdd will fail the same way; I just stopped the test early because I don't have physical access to the machine and a reboot could be a PITA :)

> If you run
>
> multipath -ll
>

mpath0 (3600d02300413075089029900) dm-0 IFT,S16E-R1130
[size=781G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][enabled]
 \_ 1:0:0:0 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][active]
 \_ 2:0:0:0 sdd 8:48 [active][ready]

> Also Konrad's suggestion to use 'queue_if_no_path' or no_path_retry
> would fix the problem where i[...]
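(To make the quoted suggestion concrete: a hedged sketch of the relevant /etc/multipath.conf fragment. no_path_retry and queue_if_no_path are standard dm-multipath settings; the values here are illustrative, not taken from this thread.)

    defaults {
        # queue I/O while all paths are down instead of failing it back
        # to the filesystem; equivalent to features "1 queue_if_no_path"
        no_path_retry    queue

        # alternative: retry for 10 path-checker polling intervals, then
        # fail outstanding I/O -- avoids hanging forever on a dead array
        # no_path_retry  10
    }

Note that this only masks the I/O errors while a path bounces; it does nothing about the underlying link flapping.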
open-iscsi, Debian Etch and network disconnections
Hi, I'm trying to set up a Debian server as a front-end to an Infortrend SAN, but I'm experiencing problems when putting the remote storage under load with tiobench. Basically, the network interface loses link all the time. This is an extract from the dmesg output while running tiobench. I'm using multipath-tools for HA, and when one path fails it switches automatically to the second path (through a second Ethernet port), which then fails as well.

EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
EXT3 FS on dm-2, internal journal
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with ordered data mode.
e1000: eth2: e1000_watchdog: NIC Link is Down
e1000: eth2: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
e1000: eth2: e1000_watchdog: NIC Link is Down
e1000: eth2: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
e1000: eth2: e1000_watchdog: NIC Link is Down
e1000: eth2: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
e1000: eth2: e1000_watchdog: 10/100 speed: disabling TSO
e1000: eth2: e1000_watchdog: NIC Link is Down
e1000: eth2: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
e1000: eth2: e1000_watchdog: NIC Link is Down
e1000: eth2: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
e1000: eth2: e1000_watchdog: NIC Link is Down
e1000: eth2: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
e1000: eth2: e1000_watchdog: NIC Link is Down
e1000: eth2: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
e1000: eth2: e1000_watchdog: 10/100 speed: disabling TSO
connection1:0: iscsi: detected conn error (1011)
e1000: eth2: e1000_watchdog: NIC Link is Down
e1000: eth2: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
iscsi: host reset succeeded
iscsi: host reset succeeded
sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sdb, sector 15552143
device-mapper: multipath: Failing path 8:16.
sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT,SUGGEST_OK
end_request: I/O error, dev sdb, sector 15552407
(lots of write errors)

Currently I'm using the etchnhalf kernel (2.6.24) with open-iscsi backported from www.backports.org packages, but the same happened with the stock Etch 2.6.18. Any ideas? One last note: the server is directly attached to the Infortrend SAN (no switches), so maybe it could be the SAN's fault. Thank you in advance
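(Not from the original thread, but a diagnostic sketch that fits here: the log shows the link renegotiating, sometimes down to 100 Mbps, on a direct-attached gigabit link, so inspecting and pinning the link settings with ethtool is a reasonable first step. eth2 is the interface from the log above.)

    # show link state, negotiated speed, and advertised modes
    ethtool eth2

    # watch the driver's error counters while tiobench runs; rising
    # CRC/carrier counts point at cabling or autonegotiation trouble
    ethtool -S eth2 | grep -iE 'err|crc|carrier'

    # as a test, advertise only gigabit full duplex
    # (keep autoneg on -- 1000BASE-T requires it)
    ethtool -s eth2 speed 1000 duplex full autoneg on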