Hi Ciprian,

Thanks for the additional information.  There are a couple of notes about
this offload technique.

1.  The route/device CNIC will choose is based on the host routing
table.  (The CNIC uses the kernel function ip_route_output_key() to
determine the device to use.  This function can possibly return devices
that have assigned IP addresses but are down'ed.)  Could you also provide
your host routing table, along with the IP addresses used by your network
devices?  Looking at the routing table, does it appear that CNIC is
choosing the correct device for the offload?
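A quick way to sanity-check what ip_route_output_key() would resolve on
your host is to ask the kernel directly.  This is just a diagnostic
sketch; substitute one of your target portal IPs (I'm using 192.168.69.11
from your description as an example):

```shell
# Ask the kernel which device/route it would select for the target IP;
# this mirrors the lookup ip_route_output_key() performs in the CNIC driver.
ip route get 192.168.69.11

# Dump the full routing table as well, so any competing routes are visible.
ip route show
```

If "ip route get" names a device other than eth2, or a down'ed interface
that still has an address assigned, that would explain the offload taking
the wrong path.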

2.  In the iface files, I don't see any key/value pairs with the key
'iface.net_ifacename'.  This value is passed to the brcm_iscsiuio daemon
to determine which interface to use when ARP'ing.
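For reference, a minimal bnx2i iface file would look something like the
sketch below.  The specific values are guesses based on your description
(eth2's hwaddr and the 192.168.69.32 offload address you mentioned), and
the file path varies by distro, so adjust to match your setup:

```
# e.g. /var/lib/iscsi/ifaces/bnx2i.00:10:18:3b:98:04 (path varies by distro)
iface.iscsi_ifacename = bnx2i.00:10:18:3b:98:04
iface.hwaddress = 00:10:18:3b:98:04
iface.transport_name = bnx2i
iface.net_ifacename = eth2
```

The important line for the ARP'ing issue is the last one.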

Thanks again.

-Ben

On Wed, 2010-01-20 at 15:51 -0800, Ciprian Marius Vizitiu wrote: 
> Hi Benjamin,
> 
> The lack of communication looks strange: how come I still get to see two
> out of three disks from time to time? 
> 
> Anyway, back to your questions: topology is... nothing special, a
> straightforward client-to-storage setup in the same VLAN. The client is a
> Dell M610 blade with an added "mezzanine" network card (another 5709, like
> the on-board one). Before anybody asks: yes, I do have the "hardware iSCSI
> offload key", whether that really matters or not. Since each card has two
> ports, coupled with the on-board BCM that gives us a total of 4 visible
> ethX ifaces. I've assigned eth0 to LAN, downed eth1 (since it's the same
> card as eth0), configured eth2 to be 192.168.69.22 and put eth3 also down;
> since bnx2 has a split personality I've also assigned 192.168.69.32 to the
> offloaded interface (that should be the initiator address for bnx2i). As
> for the target, on the same VLAN I've got a PS6000XV which takes from
> 69.11 to 69.14, with .10 being the "group address" (if you connect to .10
> the EQL will supposedly "redirect" to any of the other interfaces based on
> load). Neither the initiator nor the target is tagging packets per se;
> instead, each of the ports is configured as "static access" in the switch.
> The target in turn is physically hooked to the very same Brocade enclosure
> stack of switches which serves the blades. The hwaddr for the eth2 iface
> assigned to iSCSI is 00:10:18:3B:98:04.
> 
> I've attached the files for the three disks which can be (and always are)
> discovered from this IP... yeah, how come the discovery always works? Aw,
> lemme guess: discovery always uses tcp transport? :-|
> 
> 
> 
> > 
> > From the brcm_iscsiuio logs, I didn't see any communication between the
> > brcm_iscsiuio daemon and the CNIC driver.  This is a problem because
> > the brcm_iscsiuio daemon does the initial ARP'ing.  To further
> > diagnose this problem, I was also wondering if you could describe your
> > network topology (i.e. the network interfaces used, which interface
> > carries the iSCSI offloaded connection, and all the IP addresses used
> > for your network connectivity, target and initiator); also, if you
> > could attach the iface configuration files you are using with iscsid,
> > that would be very helpful too.
> > 
> > Thanks again.
> > 
> > -Ben
> > 
> > On Wed, 2010-01-20 at 13:11 -0800, Ciprian Marius Vizitiu wrote:
> > > Hi everyone,
> > >
> > > Thanks a lot for the help.
> > >
> > > The requested log entries are attached, and they represent the
> > > typical behavior: in most cases it looks as if the host cannot reach
> > > the EQL. Let me tell you, that's NOT true: during the failed
> > > attempts, in another window, pings just flow nicely. Sometimes
> > > (apparently depending on the planets' positions) the connection just
> > > works... but only for SOME of the drives; yet, try as I might, I
> > > couldn't establish a rule. :-o Again, switching to plain tcp
> > > transport (even with jumbos) works very well.
> > >
> > >
> > > > -----Original Message-----
> > > > From: open-iscsi@googlegroups.com
> > > > [mailto:open-is...@googlegroups.com]
> > > > On Behalf Of Benjamin Li
> > > > Sent: Wednesday, January 20, 2010 8:28 PM
> > > > To: open-iscsi@googlegroups.com
> > > > Subject: Re: Can anybody confirm that bnx2i on 5709 cards works
> > > > with Equallogic 6xxx?
> > > >
> > > > Also, in addition to posting the kernel logs (/var/log/messages),
> > > > please post the /var/log/brcm_iscsi.log file, which is generated
> > > > by the user-space daemon, brcm_iscsiuio.  This will give a little
> > > > bit more insight into what is happening during the offload process.
> > > >
> > > > Thanks again.
> > > >
> > > > -Ben
> > > >
> > > > On Wed, 2010-01-20 at 11:12 -0800, Pasi Kärkkäinen wrote:
> > > > > On Wed, Jan 20, 2010 at 07:46:48PM +0100, Ciprian Vizitiu (GBIF) wrote:
> > > > > > Hi,
> > > > > >
> > > > > > Can anybody here please confirm whether iSCSI offload via
> > > > > > bnx2i, on RHEL 5.4, with 5709 Broadcoms towards the EQL 6000
> > > > > > series works or not?  Despite countless attempts (and the
> > > > > > latest EQL OS update) I still can't match them (but then the
> > > > > > software transport works perfectly). :-|
> > > > > >
> > > > >
> > > > > Do you get any kind of errors? Check "dmesg" and
> > > > > /var/log/messages.  (Is there some option to make bnx2i give
> > > > > verbose debug?)
> > > > >
> > > > > -- Pasi
> > > > >
> > > >
> > >
> > 
> 

