Hello Joe

Ifaces are just a way to create an abstraction for creating a unique
session. You need a unique session to be able to create logical paths to
a LUN, and an iface aids in that.

The number of logical paths may or may not be equal to the number of
physical paths.
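As a rough sketch (the iface and NIC names here are just examples, not
from your setup), binding an iface to a specific NIC with iscsiadm looks
like this:

```shell
# Create two ifaces and bind each one to a physical NIC
# (eth2/eth3 and the iface names are example values)
iscsiadm -m iface -I iface-eth2 --op=new
iscsiadm -m iface -I iface-eth2 --op=update \
         -n iface.net_ifacename -v eth2

iscsiadm -m iface -I iface-eth3 --op=new
iscsiadm -m iface -I iface-eth3 --op=update \
         -n iface.net_ifacename -v eth3

# At login time, each iface yields its own session, and thus
# its own logical path, to the same target LUN
```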

> On July 22, 2008 you had written to the open-iscsi mailing list.  Here
> is a link to that post:
> 
> http://groups.google.com/group/open-iscsi/msg/f94fbc5f66202e61?
> 
> In that post, you mentioned using the following:
> 
> Portal: 172.23.10.242
> iface.eth0: 172.23.49.170
> iface.eth1: 172.23.49.171


This creates IP-based ifaces, which means two sessions would be
created.

One session will have 172.23.49.170 as the initiator IP and the other
session will have 172.23.49.171 as the initiator IP.

This is actually passed in as an iSCSI host attribute by the open-iscsi
userspace initiator to the kernel-mode iSCSI transport driver
(kernel/drivers/scsi/scsi_transport_iscsi.c).

These parameters and other such iface parameters are used to create a
unique iscsi session.
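For IP-based ifaces like the two above, a minimal sketch using the
addresses from your example (the iface names are assumptions) would be:

```shell
# Assign an initiator IP to each iface
iscsiadm -m iface -I iface.eth0 --op=update \
         -n iface.ipaddress -v 172.23.49.170
iscsiadm -m iface -I iface.eth1 --op=update \
         -n iface.ipaddress -v 172.23.49.171

# Discover the portal through both ifaces, then log in;
# one session is created per iface
iscsiadm -m discovery -t sendtargets -p 172.23.10.242 \
         -I iface.eth0 -I iface.eth1
iscsiadm -m node -p 172.23.10.242 --login
```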

> 
> What I'm trying to find out is a definite answer on how `iscsiadm -m
> iface` works --specifically with respect to linux routing.  ITEC would
> like to use the iface to allow for multiple sessions to connect to the
> same volume.  However, what we're having trouble with is understanding
> what happens from an iSCSI perspective when we use the ifaces.  In
> addition, we'd like to help Oracle understand this as well (if they in
> fact have any doubts).
> 
> Do you happen to have any connections with any of the open-iscsi
> developers?  If so, would you be able to pose a question to them on
> our behalf?
> 
> 1) iface networks - with regards to `iscsiadm -m iface` with software
> iSCSI initiators, was the design meant to work with two interfaces on
> the same network segment or with the interfaces being on different
> segments?  In the scenario where they are on the same network segment,
> standard linux routing would send any outbound traffic out whichever
> interface appears first in the list of routes.  In our situation eth2
> and eth3 being dedicated to ifaces eth2 and eth3 respectively.  When
> sessions are created out both ifaces and logged in, whenever I/O goes
> to
> eth2 (if standard linux routing is involved) all traffic would go out
> eth2 (since it happens to be the first that appears in my `netstat -rn`
> list).  However, when iface eth3 is used for I/O, (again, if standard
> linux routing is involved) wouldn't all outbound I/O attempt to be
> routed down eth2?
> 

Yes, all of this happens because the general routing rules apply.

You are seeing the above problem because the IPs are in the same
subnet.

You should really have two independent paths to the target portal to
achieve better performance.

For starters, you would need the initiator IPs to be in different
subnets, each with a route to the target IP.

Note that multipath-based failover will still work if the IPs are in
the same subnet, but balancing the load across both paths requires
special handling, unless the adapter you use supports offload, which
goes through a different stack.
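If you do need to keep both initiator IPs in the same subnet, source-based
(policy) routing is the usual special handling, since plain destination
routing always picks the first matching route. A rough sketch, assuming
the addresses from your example, that eth3 carries 172.23.49.171, and a
hypothetical gateway of 172.23.49.1:

```shell
# Give traffic sourced from eth3's IP its own routing table
# ("101 iscsi-eth3" is an arbitrary table number/name)
echo "101 iscsi-eth3" >> /etc/iproute2/rt_tables

# Local subnet and default route for that table, both via eth3
# (172.23.49.1 is a hypothetical gateway -- substitute your own)
ip route add 172.23.49.0/24 dev eth3 \
         src 172.23.49.171 table iscsi-eth3
ip route add default via 172.23.49.1 dev eth3 table iscsi-eth3

# Select that table whenever the source address is .171
ip rule add from 172.23.49.171 table iscsi-eth3

# Outbound packets sourced from .171 now leave via eth3
# instead of following the first route out eth2
```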


_______________________________________________
Linux-PowerEdge mailing list
[email protected]
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq