Thanks Patrick, please see inline.

On Wed, 2010-06-23 at 08:04 -0700, Patrick wrote:
> On Jun 23, 7:28 am, Christopher Barry
> <[email protected]> wrote:
> > This array has its own specific MPIO drivers, and does not support
> > DM-Multipath. I'm trying to get a handle on the differences in
> > redundancy provided by the various layers involved in the connection
> > from host to array, in a generic sense.
>
> What kind of array is it? Are you certain it "does not support"
> multipath I/O? Multipath I/O is pretty generic...
>
> > For simplicity, all ports are on the same subnet.
>
> I actually would not do that. The design is cleaner and easier to
> visualize (IMO) if you put the ports onto different subnets/VLANs.
> Even better is to put each one on a different physical switch so you
> can tolerate the failure of a switch.

Absolutely correct. What I was looking for were comparisons of the
methods below, and I wanted the subnet stuff out of the way while
discussing that.

> > scenario #1
> > Single (bonded) NIC, default iface, login to all controller portals.
>
> Here you are at the mercy of the load balancing performed by the
> bonding, which is probably worse than the load balancing performed at
> higher levels. But I admit I have not tried it, so if you decide to
> do some performance comparisons, please let me know what you
> find. :-)
>
> I will skip right down to...
>
> > scenario #4
> > Dual NIC, iface per NIC, MPIO driver, login to all controller
> > portals from each iface
>
> Why log into all portals from each interface? It buys you nothing and
> makes the setup more complex. Just log into one target portal from
> each interface and do multi-pathing among them. This will also make
> your automation (much) simpler.

Here I do not understand your reasoning. My understanding was that I
would need a session per iface to each portal in order to survive a
controller port failure. If this assumption is wrong, please explain.
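To make sure we are comparing the same thing, here is roughly what I
meant by scenario #4, as an untested sketch (the IQN and portal
addresses are made up):

    # create an iface per NIC and bind each one to its device
    iscsiadm -m iface -I iface0 --op=new
    iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
    iscsiadm -m iface -I iface1 --op=new
    iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1

    # discover through both ifaces, then log in to every portal from
    # each iface: 2 ifaces x 2 portals = 4 sessions
    iscsiadm -m discovery -t sendtargets -p 10.0.0.10 -I iface0 -I iface1
    iscsiadm -m node -T iqn.2010-06.com.example:array0 \
        -p 10.0.0.10 -I iface0 --login
    iscsiadm -m node -T iqn.2010-06.com.example:array0 \
        -p 10.0.0.11 -I iface0 --login
    iscsiadm -m node -T iqn.2010-06.com.example:array0 \
        -p 10.0.0.10 -I iface1 --login
    iscsiadm -m node -T iqn.2010-06.com.example:array0 \
        -p 10.0.0.11 -I iface1 --login

With all four sessions up, losing either a NIC or a controller port
still leaves a working path, which is why I thought the full mesh was
necessary.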
> Again, I would recommend assigning one subnet to each interface. It
> is hard to convince Linux to behave sanely when you have multiple
> interfaces connected to the same subnet. (Linux will tend to send all
> traffic for that subnet via the same interface. Yes, you can hack
> around this. But why?)
>
> In other words, I would do eth0 -> subnet 0 -> portal 0, eth1 ->
> subnet 1 -> portal 1, eth2 -> subnet 2 -> portal 2, etc. This is very
> easy to draw, explain, and reason about. Then set up multipath I/O
> and you are done.
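If I am reading you right, the per-subnet version collapses to a
single login per iface, something like this (again just a sketch with
made-up addresses; my array would use its vendor MPIO driver where I
show dm-multipath):

    # eth0 -> 10.0.0.0/24 -> portal 10.0.0.10
    # eth1 -> 10.0.1.0/24 -> portal 10.0.1.10
    iscsiadm -m node -T iqn.2010-06.com.example:array0 \
        -p 10.0.0.10 -I iface0 --login
    iscsiadm -m node -T iqn.2010-06.com.example:array0 \
        -p 10.0.1.10 -I iface1 --login

    # two sessions, one SCSI device each; the multipath layer folds
    # them into a single device
    iscsiadm -m session   # should list exactly two sessions
    multipath -ll         # should show one map with two paths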
> In fact, this is exactly what I am doing myself. I have multiple
> clients and multiple hardware iSCSI RAID units (Infortrend); each
> interface on each client and RAID connects to a single subnet. Then I
> am using cLVM to stripe among the hardware RAIDs. I am obtaining
> sustained read speeds of ~1200 megabytes/second (yes, sustained; no
> cache). Plus I have the redundancy of multipath I/O.
>
> Trying the port bonding approach is on my "to do" list, but this
> setup is working so well I have not bothered yet.
>
> - Pat

This is also something I am uncertain about. For instance, in
balance-alb mode each slave communicates with a given remote IP
consistently. With two slaves and two portals, how would the traffic
be apportioned? Would it write to both simultaneously? Could this
corrupt the disk in any way? Or would it always use only a single
slave/portal pair?
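For reference, the bonding setup I had in mind for scenario #1 is just
the stock module configuration, e.g.:

    # /etc/modprobe.d/bonding.conf (sketch; the mode and miimon values
    # are the usual defaults, nothing array-specific)
    alias bond0 bonding
    options bonding mode=balance-alb miimon=100

bond0 would then carry the single initiator address on the shared
subnet, with the default iface logging in to every portal.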