Re: [LARTC] bandwidth aggregation between 2 hosts in the same subnet
a way to split your backup problem into pieces and run them concurrently? That can be a much easier problem to tackle, given that it's trivial to add extra IP addresses to the hosts on each end, and presumably your higher-end Cisco gear will permit a load-balance algorithm other than a straight MAC address XOR. E.g., the 2960 I've got handy permits:

	slime(config)#port-channel load-balance ?
	  dst-ip       Dst IP Addr
	  dst-mac      Dst Mac Addr
	  src-dst-ip   Src XOR Dst IP Addr
	  src-dst-mac  Src XOR Dst Mac Addr
	  src-ip       Src IP Addr
	  src-mac      Src Mac Addr

so it's possible to get the IP address into the port selection math, and adding IP addresses is pretty straightforward.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, [EMAIL PROTECTED]
___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
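On the Linux side, adding the extra addresses is one command per alias. A minimal sketch, assuming an interface named eth0 and made-up example addresses (substitute your own):

```shell
# Add two extra source addresses to eth0 so that concurrent backup
# streams can bind to distinct local IPs; with a src-dst-ip (or
# src-ip) load-balance algorithm on the switch, the streams can then
# hash to different channel members.  Addresses here are examples.
ip addr add 192.168.10.11/24 dev eth0
ip addr add 192.168.10.12/24 dev eth0

# Verify both aliases are present.
ip addr show dev eth0
```

Each concurrent backup stream would then bind() to a different local address, so the switch's hash has distinct inputs to spread across the ports.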
Re: [LARTC] How to check an inactive slave in a bond?
The down side is that a failover will probably lose at least a few packets, and you'll have to arrange for your script (or whatever) to stop if you experience an actual failure.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, [EMAIL PROTECTED]
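One hedged way for such a script to watch slave state is to parse the bonding driver's status file, which on a live system lives at /proc/net/bonding/<bond>. A small sketch (the helper name is made up; the path is a parameter so the parsing can be exercised against a sample file):

```shell
# Hypothetical helper: print "slave: status" for each slave listed in
# a bonding status file.  Per-slave "MII Status:" lines follow each
# "Slave Interface:" line; the bond-level MII Status (which appears
# before any slave) is skipped.
bond_slave_status() {
    awk '/^Slave Interface:/ { slave = $3 }
         /^MII Status:/ && slave != "" { print slave ": " $3; slave = "" }' "$1"
}

# Demonstrate against a sample status file in the driver's format:
cat > /tmp/bond0.sample <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: down
EOF
bond_slave_status /tmp/bond0.sample
```

On a real system you would call it as `bond_slave_status /proc/net/bonding/bond0` and act (alert, stop the script) when an inactive slave reports "down".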
Re: [LARTC] Somewhat basic routing question
Hans du Plooy [EMAIL PROTECTED] wrote:

 [...] Will this work with two network cards, two private IPs, and two gateways in the same IP range? eth0 192.168.1.18 with gw 192.168.1.6, and eth1 192.168.1.17 with gw 192.168.1.1. The two gateways are NAT-ing firewalls; will this make a difference?

I don't know if the NAT business will make a difference, but I've set up multiple-network, multiple-gateway configurations more or less like this (substituting your own network values):

Configure policy routes such that responses to inbound traffic on the respective interfaces are routed back out over the same interface. For example:

	ip rule add from 10.176.13/24 table 50
	ip rule add from 10.176.14/24 table 60

For your purposes, "ip rule add iif ethX" may work better (since the network match won't necessarily segregate anything, as both of your interfaces are on the same network).

	ip route add table 50 10.176.13/24 dev ethX src 10.176.13.x
	ip route add table 50 default dev ethX src 10.176.13.x via 10.176.13.1

Where 10.176.13.1 is the gateway for that particular network (or interface, in your case), and 10.176.13.x is the host's IP address on that network. The other network, 10.176.14/24 on table 60 in this example, is configured similarly, but with the appropriate .14 network values.

A global default route can be left in the main routing table for traffic not originating inbound from 10.176.13 or 10.176.14 (or via the appropriate iif, depending on how you set it up). I think you'd need to test a bit to find the proper configuration, which may be hard via only remote access.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, [EMAIL PROTECTED]
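Applying the same pattern to the addresses in the question might look like the following untested sketch. Since both interfaces sit in 192.168.1.0/24, a network match cannot tell them apart, so this matches on the per-host source address instead; table numbers 50 and 60 are arbitrary:

```shell
# Replies sourced from eth0's address consult table 50,
# replies sourced from eth1's address consult table 60.
ip rule add from 192.168.1.18 table 50
ip rule add from 192.168.1.17 table 60

# Traffic that arrived via eth0 leaves via eth0's gateway...
ip route add table 50 default via 192.168.1.6 dev eth0 src 192.168.1.18
# ...and traffic that arrived via eth1 leaves via eth1's gateway.
ip route add table 60 default via 192.168.1.1 dev eth1 src 192.168.1.17
```

The main table keeps its usual default route for locally originated traffic that doesn't match either rule.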
Re: [LARTC] how can I monitor a (dumb) switch ?
Dmytro O. Redchuk [EMAIL PROTECTED] wrote:

 On Fri, Jun 03, 2005 at 11:24:13AM +0300, Radu CUGUT wrote:
  I have an ethernet LAN, made over dumb fast-ethernet switches (10/100mbit) without management, so there is no IP for the switches. What I want, if possible, is to find out if a switch is down or not.
 [...]
 Plug two more cards into your linux box, connect them both to the switch, make them interfaces of one bridge inside your linux box, and bring up STP there. I guess the bridge should detect a loop quickly and block one port. Then you'll be able to monitor the bridge's state.

I suspect that the bonding driver would also do what you're looking for. Docs can be found at:

http://sourceforge.net/projects/bonding

It can either monitor the link state, or issue ARP probes (as somebody else suggested) to check connectivity to a peer on the local network. Judging from my experience with managed switches, I suspect that the bonding driver (in active-backup mode, for example) would detect link failure faster than STP.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, [EMAIL PROTECTED]
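A minimal sketch of the ARP-probe variant, assuming two spare NICs named eth1/eth2 and a reachable peer at 192.168.0.254 (the interval and addresses are illustrative, not prescriptive):

```shell
# Active-backup bond that ARP-probes a peer on the far side of the
# dumb switch every 100 ms.  If replies stop arriving on the active
# slave, the driver fails over and logs it, which doubles as a
# "switch path is dead" signal.
modprobe bonding mode=active-backup arp_interval=100 arp_ip_target=192.168.0.254
ip link set bond0 up
ifenslave bond0 eth1 eth2

# Current state (active slave, per-slave link status) is visible here:
cat /proc/net/bonding/bond0
```

Watching the kernel log (or the status file above) for failover events then tells you when the path through the switch went down.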
Re: [LARTC] One 200Mbps virtual link between 2 ethernet adaptators of 2 linux boxes.
Francois [EMAIL PROTECTED] wrote:

              ---
             | A |
              ---
               |
            ______
           |switch|
            ------
            |    |
   ---      |    |      ---
  | B |eth0-+    +-eth0| C |
  |   |eth1--------eth1|   |
   ---                  ---

 Machine A: (192.168.1.10) PC used to configure B and C (the only one that has a screen)
 Machines B and C: very simple bonding configuration:

	modprobe bonding mode=1
	ip addr add dev bond0 192.168.1.1/24 brd +   # .1 for B, .2 for C
	ip link set bond0 up
	ip link set eth0 up
	ip link set eth1 up
	ifenslave bond0 eth0 eth1

 The bad thing is: B pinging C has 50% packet loss, which would mean (assuming the round robin of the module works) that a route from one of the interfaces doesn't reach C (pinging from A to 192.168.1.1 also gives 50%). Anyone have an idea on this matter?

First, if you set up bonding this way, check to see if the slaves have routes that supersede the route for the bonding master device. The slaves should not have any routes at all; all routing decisions are made against the master device. When bonding is set up by hand, the slaves can end up with routes if they are up and active prior to being enslaved. It's not generally a problem when bonding is set up at boot time.

Assuming for the moment that the routing is ok, I'm also curious as to which link loses packets (the eth0s, through the switch, or the eth1s, with no switch). Looking at /var/log/messages for information from the bonding driver would also be useful; you might also look into enabling some link monitoring (just in case).

Lastly, trying to get a single TCP connection to, essentially, see N interfaces' worth of throughput is a surprisingly difficult problem. This is a topic that comes up fairly regularly on the bonding-devel list; below is an article I posted last fall. It references a discussion from a few years ago about round robin performance as it scales up to 4 adapters; that was done with 100 Mb/sec hardware, but the same would apply to gigabit links.
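One hedged way to check for (and clear) the stale slave routes described above, after a by-hand setup:

```shell
# Slaves that were up and addressed before being enslaved can retain
# their own routes, which then win over bond0's route.  Inspect them:
ip route show dev eth0
ip route show dev eth1

# If either slave shows routes, flush them; routing should go
# through the master only.
ip route flush dev eth0
ip route flush dev eth1

# Only bond0 should now carry the 192.168.1.0/24 route.
ip route show dev bond0
```

Setting the addresses on bond0 only after enslaving the interfaces (or configuring the bond at boot) avoids the problem in the first place.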
As somebody else pointed out, when round robin was originally implemented in bonding, the state of the art was 10 Mb/sec with one packet per interrupt, and reordering wasn't a problem. Today, with adapters that coalesce packets or drivers that implement NAPI (which does the same thing), it's very difficult to arrange for packets to all arrive in the proper order. My comments below about balance-alb not allowing a single TCP connection to see more than one interface's worth of throughput also apply to the other balance modes in bonding (other than round robin).

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, [EMAIL PROTECTED]

To: Shlomi Yaakobovich [EMAIL PROTECTED]
cc: Tim Mattox [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re: [Bonding-devel] bonding and appletalk
In-Reply-To: Message from Shlomi Yaakobovich [EMAIL PROTECTED] of Tue, 05 Oct 2004 14:07:39 +0200. [EMAIL PROTECTED]
X-Mailer: MH-E 7.4.3; nmh 1.0.4; GNU Emacs 21.3.1
Date: Tue, 05 Oct 2004 09:59:42 -0700
From: Jay Vosburgh [EMAIL PROTECTED]

Shlomi Yaakobovich [EMAIL PROTECTED] wrote:

 Thanks for the reply, the problem was indeed that the switch's 2 ports were not configured for load-sharing (it's an Extreme Networks 7i switch). I am giving up on using mode=0 for this type of connection for now, since it requires too much external support; mode=6 is easier to implement on a normal network. I suppose that mode=0 works a bit faster than mode=6; is there any benchmark on the difference? Do you guys have any idea what the performance effects are?

The summary: round-robin (mode 0) can provide a single TCP connection with more than one interface's worth of throughput, but will generally never let you reach the maximum throughput of the bond as a whole, whereas balance-alb (mode 6) will never let a single TCP connection (peer host, really) use more than one interface's worth of throughput, but it can allow you to use the overall max throughput of the bond (to multiple destinations).
And, it depends on what you mean by "faster." The round-robin mode (mode 0) simply stripes all traffic across the interfaces, regardless of where it's going. For the case of a unidirectional TCP transfer, this will generally result in many, many packets received out of order. This in turn triggers TCP's congestion control algorithms (out of order packets are interpreted as lost packets, or late packets). This can be mitigated somewhat by adjusting tcp_reordering, but you're not likely to see the full bandwidth utilized by one TCP connection. This was discussed in depth on the list some time ago; see the archives at:

http://sourceforge.net/mailarchive/forum.php?thread_id=1669977&forum_id=2094

and look for messages titled "trunking performance." The tcp_reordering value, btw, can be changed via /proc/sys/net/ipv4
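As an illustrative sketch of the tcp_reordering adjustment mentioned above (the raised value of 10 is an arbitrary example, not a recommendation; the sysctl name is net.ipv4.tcp_reordering):

```shell
# Show the current reordering tolerance (historically defaults to 3:
# three out-of-order segments are treated as a loss signal).
sysctl net.ipv4.tcp_reordering

# Raise the tolerance so round-robin striping across slaves is less
# likely to trigger spurious fast retransmits.  Requires root.
sysctl -w net.ipv4.tcp_reordering=10
```

Raising the value trades faster genuine-loss detection for fewer false congestion signals, so it only helps when reordering, not loss, is the dominant effect.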
Re: [LARTC] bonding problem with arp-monitoring
Anton Glinkov [EMAIL PROTECTED] wrote:

First, just FYI, there is a sourceforge site just for bonding:

http://sourceforge.net/projects/bonding

And the associated mailing list: [EMAIL PROTECTED]

 I have two linux machines connected via 2 dsl lines (bonded):

	192.168.0.1-eth0-dsl---2Mbit---dsl-eth0-192.168.0.2
	          ^-eth1-dsl---2Mbit---dsl-eth1-^

 so the final figure is something like this:

	192.168.0.1-bond0---4Mbit---bond0-192.168.0.2

 [...] Is this a bug, or am I doing something wrong?

It's hard to say without some more information; can you send your /var/log/messages? Ideally, this would include messages from setup and from when you test the failure. If it's really big, then feel free to send it off list. Also, what are your network cards and dsl switches?

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, [EMAIL PROTECTED]