I read the mail and did a first round of tests before I could check the settings of the switch. Here is the transcript of the tests with balance-rr.
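For reference, the bond0-under-br0 topology being tested can be sketched roughly like this (a hypothetical Ubuntu /etc/network/interfaces fragment; the interface names, slave NICs, and placeholder address are assumptions based on the setup described below, not the reporter's actual configuration):

```
# Hypothetical sketch: two NICs bonded in balance-rr,
# with the bond enslaved to the bridge br0 that the
# LXC container and KVM guests attach to.
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100

auto br0
iface br0 inet static
    address xxx.xxx.xxx.87
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
```

In balance-rr the bond round-robins frames across both slaves, so the same source MAC appears alternately on two switch ports; that is the kind of setup where switch-side MAC/ARP learning can get confused, which is why the switch configuration is worth ruling out.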
Setup:
  Container    : LXC container with a fixed IP
  VMhost       : the host where the LXC container runs, configured with br0 and bond0
  remote_host  : another host on the same bridged subnet

Container:
date; ping xxx.xxx.xxx.87
Mon May 30 15:40:49 UTC 2011
PING xxx.xxx.xxx.87 (xxx.xxx.xxx.87): 48 data bytes
60 bytes from xxx.xxx.xxx.92: Destination Host Unreachable
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src             Dst
 4  5  00 4c00  0000   0 0040  40  01 cc4e xxx.xxx.xxx.92 xxx.xxx.xxx.87
60 bytes from xxx.xxx.xxx.92: Destination Host Unreachable
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src             Dst
 4  5  00 4c00  0000   0 0040  40  01 cc4e xxx.xxx.xxx.92 xxx.xxx.xxx.87
60 bytes from xxx.xxx.xxx.92: Destination Host Unreachable
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src             Dst
 4  5  00 4c00  0000   0 0040  40  01 cc4e xxx.xxx.xxx.92 xxx.xxx.xxx.87
^C--- xxx.xxx.xxx.87 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss

VMhost:
date; ping xxx.xxx.xxx.92
Mon May 30 15:41:14 EDT 2011
PING xxx.xxx.xxx.92 (xxx.xxx.xxx.92) 56(84) bytes of data.
64 bytes from xxx.xxx.xxx.92: icmp_req=9 ttl=64 time=10.1 ms
64 bytes from xxx.xxx.xxx.92: icmp_req=10 ttl=64 time=0.087 ms
64 bytes from xxx.xxx.xxx.92: icmp_req=11 ttl=64 time=0.076 ms
^C
--- xxx.xxx.xxx.92 ping statistics ---
11 packets transmitted, 3 received, 72% packet loss, time 10004ms
rtt min/avg/max/mdev = 0.076/3.423/10.108/4.727 ms

Container:
date; ping xxx.xxx.xxx.87
Mon May 30 15:41:41 UTC 2011
PING xxx.xxx.xxx.87 (xxx.xxx.xxx.87): 48 data bytes
60 bytes from xxx.xxx.xxx.92: Destination Host Unreachable
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src             Dst
 4  5  00 4c00  0000   0 0040  40  01 cc4e xxx.xxx.xxx.92 xxx.xxx.xxx.87
60 bytes from xxx.xxx.xxx.92: Destination Host Unreachable
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src             Dst
 4  5  00 4c00  0000   0 0040  40  01 cc4e xxx.xxx.xxx.92 xxx.xxx.xxx.87
60 bytes from xxx.xxx.xxx.92: Destination Host Unreachable
Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src             Dst
 4  5  00 4c00  0000   0 0040  40  01 cc4e xxx.xxx.xxx.92 xxx.xxx.xxx.87
^C--- xxx.xxx.xxx.87 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss

remote_host:
date; ping xxx.xxx.xxx.92
Monday, 30 May 2011, 15:42:03 (UTC+0200)
PING xxx.xxx.xxx.92 (xxx.xxx.xxx.92) 56(84) bytes of data.
64 bytes from xxx.xxx.xxx.92: icmp_req=1 ttl=64 time=284 ms
64 bytes from xxx.xxx.xxx.92: icmp_req=2 ttl=64 time=125 ms
64 bytes from xxx.xxx.xxx.92: icmp_req=3 ttl=64 time=134 ms
^C
--- xxx.xxx.xxx.92 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 125.282/181.561/284.952/73.204 ms

Container:
Mon May 30 15:42:24 UTC 2011
PING xxx.xxx.xxx.87 (xxx.xxx.xxx.87): 48 data bytes
56 bytes from xxx.xxx.xxx.87: icmp_seq=0 ttl=64 time=141.506 ms
56 bytes from xxx.xxx.xxx.87: icmp_seq=1 ttl=64 time=153.311 ms
56 bytes from xxx.xxx.xxx.87: icmp_seq=2 ttl=64 time=124.973 ms
^C--- xxx.xxx.xxx.87 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 124.973/139.930/153.311/11.622 ms

I will send you the dump data directly. Now that I have full access to our switch, I will do more tests tomorrow. As far as I know, the switch is doing automatic trunking, so the switch should not be an issue.

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/785668

Title:
  bonding inside a bridge does not update ARP correctly when bridged net accessed from within a VM

--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs