Appreciate the response. I have since disabled ipv6 configuration on the hardware nodes to work around the issue. I can still use ipv6 through the bridged veth interfaces from inside the CTs without any issues, and ipv4 works fine on both the hardware nodes and the containers.
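For reference, this is roughly what I mean by disabling ipv6 on the hardware nodes; a sketch only, using the standard kernel sysctls, so adjust for your distribution's conventions:

```shell
# Disable ipv6 on the hardware node only; the CTs keep ipv6 via their
# bridged veth interfaces. Runtime change (does not survive a reboot):
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# To persist across reboots, the same keys can go in /etc/sysctl.conf:
#   net.ipv6.conf.all.disable_ipv6 = 1
#   net.ipv6.conf.default.disable_ipv6 = 1
```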
This particular issue looks to be kernel related rather than OpenVZ related. When the interfaces are in this broken state, traffic never leaves the interface even though the kernel reports an active/available route for the traffic.

https://lkml.org/lkml/2012/3/25/13

Axton Grams

On Tue, Mar 4, 2014 at 10:16 PM, Kir Kolyshkin <k...@openvz.org> wrote:
> In general, network related issues are not among the easiest to reply to,
> because unless this is something very obvious, it requires some
> considerable time to understand the specifics of the setup, and no one
> has that time. Me included.
>
> As much as I wish to help, I think the best way for you is to get an
> OpenVZ support contract. That way you'll have engineers who deal with
> such stuff on a daily basis, and they will surely be able to help (or
> find a bug and forward it to developers). It's here:
>
> http://www.parallels.com/support/virtualization-suite/openvz/
>
> PS please understand I don't want to sell anything, I just want to help.
>
>
> On 02/26/2014 08:50 PM, Axton wrote:
>
> A little more information to add.
>
> *I rebooted the server, which resulted in the state where I cannot
> reach ipv6 devices on the other side of my router:*
> *root@cluster-02:~# ping6 google.com*
> PING google.com(atl14s08-in-x09.1e100.net) 56 data bytes
> ping: sendmsg: Network is down
> ping: sendmsg: Network is down
> ^C
> --- google.com ping statistics ---
> 2 packets transmitted, 0 received, 100% packet loss, time 999ms
>
>
> *Here is the route and neighborhood information while in this broken
> state:*
> *root@cluster-02:~# ip -6 route*
> 2001:xyz:abc:40::/64 dev vmbr40 proto kernel metric 256 expires 2147157sec mtu 1500 advmss 1440 hoplimit 0
> fe80::1 dev venet0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev vmbr30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev vmbr40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1.40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1.30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev venet0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev veth10000.40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev veth10000.30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> default via fe80::225:90ff:fe09:9b81 dev vmbr40 proto kernel metric 1024 expires 11sec mtu 1500 advmss 1440 hoplimit 64
> *root@cluster-02:~# ip -6 neigh*
> fe80::225:90ff:fe09:9b81 dev vmbr30 lladdr 00:25:90:09:9b:81 router STALE
> 2001:xyz:abc:40::10 dev vmbr40 lladdr 00:25:90:09:9b:81 router REACHABLE
>
>
> *Just to reconfirm, things are still not working after printing the
> route/neighborhood information:*
> *root@cluster-02:~# ping6 google.com*
> PING google.com(atl14s08-in-x09.1e100.net) 56 data bytes
> ping: sendmsg: Network is down
> ping: sendmsg: Network is down
> ^C
> --- google.com ping statistics ---
> 2 packets transmitted, 0 received, 100% packet loss, time 999ms
>
>
> *I delete the default ipv6 route:*
> *root@cluster-02:~# ip -6 route del default via fe80::225:90ff:fe09:9b81 dev vmbr40*
>
>
> *Still unreachable:*
> *root@cluster-02:~# ping6 google.com*
> connect: Network is unreachable
>
>
> *Here is the route and neighborhood information after using ip to
> delete the route:*
> *root@cluster-02:~# ip -6 route*
> 2001:xyz:abc:40::/64 dev vmbr40 proto kernel metric 256 expires 2147157sec mtu 1500 advmss 1440 hoplimit 0
> fe80::1 dev venet0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev vmbr30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev vmbr40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1.40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1.30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev venet0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev veth10000.40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev veth10000.30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> default via fe80::225:90ff:fe09:9b81 dev vmbr40 proto kernel metric 1024 expires 10sec mtu 1500 advmss 1440 hoplimit 64
> *root@cluster-02:~# ip -6 neigh*
> fe80::225:90ff:fe09:9b81 dev vmbr30 lladdr 00:25:90:09:9b:81 router STALE
> 2001:xyz:abc:40::10 dev vmbr40 lladdr 00:25:90:09:9b:81 router STALE
> fe80::225:90ff:fe09:9b81 dev vmbr40 lladdr 00:25:90:09:9b:81 router STALE
>
>
> *I then attempt to re-add the route (though it does not appear to have
> been deleted):*
> *root@cluster-02:~# ip -6 route add default via fe80::225:90ff:fe09:9b81 dev vmbr40*
> RTNETLINK answers: File exists
>
>
> *I now attempt to access the machine on the other side of my router and
> things work:*
> *root@cluster-02:~# ping6 google.com*
> PING google.com(atl14s08-in-x01.1e100.net) 56 data bytes
> 64 bytes from atl14s08-in-x01.1e100.net: icmp_seq=1 ttl=57 time=59.7 ms
> 64 bytes from atl14s08-in-x01.1e100.net: icmp_seq=2 ttl=57 time=61.1 ms
> ^C
> --- google.com ping statistics ---
> 2 packets transmitted, 2 received, 0% packet loss, time 1000ms
> rtt min/avg/max/mdev = 59.782/60.448/61.114/0.666 ms
>
>
> *Here is the route and neighborhood information after the changes above:*
> *root@cluster-02:~# ip -6 neigh*
> 2001:xyz:abc:40::10 dev vmbr40 lladdr 00:25:90:09:9b:81 router REACHABLE
> fe80::225:90ff:fe09:9b81 dev vmbr30 lladdr 00:25:90:09:9b:81 router STALE
> fe80::225:90ff:fe09:9b81 dev vmbr40 lladdr 00:25:90:09:9b:81 router REACHABLE
> *root@cluster-02:~# ip -6 route*
> 2001:xyz:abc:40::/64 dev vmbr40 proto kernel metric 256 expires 2147157sec mtu 1500 advmss 1440 hoplimit 0
> fe80::1 dev venet0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev vmbr30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev vmbr40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1.40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev eth1.30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev venet0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev veth10000.40 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> fe80::/64 dev veth10000.30 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
> default via fe80::225:90ff:fe09:9b81 dev vmbr40 proto kernel metric 1024 mtu 1500 advmss 1440 hoplimit 64
>
>
> On Wed, Feb 26, 2014 at 12:05 AM, Axton <axton.gr...@gmail.com> wrote:
>
>> *Synopsis: *Servers are connected to a series of vlans. When a server
>> boots with vz enabled in the inittab, the HN cannot reach routed ipv6
>> hosts. VEs can reach routed ipv6 hosts.
>>
>> I have tried to narrow down the cause of the issue to the extent that I
>> can, so the information presented below uses the fewest variables
>> required to illustrate the issues I see. In practice, these servers are
>> connected to more than two vlans and there are many CTs on each HN,
>> which have different combinations of vlan access. For the purposes of
>> this conversation I am only referencing 2 vlans since I can
>> consistently reproduce the issue with just 2 vlans.
>
> [snipped]
>
>> Any help is appreciated.
>>
>> Thanks,
>> Axton Grams
>
>
> _______________________________________________
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
_______________________________________________
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users