I think I have this working by using proxyarp instead of bridging.

On the EC2 VM: leave lxdbr0 unconfigured. Then do:

sysctl net.ipv4.conf.all.forwarding=1
sysctl net.ipv4.conf.lxdbr0.proxy_arp=1
ip route add 10.0.0.40 dev lxdbr0
ip route add 10.0.0.41 dev lxdbr0
# where 10.0.0.40 and 10.0.0.41 are the IP addresses of the containers
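
(Quick sanity check on the VM that those took effect - and note that neither the sysctls nor the routes persist across a reboot, so they'll want putting in /etc/sysctl.d and wherever your distro keeps static routes:)

sysctl net.ipv4.conf.all.forwarding net.ipv4.conf.lxdbr0.proxy_arp
ip route show dev lxdbr0
# both sysctls should read 1, and there should be one route per container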

The containers are statically configured with those IP addresses, and 10.0.0.1 as gateway.
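
(For reference, inside each container that's just an ordinary static config; with ifupdown it'd be something along these lines - I'm assuming a /24 here, adjust the netmask to match your VPC subnet:)

# /etc/network/interfaces in the 10.0.0.40 container (netmask assumed /24)
auto eth0
iface eth0 inet static
    address 10.0.0.40
    netmask 255.255.255.0
    gateway 10.0.0.1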

This is sufficient to allow connectivity between the containers and other VMs in the same VPC - yay!
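
(Easy to check from inside a container - 10.0.0.25 below is just a stand-in for another VM's address:)

ping -c 3 10.0.0.25   # some other VM in the same VPC (made-up address)
ip neigh
# 10.0.0.1 should show up resolved to the host's lxdbr0 MAC - that's the proxy ARP at work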

At this point, the containers *don't* have connectivity to the outside world. I can see the packets being sent out with the correct source IP address (the container's) and MAC address (the EC2 VM's), so I presume that EC2's NAT only works with the instance's primary IP address - which is reasonable if it's 1:1 NAT without overloading.
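
(That's easy to see with tcpdump on the VM's outer interface, e.g.:)

tcpdump -ni eth0 host 10.0.0.40
# requests go out with src 10.0.0.40, but no replies ever come back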

So there's also a need for iptables rules to NAT the containers' addresses to the EC2 VM's address when talking to the outside world:

iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o eth0 -j MASQUERADE
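
The first rule exempts intra-VPC traffic from NAT (so other VMs still see the containers' real addresses); everything else falls through to the MASQUERADE rule and leaves as the VM's primary address. The counters make it easy to check which rule is matching - and the rules need saving somewhere (e.g. iptables-persistent) to survive a reboot:

iptables -t nat -L POSTROUTING -v -n
# intra-VPC traffic should hit the ACCEPT rule, everything else MASQUERADE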

And hey presto: containers with connectivity, albeit fairly heavily frigged.

But this is quite a useful outcome. You can run a single EC2 VM with multiple containers on it for separate services, each reached via its own VPC IP address as if it were a separate VM, albeit without its own public IP address.

Regards,

Brian.