Hi,

I sent a pull request which allows overriding the interface in multicast
mode: https://github.com/lxc/lxd/pull/3210
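If that change lands as proposed, picking the tunnel interface should be a
one-line addition to the network config. A sketch only; the exact config key
(assumed here to be tunnel.NAME.interface) depends on what gets merged, so
check the PR and doc/networks.md for the final name:

```shell
# Sketch, assuming the PR adds a tunnel.NAME.interface config key to
# force the multicast VXLAN tunnel onto a specific host interface.
lxc network create vxlan100 ipv4.address=none ipv6.address=none \
    tunnel.vxlan100.protocol=vxlan \
    tunnel.vxlan100.id=100 \
    tunnel.vxlan100.interface=eth1
```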
When writing that code, I did notice that my earlier implementation always
selected the default interface for those, which explains why no amount of
routing trickery would help.

Stéphane

On Sun, Apr 23, 2017 at 04:36:43PM -0400, Ron Kelley wrote:
> Thanks for the speedy reply! From my testing, the VXLAN tunnel always seems
> to use eth0. After running the “ip -4 route add” command per your note
> below, I disabled eth1 on one of the hosts but was still able to ping
> between the two containers. I re-enabled that interface and disabled eth0;
> the ping stopped. It seems the VXLAN tunnel is bound to eth0.
>
> By chance, is there a workaround to make this work properly? I also tried
> using the macvlan interface type specifying a VXLAN tunnel interface, and
> it would not work either. For clarity, this is what I did:
>
> ip link add vxlan500 type vxlan group 239.0.0.1 dev eth1 dstport 0 id 500
> ip route -4 add 239.0.0.1 eth1
> <edit the LXD default profile; set the nictype to “macvlan” and the parent
> to “vxlan500”>
>
> I was hoping a raw VXLAN interface would work instead of using the LXD
> create command.
>
> -Ron
>
> > On Apr 23, 2017, at 4:18 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> >
> > Hi,
> >
> > VXLAN in multicast mode (as is used in your case), when no multicast
> > address is specified, will use 239.0.0.1.
> >
> > This means that whatever route you have to reach "239.0.0.1" will be
> > used by the kernel for the VXLAN tunnel, or so I would expect.
> >
> > Does:
> >   ip -4 route add 239.0.0.1 dev eth1
> >
> > cause the VXLAN traffic to now use eth1?
> >
> > If it doesn't, that would suggest that the multicast VXLAN interface
> > does in fact get tied to a particular parent interface, and we should
> > therefore add an option to LXD to let you choose that interface.
> >
> > Stéphane
> >
> > On Sun, Apr 23, 2017 at 04:04:03PM -0400, Ron Kelley wrote:
> >> Greetings all.
> >>
> >> I am following Stéphane’s excellent guide on using multicast VXLAN with
> >> LXD (https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/).
> >> In my lab, I have set up a few servers running Ubuntu 16.04 with LXD
> >> 2.12 and multiple interfaces (eth0, eth1, eth2). My goal is to set up a
> >> multi-tenant computing solution using VXLAN to separate network
> >> traffic. I want to dedicate eth0 as the mgmt-only interface and use
> >> eth1 (or other additional interfaces) as customer-only interfaces. I
> >> have read a number of guides but can’t find anything that clearly
> >> spells out how to create bridged interfaces using eth1, eth2, etc. for
> >> LXD.
> >>
> >> I can get everything working using a single “eth0” interface on my LXD
> >> hosts with the following commands:
> >> -----------------------------------------------------------
> >> lxc network create vxlan100 ipv4.address=none ipv6.address=none \
> >>     tunnel.vxlan100.protocol=vxlan tunnel.vxlan100.id=100
> >> lxc launch ubuntu: testvm01
> >> lxc network attach vxlan100 testvm01
> >> -----------------------------------------------------------
> >>
> >> All good so far. I created two test containers running on separate LXD
> >> servers using the above VXLAN ID and gave each a static IP address
> >> (i.e. 10.1.1.1/24 and 10.1.1.2/24). Both can ping back and forth. 100%
> >> working.
> >>
> >> The next step is to use eth1 instead of eth0 on my LXD servers, but I
> >> can’t find a keyword in the online docs that specifies which interface
> >> to bind (https://github.com/lxc/lxd/blob/master/doc/networks.md).
> >>
> >> Any pointers/clues?
> >>
> >> Thanks,
> >>
> >> -Ron
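For anyone wanting to confirm the binding behaviour discussed above before
the LXD-side fix is available, it can be checked by hand outside of LXD. A
diagnostic sketch, run as root, using the interface names and VXLAN ID from
the thread:

```shell
# Create a multicast VXLAN explicitly bound to eth1 (as in Ron's test),
# then confirm which parent device the kernel tied it to.
ip link add vxlan500 type vxlan group 239.0.0.1 dev eth1 dstport 0 id 500
ip link set vxlan500 up

# The detailed link output should show "group 239.0.0.1 dev eth1";
# if it shows a different device, the tunnel is bound elsewhere.
ip -d link show vxlan500

# Watch eth1 for encapsulated traffic while pinging between containers.
# With "dstport 0" the kernel uses its legacy default VXLAN port, 8472.
tcpdump -ni eth1 udp port 8472
```

If the tcpdump on eth1 stays silent while pings succeed, the tunnel traffic
is leaving via another interface (likely the default-route one), which
matches the behaviour Ron reported.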
_______________________________________________
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users