I use plain LXC, not LXD. Is ipvlan supported there?
Also, my containers have public IPs on the same network as the host. This is
why I cannot use NAT.
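For reference, recent plain LXC does ship an ipvlan network type (it appears in the lxc.container.conf(5) man page for LXC 4.x; verify against your installed version). A config sketch mirroring the LXD example below, with the address and parent interface made up for illustration, might look like:

    # /var/lib/lxc/tiny/config (fragment) -- assumes LXC 4.0+ with ipvlan support
    lxc.net.0.type = ipvlan
    lxc.net.0.ipvlan.mode = l3s            # l3 / l3s / l2; check the man page for your version
    lxc.net.0.link = eth0                  # parent interface on the host
    lxc.net.0.name = eth0
    lxc.net.0.flags = up
    lxc.net.0.ipv4.address = 10.0.3.101/32

You would still set /etc/resolv.conf and the default route inside the container by hand, as described below for the LXD case.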




On Fri, Mar 20, 2020 at 12:02 AM Fajar A. Nugraha <l...@fajar.net> wrote:

> On Thu, Mar 19, 2020 at 12:02 AM Saint Michael <vene...@gmail.com> wrote:
> >
> > The question is: how do we share the networking from the host to the
> > containers, all of it? Each container will use one IP, but they could see
> > all the IPs on the host. This will solve the issue, since a single network
> > interface, with a single MAC address, can be associated with hundreds of
> > IP addresses.
>
> If you mean "how can a container have its own IP on the same network
> as the host, while also sharing the host's MAC address", there are
> several ways.
>
> The most obvious one is NAT: you NAT each of the host's IP addresses to
> the corresponding VM.
>
>
> A new-ish (but somewhat cumbersome) method is to use ipvlan:
> https://lxd.readthedocs.io/en/latest/instances/#nictype-ipvlan
>
> e.g.:
>
> # lxc config show tiny
> ...
> devices:
>   eth0:
>     ipv4.address: 10.0.3.101
>     name: eth0
>     nictype: ipvlan
>     parent: eth0
>     type: nic
>
> set /etc/resolv.conf on the container manually, and disable network
> interface setup inside the container. You'd end up with something like
> this inside the container:
>
> tiny:~# ip ad li eth0
> 10: eth0@if65: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP,M-DOWN> mtu 1500
> qdisc noqueue state UNKNOWN qlen 1000
> ...
>     inet 10.0.3.101/32 brd 255.255.255.255 scope global eth0
> ...
>
> tiny:~# ip r
> default dev eth0
>
>
> Other servers on the network will see the container using the host's MAC
>
> # arp -n 10.0.3.162    <=== the host
> Address       HWtype  HWaddress          Flags Mask  Iface
> 10.0.3.162    ether   00:16:3e:77:1f:92  C           eth0
>
> # arp -n 10.0.3.101    <=== the container
> Address       HWtype  HWaddress          Flags Mask  Iface
> 10.0.3.101    ether   00:16:3e:77:1f:92  C           eth0
>
>
> If you use plain lxc instead of lxd, look for the equivalent configuration
> keys.
>
> --
> Fajar
> _______________________________________________
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>