[lxc-users] Can I setup a private nat ipv4 and a public ipv6 address at same time for a lxc2 container?

2017-05-31 Thread littlebat
Hi,
Thanks for all of your work building such a cool thing - LXC.

I have studied my question for several days and searched many online
resources, but haven't resolved it. The details are long, so I describe a
brief version below:

I have a Debian 9 host server with LXC 2 installed. The host server has
only one public IPv4 address, suppose it is 8.8.8.8, and a public /64 IPv6
subnet, suppose it is 8:8:8:8::/64; the host's eth0 IPv6 address is
8:8:8:8::1/64.

My goal is to build an unprivileged LXC container with a private NATted
IPv4 address, suppose it is 10.1.0.10, and use IP forwarding so the
container can be reached from the internet via the public IPv4 address
plus a port (suppose 8.8.8.8: forwards to/from 10.1.0.10:22). At the same
time, I want to assign the container a public IPv6 address or IPv6 subnet
(a /112 - can that be publicly reachable?), so I can reach the container
from the internet over public IPv6 (suppose 8:8:8:8::10/64 port 22, or
8:8:8:8::10/112 port 22?). To simplify the question, suppose I only assign
a single public IPv6 address (not an IPv6 subnet) to the container.

Until today, I could only set up both a private NATted IPv4 (10.1.0.10)
and a private NATted IPv6 (8:8:8:8::10/112) for the container, enable IPv4
and IPv6 forwarding in /etc/sysctl.conf, and use iptables and ip6tables to
forward public traffic to and from the container (8.8.8.8: <->
10.1.0.10:22, 8:8:8:8::1/64 port <-> 8:8:8:8::10/112 port 22). This is
done by creating a "2. independent bridge" (a different bridge out of thin
air; you link your containers together on this bridge, but use forwarding
to get traffic out to the internet or into it - debian wiki:
https://wiki.debian.org/LXC/SimpleBridge). Reference: LXC host featuring
IPv6 connectivity,
https://blog.cepharum.de/en/post/lxc-host-featuring-ipv6-connectivity.html
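The forwarding rules look roughly like this (a sketch only; the external
interface name eth0 and the public port 2222 are placeholder values):

# Sketch: eth0 and public port 2222 are assumed values.
# IPv4: forward a public port to the container's SSH, and NAT replies.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 \
  -j DNAT --to-destination 10.1.0.10:22
iptables -t nat -A POSTROUTING -s 10.1.0.0/24 -o eth0 -j MASQUERADE
iptables -A FORWARD -d 10.1.0.10 -p tcp --dport 22 -j ACCEPT
# IPv6 (NAT66, as in the current setup): same idea with ip6tables.
ip6tables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 \
  -j DNAT --to-destination [8:8:8:8::10]:22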

Alternatively, I can create a "1. host-shared bridge" (a bridge on top of
your main network interface which holds both the host's IP and the
containers' IP addresses - debian wiki:
https://wiki.debian.org/LXC/SimpleBridge). Then I can assign a public IPv6
address to the container. But in that case I can't assign a private NATted
IPv4 address to the container, so there is no way to reach the container
publicly over IPv4 (because the sole public IPv4 address is only available
on the host's network card).
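A sketch of the combined setup I am trying to reach (hypothetical LXC 2.x
container config; the bridge names lxcbr0 and br0 are assumptions, and
whether this actually works is exactly my question below):

# Hypothetical container config sketch - not verified to work.
# NIC 1: private NATted IPv4 on the independent bridge.
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.ipv4 = 10.1.0.10/24
lxc.network.ipv4.gateway = 10.1.0.1
# NIC 2: public IPv6 on the host-shared bridge.
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv6 = 8:8:8:8::10/64
lxc.network.ipv6.gateway = 8:8:8:8::1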

My questions are:
1. Can I set up a private NATted IPv4 and a public IPv6 address at the
same time for an LXC 2 container?

2. If so, how?
Any idea or link to an online resource is welcome.

Thanks.

-

Dashing Meng

Re: [lxc-users] Forwarding DNS requests to the host /etc/hosts file

2017-05-31 Thread Mark Constable

On 01/06/17 02:34, Adil Baig wrote:

lxc config device add mycontainer etchosts disk path=/etc/hosts source=/etc/hosts


Option 1 is very cool! I tried it and it works.


Yes, a good hint to know about, thanks simplyadilb.


I am more interested in option 2, as it seems more future-proof as we
move away from a simple hosts file. Any suggestions on how to configure
an internal DNS? Do I need to start another dnsmasq instance? Can I
reuse the default one? How would the container relay DNS requests to the
host?

What I do on my internal testing LAN is to set up one container with a
real DNS server (pdns + pdns_recursor) with a web frontend, and I point
ALL my local computers' and containers' /etc/resolv.conf at this nameserver
and control all local LAN DNS resolution in one place. Because pdns can
use a MySQL backend, I intend to inject entries directly into the domains
and records tables during container setup.
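For example, something like this (a sketch; it assumes the stock pdns
gmysql schema, and the zone, record, and database names are made up):

# Made-up zone/record names; assumes the standard PowerDNS gmysql schema.
mysql pdns <<'EOF'
INSERT INTO domains (name, type) VALUES ('lan.example.com', 'NATIVE');
INSERT INTO records (domain_id, name, type, content, ttl)
  SELECT id, 'ct1.lan.example.com', 'A', '192.168.1.50', 300
    FROM domains WHERE name = 'lan.example.com';
EOF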

One thing I found is that it's quite feasible to "masquerade" a real
domain with internal LAN IPs, so that containers can resolve each other
directly via LAN IPs yet the rest of the world sees that domain as
pointing to my external router IP.

Re: [lxc-users] Forwarding DNS requests to the host /etc/hosts file

2017-05-31 Thread Andrey Repin
Greetings, Adil Baig!

> I have several containers running on host machines. The host machine is
> part of a LAN network. Each host has an updated /etc/hosts file with
> domain names for other LAN entities.

I strongly suggest you set up a local DNS server.
LXC already has a dependency on dnsmasq; you can easily configure it for
fake resolution of your local names in addition to the LXC-specific setup.
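Something along these lines, for example (a sketch - the domain and
addresses are made up):

# Hypothetical /etc/dnsmasq.d/lan.conf snippet.
# Answer every lookup under this domain locally ("fake" resolution):
address=/lan.example.com/192.168.1.50
# Or pin individual hosts:
host-record=ct1.lan.example.com,192.168.1.50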

> My problem is I cannot use the hostnames defined on the host inside the
> container (without actually copying the /etc/hosts file in). I'd rather not
> copy the file as I sync /etc/hosts file using Ansible, and the Ansible
> inventory cannot manage LXD containers dynamically.


> How is it possible to set up the containers so they look up entries in the
> host machines' /etc/hosts file?

My strong opinion is that /etc/hosts is a crutch from the 1960s and should
not be used; there's always a better option.


-- 
With best regards,
Andrey Repin
Wednesday, May 31, 2017 23:20:01

Sorry for my terrible english...


Re: [lxc-users] root device isn't being inherited on ZFS storage pool

2017-05-31 Thread Joshua Schaeffer
Thanks for the explanation Stéphane, I will add the device locally. I
figured it was a change in versions that caused my discrepancy.

Joshua Schaeffer

On Wed, May 31, 2017 at 11:25 AM, Stéphane Graber wrote:

> On Wed, May 31, 2017 at 11:01:39AM -0600, Joshua Schaeffer wrote:
> > I guess I should have mentioned an important change. When I switched from
> > BTRFS to ZFS I also went from LXD 2.0 to 2.13.
>
> Right, and with LXD 2.8 we moved from always adding a container-local
> disk device to having it be inherited from the profile, which is what's
> causing the confusion here.
>
> >
> > On Wed, May 31, 2017 at 10:27 AM, Joshua Schaeffer
> > <jschaeffer0...@gmail.com> wrote:
> >
> > > I've recently switched from using a BTRFS to a ZFS backend, and my
> > > containers on the ZFS backend aren't inheriting the root device from
> > > my default profile:
> > >
> > > lxduser@raynor:~$ lxc config show fenix
> > > architecture: x86_64
> > > [snip]
> > > devices: {}
> > > ephemeral: false
> > > profiles:
> > > - default
> > > - 30_vlan_mgmt_server
>
> They are inheriting it, but you won't see the inherited stuff unless you
> pass --expanded to "lxc config show".
>
> > > The default profile, which the container is using, has the root
> > > device with a pool specified:
> > >
> > > lxduser@raynor:~$ lxc profile show default
> > > config: {}
> > > description: Default LXD profile
> > > devices:
> > >   root:
> > >     path: /
> > >     pool: lxdpool
> > >     type: disk
> > > name: default
> > >
> > > But the container isn't showing a root device (or any device for that
> > > matter), and I get an error when I try to set a size limit on the root
> > > device for that container:
> > >
> > > lxduser@raynor:~$ lxc config device set fenix root size 50G
> > > error: The device doesn't exist
>
> That's because it's an inherited device rather than one set at the
> container level. To override the inherited device you must add a new
> local device with the same name.
>
> lxc config device add fenix root disk pool=lxdpool path=/ size=50GB
>
> That should do the trick and will then show up in "lxc config show"
> (without --expanded) since it will be a container-local device.
>
> > > Is there a ZFS property that has to be set to get it to inherit the
> > > device? I was able to successfully create the root device on another
> > > container, but I don't want to create the device on each container, I
> > > just want to set it on the profile. I'm on LXD 2.13. Here is my
> > > storage device:
> > >
> > > lxduser@raynor:~$ lxc storage list
> > > +---------+--------+---------+---------+
> > > |  NAME   | DRIVER | SOURCE  | USED BY |
> > > +---------+--------+---------+---------+
> > > | lxdpool | zfs    | lxdpool | 15      |
> > > +---------+--------+---------+---------+
> > >
> > > Thanks,
> > > Joshua Schaeffer
>
>
> --
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
>

Re: [lxc-users] root device isn't being inherited on ZFS storage pool

2017-05-31 Thread Stéphane Graber
On Wed, May 31, 2017 at 11:01:39AM -0600, Joshua Schaeffer wrote:
> I guess I should have mentioned an important change. When I switched from
> BTRFS to ZFS I also went from LXD 2.0 to 2.13.

Right, and with LXD 2.8 we moved from always adding a container-local
disk device to having it be inherited from the profile, which is what's
causing the confusion here.

> 
> On Wed, May 31, 2017 at 10:27 AM, Joshua Schaeffer wrote:
> 
> > I've recently switched from using a BTRFS to a ZFS backend, and my containers on
> > the ZFS backend aren't inheriting the root device from my default profile:
> >
> > lxduser@raynor:~$ lxc config show fenix
> > architecture: x86_64
> > [snip]
> > devices: {}
> > ephemeral: false
> > profiles:
> > - default
> > - 30_vlan_mgmt_server

They are inheriting it, but you won't see the inherited stuff unless you
pass --expanded to "lxc config show".

> > The default profile, which the container is using, has the root device
> > with a pool specified:
> >
> > lxduser@raynor:~$ lxc profile show default
> > config: {}
> > description: Default LXD profile
> > devices:
> >   root:
> >     path: /
> >     pool: lxdpool
> >     type: disk
> > name: default
> >
> > But the container isn't showing a root device (or any device for that
> > matter), and I get an error when I try to set a size limit on the root
> > device for that container:
> >
> > lxduser@raynor:~$ lxc config device set fenix root size 50G
> > error: The device doesn't exist

That's because it's an inherited device rather than one set at the
container level. To override the inherited device you must add a new
local device with the same name.

lxc config device add fenix root disk pool=lxdpool path=/ size=50GB

That should do the trick and will then show up in "lxc config show"
(without --expanded) since it will be a container-local device.
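For example:

# View the effective config, including devices inherited from profiles:
lxc config show --expanded fenix
# After adding the local override it also appears in the plain view:
lxc config device add fenix root disk pool=lxdpool path=/ size=50GB
lxc config show fenix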

> > Is there a ZFS property that has to be set to get it to inherit the
> > device? I was able to successfully create the root device on another
> > container, but I don't want to create the device on each container, I just
> > want to set it on the profile. I'm on LXD 2.13. Here is my storage device:
> >
> > lxduser@raynor:~$ lxc storage list
> > +---------+--------+---------+---------+
> > |  NAME   | DRIVER | SOURCE  | USED BY |
> > +---------+--------+---------+---------+
> > | lxdpool | zfs    | lxdpool | 15      |
> > +---------+--------+---------+---------+
> >
> > Thanks,
> > Joshua Schaeffer


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: [lxc-users] root device isn't being inherited on ZFS storage pool

2017-05-31 Thread Joshua Schaeffer
I guess I should have mentioned an important change. When I switched from
BTRFS to ZFS I also went from LXD 2.0 to 2.13.

On Wed, May 31, 2017 at 10:27 AM, Joshua Schaeffer wrote:

> I've recently switched from using a BTRFS to a ZFS backend, and my containers on
> the ZFS backend aren't inheriting the root device from my default profile:
>
> lxduser@raynor:~$ lxc config show fenix
> architecture: x86_64
> [snip]
> devices: {}
> ephemeral: false
> profiles:
> - default
> - 30_vlan_mgmt_server
>
> The default profile, which the container is using, has the root device
> with a pool specified:
>
> lxduser@raynor:~$ lxc profile show default
> config: {}
> description: Default LXD profile
> devices:
>   root:
>     path: /
>     pool: lxdpool
>     type: disk
> name: default
>
> But the container isn't showing a root device (or any device for that
> matter), and I get an error when I try to set a size limit on the root
> device for that container:
>
> lxduser@raynor:~$ lxc config device set fenix root size 50G
> error: The device doesn't exist
>
> Is there a ZFS property that has to be set to get it to inherit the
> device? I was able to successfully create the root device on another
> container, but I don't want to create the device on each container, I just
> want to set it on the profile. I'm on LXD 2.13. Here is my storage device:
>
> lxduser@raynor:~$ lxc storage list
> +---------+--------+---------+---------+
> |  NAME   | DRIVER | SOURCE  | USED BY |
> +---------+--------+---------+---------+
> | lxdpool | zfs    | lxdpool | 15      |
> +---------+--------+---------+---------+
>
> Thanks,
> Joshua Schaeffer
>

Re: [lxc-users] Forwarding DNS requests to the host /etc/hosts file

2017-05-31 Thread Adil Baig
Option 1 is very cool! I tried it and it works. I am more interested in
option 2, as it seems more future-proof as we move away from a simple
hosts file. Any suggestions on how to configure an internal DNS? Do I need
to start another dnsmasq instance? Can I reuse the default one? How would
the container relay DNS requests to the host?

On Wed, May 31, 2017 at 7:02 PM, Simos Xenitellis <
simos.li...@googlemail.com> wrote:

> On Wed, May 31, 2017 at 1:46 PM, Adil Baig wrote:
> > I have several containers running on host machines. The host machine is
> > part of a LAN network. Each host has an updated /etc/hosts file with
> > domain names for other LAN entities.
> >
> > My problem is I cannot use the hostnames defined on the host inside the
> > container (without actually copying the /etc/hosts file in). I'd rather
> > not copy the file as I sync the /etc/hosts file using Ansible, and the
> > Ansible inventory cannot manage LXD containers dynamically.
> >
> > How is it possible to set up the containers so they look up entries in
> > the host machines' /etc/hosts file?
> >
>
> As far as I understand, you want to get a container to have access to
> the /etc/hosts of its host.
> For this to work, the host needs to make a special arrangement in
> order to make /etc/hosts
> available to the containers.
>
> 1. You might be able to do something like
>
> lxc config device add mycontainer etchosts disk path=/etc/hosts source=/etc/hosts
>
> which means that the container's /etc/hosts is actually the host's
> /etc/hosts.
> LXD containers ship with a stock /etc/hosts, so they do not need any
> special individual entries.
>
> 2. Instead of having these static entries (/etc/hosts), you can
> consider using a local DNS server.
> LXD already uses a separate DNS server (dnsmasq) for the containers,
> which by default resolves all those *.lxd hostnames.
>
> Simos

[lxc-users] root device isn't being inherited on ZFS storage pool

2017-05-31 Thread Joshua Schaeffer
I've recently switched from using a BTRFS to a ZFS backend, and my containers on
the ZFS backend aren't inheriting the root device from my default profile:

lxduser@raynor:~$ lxc config show fenix
architecture: x86_64
[snip]
devices: {}
ephemeral: false
profiles:
- default
- 30_vlan_mgmt_server

The default profile, which the container is using, has the root device
with a pool specified:

lxduser@raynor:~$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  root:
    path: /
    pool: lxdpool
    type: disk
name: default

But the container isn't showing a root device (or any device for that
matter), and I get an error when I try to set a size limit on the root
device for that container:

lxduser@raynor:~$ lxc config device set fenix root size 50G
error: The device doesn't exist

Is there a ZFS property that has to be set to get it to inherit the device?
I was able to successfully create the root device on another container, but
I don't want to create the device on each container, I just want to set it
on the profile. I'm on LXD 2.13. Here is my storage device:

lxduser@raynor:~$ lxc storage list
+---------+--------+---------+---------+
|  NAME   | DRIVER | SOURCE  | USED BY |
+---------+--------+---------+---------+
| lxdpool | zfs    | lxdpool | 15      |
+---------+--------+---------+---------+

Thanks,
Joshua Schaeffer

Re: [lxc-users] Forwarding DNS requests to the host /etc/hosts file

2017-05-31 Thread Simos Xenitellis
On Wed, May 31, 2017 at 1:46 PM, Adil Baig wrote:
> I have several containers running on host machines. The host machine is part
> of a LAN network. Each host has an updated /etc/hosts file with domain names
> for other LAN entities.
>
> My problem is I cannot use the hostnames defined on the host inside the
> container (without actually copying the /etc/hosts file in). I'd rather not
> copy the file as I sync the /etc/hosts file using Ansible, and the Ansible
> inventory cannot manage LXD containers dynamically.
>
> How is it possible to set up the containers so they look up entries in the
> host machines' /etc/hosts file?
>

As far as I understand, you want to get a container to have access to the
/etc/hosts of its host. For this to work, the host needs to make a special
arrangement to make /etc/hosts available to the containers.

1. You might be able to do something like

lxc config device add mycontainer etchosts disk path=/etc/hosts
source=/etc/hosts

which means that the container's /etc/hosts is actually the host's /etc/hosts.
LXD containers ship with a stock /etc/hosts, so they do not need any
special individual entries.

2. Instead of having these static entries (/etc/hosts), you can
consider using a local DNS server.
LXD already uses a separate DNS server (dnsmasq) for the containers,
which by default resolves all those *.lxd hostnames.
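One possible wiring (a sketch, untested; it assumes a recent LXD with the
managed lxdbr0 bridge and that containers use the bridge for DNS):

# Have the LXD-managed dnsmasq also serve entries from the host's /etc/hosts:
lxc network set lxdbr0 raw.dnsmasq "addn-hosts=/etc/hosts"
# Containers resolving through lxdbr0's dnsmasq then see those names.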

Simos

Re: [lxc-users] Mounting squashfs inside a container

2017-05-31 Thread Kees Bos
On Tue, 2017-05-30 at 15:17 -0700, Ben Warren wrote:
> Hi,
> 
> I’m using an LXC container to build up a rootfs for another target, and
> am unable to mount a squashfs image:
> 
> root@cd-build-dev-385:~# mount -t squashfs -r myproject.squashfs mnt
> ioctl: LOOP_SET_STATUS: Operation not permitted
> root@cd-build-dev-385:~#
> 
> If I instead use ‘unsquashfs’, I get into device creation errors:
> 
> root@cd-build-dev-385:~# unsquashfs -x myproject.squashfs
> Parallel unsquashfs: Using 4 processors
> 13529 inodes (15282 blocks) to write
> [|] 21/15282 0%
> create_inode: failed to create character device squashfs-root/dev/console, because Operation not permitted
> create_inode: failed to create character device squashfs-root/dev/null, because Operation not permitted
> create_inode: failed to create character device squashfs-root/dev/ptmx, because Operation not permitted
> create_inode: failed to create character device squashfs-root/dev/urandom, because Operation not permitted
> create_inode: failed to create character device squashfs-root/dev/zero, because Operation not permitted
> 
> 
> I assume the two issues are related, given that creation of device
> nodes within an unprivileged container is prohibited. In my case I’m
> less concerned about security, and am using containers more for
> encapsulation.
> 
> Is there a configuration override that will allow dynamic device
> creation within a container, or another way of going about this?  I
> know that I can add device nodes externally using ‘lxc device add …’
> and have used it for creating loopback devices, but that’s static.
> 
> Environment:
> host: Ubuntu 14.04
> LXC:
> ben@ben-sc:~$ dpkg -l | grep lx[cd]
> ii  liblxc1     2.0.7-0ubuntu1~14.04.1skyport1  amd64  Linux Containers userspace tools (library)
> ii  lxc-common  2.0.7-0ubuntu1~14.04.1skyport1  amd64  Linux Containers userspace tools (common tools)
> ii  lxcfs       2.0.6-0ubuntu1~14.04.1          amd64  FUSE based filesystem for LXC
> ii  lxd         2.0.9-0ubuntu1~14.04.1          amd64  Container hypervisor based on LXC - daemon
> ii  lxd-client  2.0.9-0ubuntu1~14.04.1          amd64  Container hypervisor based on LXC - client
> 
> Note that I’ve built the LXC libraries from source, but based on the
> current ‘ubuntu-trusty-backports’ .deb packages.
> 
> regards,
> Ben
> 
> 

I think you'll have to either use a privileged container, or use
squashfuse and grant the container fuse privileges (if still needed).
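Something like this might work (an untested sketch, LXD syntax; the
container name is taken from your prompt above):

# Expose /dev/fuse to the container so squashfuse can run inside it:
lxc config device add cd-build-dev-385 fuse unix-char path=/dev/fuse
# Then, inside the container, mount with squashfuse instead of the kernel driver:
squashfuse myproject.squashfs mnt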


Re: [lxc-users] near-live migration

2017-05-31 Thread Kees Bos
Ah, I see.

https://github.com/lxc/lxd/issues/3326

It looks like that's a prerequisite for something like
lxc move c1 s-tollana: --stateless
on a running container.


On Tue, 2017-05-30 at 12:15 -0400, Ron Kelley wrote:
> While not a direct answer, I filed an enhancement request recently for
> this exact topic (incremental snapshots to a remote server). The
> enhancement was approved, but I don't know when it will be included in
> the next LXD version.
> 
> 
On May 30, 2017 11:52:34 AM Kees Bos wrote:
> 
> > 
> > Hi,
> > 
> > Right now I'm using the sequence 'stop - move - start' for
> > migration of
> > containers (live migration fails too often).
> > 
> > The 'move' step can take some time. I wonder if it would be easy to
> > implement/do something like:
> >   - prepare move (e.g. take a snapshot and copy up to the snapshot)
> >   - stop
> >   - copy the rest
> >   - remove snapshot on dst
> >   - remove container from src
> >   - start container on dst
> > 
> > That would minimize downtime without the complexity of a live
> > migration.
> > 
> > What are your thoughts?
> > 
> > Kees