Re: [lxc-users] LXC Memory Limits

2020-11-09 Thread Fajar A. Nugraha
On Mon, Nov 9, 2020 at 3:35 PM Harald Dunkel  wrote:
>
>
> On 11/4/20 11:30 AM, Atif Ghaffar wrote:
> >
> > I find this document useful for resource limits.
> >
> > https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/ 
> > 
> >
>
> Very helpful indeed, but since we are at lxd 4.7 now I wonder
> if this blog series could be updated?

https://lxd.readthedocs.io/en/latest/instances/

Most of the introduction in the blog is still valid; the newer lxd
versions documented in the link above (usually) add more features. In
the case of memory limits, the newer features only relate to VMs.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] ghost services on LXC containers

2020-09-09 Thread Fajar A. Nugraha
On Thu, Aug 13, 2020 at 5:47 PM Harald Dunkel  wrote:
>
> On 8/13/20 12:32 PM, Fajar A. Nugraha wrote:
> > Try (two times, once inside the container, once inside the host):
> > - cat /proc/self/cgroup
> > - ls -la /proc/self/ns
>
> On the host:
>
> root@il08:~# cat /proc/self/cgroup
> 13:name=systemd:/
> 12:rdma:/
> 11:pids:/
> 10:perf_event:/
> 9:net_prio:/
> 8:net_cls:/
> 7:memory:/
> 6:freezer:/
> 5:devices:/
> 4:cpuset:/
> 3:cpuacct:/
> 2:cpu:/
> 1:blkio:/
> 0::/
> root@il08:~# ls -la /proc/self/ns
> total 0
> dr-x--x--x 2 root root 0 Aug 13 12:40 .
> dr-xr-xr-x 9 root root 0 Aug 13 12:40 ..
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 cgroup -> 'cgroup:[4026531835]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 ipc -> 'ipc:[4026531839]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 mnt -> 'mnt:[4026531840]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 net -> 'net:[4026531992]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 pid -> 'pid:[4026531836]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 pid_for_children -> 'pid:[4026531836]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 time -> 'time:[4026531834]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 time_for_children -> 'time:[4026531834]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 user -> 'user:[4026531837]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:40 uts -> 'uts:[4026531838]'
>
>
> Entering the container:
>
> root@il08:~# lxc-attach -n il02
> root@il02:~# cat /proc/self/cgroup
> 13:name=systemd:/
> 12:rdma:/
> 11:pids:/
> 10:perf_event:/
> 9:net_prio:/
> 8:net_cls:/
> 7:memory:/
> 6:freezer:/
> 5:devices:/
> 4:cpuset:/
> 3:cpuacct:/
> 2:cpu:/
> 1:blkio:/
> 0::/
> root@il02:~# ls -la /proc/self/ns
> total 0
> dr-x--x--x 2 root root 0 Aug 13 12:42 .
> dr-xr-xr-x 9 root root 0 Aug 13 12:42 ..
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 cgroup -> 'cgroup:[4026532376]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 ipc -> 'ipc:[4026532313]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 mnt -> 'mnt:[4026532311]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 net -> 'net:[4026532316]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 pid -> 'pid:[4026532314]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 pid_for_children -> 'pid:[4026532314]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 time -> 'time:[4026531834]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 time_for_children -> 'time:[4026531834]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 user -> 'user:[4026531837]'
> lrwxrwxrwx 1 root root 0 Aug 13 12:42 uts -> 'uts:[4026532312]'
>
>
> I am not sure what this is trying to tell me, though. Is this the same
> hierarchy?

It shouldn't be. /proc/self/ns says the two have different cgroup
namespaces, so even if /proc/self/cgroup looks the same, they are not.
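
A quick way to compare them directly (the values here are just the ones
from your output above; different inode numbers mean different namespaces):

# on the host
readlink /proc/self/ns/cgroup                          # cgroup:[4026531835]
lxc-attach -n il02 -- readlink /proc/self/ns/cgroup    # cgroup:[4026532376]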

> And would you agree that this is really a bad thing to do?

If they're the same hierarchy in the same namespace, yes.
If they're in different namespaces, no.

Not sure what's wrong with your setup though. Your Debian bug page link
says 'No longer marked as found in versions systemd/241-7~deb10u4', so
perhaps there's that.

If this is still reproducible on systems with that (or a newer) version
of systemd, I'd suggest these steps to help find the root cause:
- try the latest lxd from snap
- try an ubuntu host and container

I'm using ubuntu with systemd 237-3ubuntu10.20 and 245.4-4ubuntu3.1,
and don't experience the bug you reported.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] ghost services on LXC containers

2020-08-13 Thread Fajar A. Nugraha
On Thu, Aug 13, 2020 at 5:23 PM Harald Dunkel  wrote:
>
> On 8/13/20 9:02 AM, Harald Dunkel wrote:
> >
> > # cat /sys/fs/cgroup/unified/system.slice/zabbix-agent.service/cgroup.procs
> > 0
> > 0
> > 0
> > 0
> > 0
> > 0
> >
> >
> > PID 0 is not valid here, AFAICT. And zabbix-agent isn't even installed
> > in my container. Its installed on the host only.
> >
>
> PS:
> Lennart Pottering wrote about this:
>
> Is it possible the container and the host run in the very same cgroup
> hierarchy?
>
> If that's the case (and it looks like it): this is not
> supported. Please file a bug against LXC, it's very clearly broken.
>
> (https://lists.freedesktop.org/archives/systemd-devel/2020-August/045022.html)
>
>
> I would be highly interested in your thoughts about this.


Try (two times, once inside the container, once inside the host):
- cat /proc/self/cgroup
- ls -la /proc/self/ns

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxd-client on 20.04 focal

2020-07-12 Thread Fajar A. Nugraha
On Mon, Jul 13, 2020 at 7:50 AM Logan V.  wrote:
>
> Typically I use lxd-client in jobs that run in docker containers, so the 
> container has the lxd-client apt package installed. Now it seems that the lxd 
> and lxd-client are just shims for the snap.
>
> Since it seems like installing snaps in docker (ie. environments without 
> snapd running) is very difficult, I'm curious if any consideration has been 
> given to how lxd-client can be installed aside from snaps? Are there any ppas 
> or something I should be using instead?

What are you using lxd client for?

If it's as simple as "creating a container" or "running lxc shell/lxc
exec", IIRC old versions of the lxd client (3.0.3 should still be
available on previous distros) can connect to a newer lxd server as well.
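
A minimal hedged sketch of using an old client against a remote server
(names and the address are placeholders; the remote must expose the API
and have a trust password or certificate set up):

lxc remote add buildhost https://192.0.2.10:8443   # prompts to trust the server
lxc exec buildhost:web01 -- uname -a               # run a command in a container on that remote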

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] AppArmor denies connect operation inside container

2020-07-06 Thread Fajar A. Nugraha
On Tue, Jul 7, 2020 at 2:40 AM Joshua Schaeffer
 wrote:
>
> Looking for some help with getting slapd to be able to connect to saslauthd 
> inside an LXD container. Whenever slapd needs to connect to the socket I see 
> the following error message in the host's kernel log:
>
> Jul  6 13:27:17 host kernel: [923413.078592] audit: type=1400 
> audit(1594063637.667:51106): apparmor="DENIED" operation="connect" 
> namespace="root//lxd-container1_" profile="/usr/sbin/slapd" 
> name="/run/saslauthd/mux" pid=58517 comm="slapd" requested_mask="wr" 
> denied_mask="wr" fsuid=1111 ouid=1000
>
> I've added the following to the container config and restarted the container, 
> but I'm still seeing the same problem:
>
> lxcuser@host:~$ lxc config get container1 raw.apparmor
> /run/saslauthd/mux wr,
>
> I'm not super familiar with AppArmor and going through the docs now, but 
> thought I'd ask to see if anybody can point me in the right direction.

I'm guessing you haven't tested the same slapd setup on a VM/bare metal
either? Try https://bugs.launchpad.net/ubuntu/+source/openldap/+bug/1557157

Looks like the fix is in groovy's openldap already, with other
releases pending. Try editing /etc/apparmor.d/usr.sbin.slapd inside
the container.
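
If it helps, a hedged sketch of what that edit could look like (file paths
follow Ubuntu's usual layout and the profile is assumed to include the
local/ override; if reloading from inside the container fails, a container
restart may be needed):

# inside the container
echo '/run/saslauthd/mux wr,' >> /etc/apparmor.d/local/usr.sbin.slapd
apparmor_parser -r /etc/apparmor.d/usr.sbin.slapd
systemctl restart slapd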

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Intermittent network issue with containers

2020-06-30 Thread Fajar A. Nugraha
On Wed, Jul 1, 2020 at 1:05 PM Joshua Schaeffer
 wrote:
> And the really odd part is that if I try to actually ping *from* the 
> container *to* my local box it works AND afterwards my original ping *from* 
> my local box *to* the container starts to work.

I had a similar problem on a VMware system some time ago. I gave up
trying to fix it (I don't manage the VMware system) and implemented a
workaround instead.

It's either:
- a duplicate IP somewhere on your network
- your router or switch somehow can't manage ARP cache entries for the container hosts

My workaround is to install iputils-arping (on ubuntu), and (for your
case) do something like this (preferably as a systemd service):
arping -I veth-mgmt -i 10 -b 10.2.28.1

Or you could replace it with ping, whatever works for you.
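
For the systemd service part, a minimal hedged sketch (the unit name is
made up, the interface and IP are just the ones from this thread, and the
arping path may differ; check with 'command -v arping'):

# /etc/systemd/system/arping-keepalive.service
[Unit]
Description=Keep ARP caches fresh for the container IP
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/arping -I veth-mgmt -i 10 -b 10.2.28.1
Restart=always

[Install]
WantedBy=multi-user.target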

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Running unprotected system container

2020-06-20 Thread Fajar A. Nugraha
On Sat, Jun 20, 2020 at 3:07 PM Fajar A. Nugraha  wrote:
>
> On Tue, Jun 16, 2020 at 6:26 PM Koehler, Yannick
>  wrote:
> >
> > Hi Fajar,
> >
> > If I use a Ubuntu image it works fine and I can run bash within the 
> > container. So I know the issue is somehow related to my imported image but 
> > I fail to understand why at this time.
> >
> > All the files in the imported tarball were uid/gid 0, I can run the 
> > /sbin/init and that script can run other binaries inside the container with 
> > no issue.  But when I try to do “exec c1 /bin/ash” in that prompt I am 
> > getting permission denied on everything, using absolute paths also didn’t 
> > work.
> >
> > I am wondering if it has to do with container being armhf while host is 
> > arm64, and somehow “exec” vs “launch/start” would fail to set things 
> > accordingly?  Or if I need to do some other tricks in my tarball?
>
> You should've mentioned the arm64/armhf thing earlier.
>
> >
> > Is there a way to force install / launch an armhf ubuntu image as to 
> > validate/eliminate the armhf/arm64 variable?
>
> Try something like
>
> lxc launch --vm images:ubuntu/focal/armhf test1
>
> I haven't tested it. Might work.

I just tried it on ubuntu 20.04 arm64 on qemu. Works fine. With the
additional settings I sent earlier, an ubuntu 20.04 armhf container
(with /lib/modules mounted from the host) can load kernel modules.

You should probably take baby steps instead of jumping to your final
goal. If this is a copy of an existing system, try adjusting it so it
runs fine as a normal privileged container. It looks like you're still
having problems with that, even WITHOUT the unrestricted host access
and module loading issues. Fix this first. For example, if the original
system is centos-like, you'd need to disable selinux.

I noticed you said "All the files in the imported tarball were uid/gid
0", but you didn't say whether the container is privileged (i.e.
security.privileged=1). Perhaps it's something that simple.
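
In case it is, a hedged example of checking and setting it (the container
name "c1" is a placeholder):

lxc config get c1 security.privileged
lxc config set c1 security.privileged true
lxc restart c1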

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Running unprotected system container

2020-06-20 Thread Fajar A. Nugraha
On Tue, Jun 16, 2020 at 6:26 PM Koehler, Yannick
 wrote:
>
> Hi Fajar,
>
> If I use a Ubuntu image it works fine and I can run bash within the 
> container. So I know the issue is somehow related to my imported image but I 
> fail to understand why at this time.
>
> All the files in the imported tarball were uid/gid 0, I can run the 
> /sbin/init and that script can run other binaries inside the container with 
> no issue.  But when I try to do “exec c1 /bin/ash” in that prompt I am 
> getting permission denied on everything, using absolute paths also didn’t 
> work.
>
> I am wondering if it has to do with container being armhf while host is 
> arm64, and somehow “exec” vs “launch/start” would fail to set things 
> accordingly?  Or if I need to do some other tricks in my tarball?

You should've mentioned the arm64/armhf thing earlier.

>
> Is there a way to force install / launch an armhf ubuntu image as to 
> validate/eliminate the armhf/arm64 variable?

Try something like

lxc launch --vm images:ubuntu/focal/armhf test1

I haven't tested it. Might work.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Running unprotected system container

2020-06-15 Thread Fajar A. Nugraha
On Mon, Jun 15, 2020 at 9:23 PM Koehler, Yannick
 wrote:
>
> I am still faced with the situation where if I run sh inside my container 
> then any command I try to execute such as /bin/ls returns permission denied.
>
> Any clue as to what I need to adjust to enable me to get inside my container 
> as to inspect and try stuff out?


Works for me. I even tested just now on an ubuntu core host, with the
container using the host's network interface.

Did you follow my example exactly?
Are you perhaps missing "security.privileged: 1" in the container config?

Try with the default ubuntu image (e.g. from images:ubuntu/20.04)
first, in case there's something wrong with your container rootfs.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Running unprotected system container

2020-06-12 Thread Fajar A. Nugraha
On Sat, Jun 13, 2020 at 9:41 AM Koehler, Yannick
 wrote:
>
> Hi,
>
> I am in a situation where we desire to run our old OS environment inside 
> Ubuntu Core.  So far we have identified LXD as being a candidate to enable us 
> to run our past Linux OS environment within the new one.
>
> At this time our goal is to apply the least amount of modification to our 
> existing OS in order to test and validate such an approach.
>
> I, therefore, need to run an LXC container with pretty much zero security, as 
> to allow the old OS to loads kernel modules, access /proc, /sys, etc.


> Yet, when I tried to disable seccomp using lxc.seccomp.profile = none, I 
> obtained an error as the profile 'none'  was not found by the seccomp profile 
> reader.  I am wondering if this is a problem with lxc itself or with 
> UbuntuCore not providing a definition of what a seccomp "none" profile would 
> be.

Start from 
https://discuss.linuxcontainers.org/t/lxd-raw-lxc-lxc-net-i-script-up/1131/4

Then create something like

/var/snap/lxd/common/lxd/extra/unrestricted.conf

lxc.cap.drop =
lxc.apparmor.profile = unconfined
lxc.mount.auto = proc:rw sys:rw cgroup-full:rw
lxc.cgroup.devices.allow = c *:* rwm
lxc.cgroup.devices.allow = b *:* rwm
lxc.seccomp.profile = /var/snap/lxd/common/lxd/extra/unrestricted-seccomp.conf


/var/snap/lxd/common/lxd/extra/unrestricted-seccomp.conf

2
blacklist
# v2 allows comments after the second line, with '#' in first column,
# blacklist will allow syscalls by default


Then put it in your lxd config:
config:
  raw.lxc: lxc.include=/var/snap/lxd/common/lxd/extra/unrestricted.conf


Totally unsupported, you're on your own if something bad happens, etc.
I was able to run mknod, "losetup -a", mount, and modprobe from my
container, running lxd from snap under an ubuntu 20.04 host (which might
be relevant for you, since ubuntu core also uses lxd from snap).

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Storage pool grew larger than the host disc

2020-03-29 Thread Fajar A. Nugraha
On Mon, Mar 30, 2020 at 2:40 AM Yakov  wrote:
>
> I need to shrink the default.img some how. Please help!

Short version: you can't.

> Our production system is down, sigh.

I'm pretty sure (at least the last time I tried it) there's a warning NOT
to use loopback zfs for production environments.

Your best bet is to move the img file somewhere else (e.g. using
scp/rsync from a rescue system), create a new pool (another img might
also work), then send/receive only the datasets in use (e.g. without old
snapshots). Then put it back in place.

Or add another disk and move the image there (probably the easiest way).
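
A rough hedged sketch of the send/receive route (pool, dataset, and file
names are examples only; run from a rescue system with the containers
stopped, and repeat the send/receive per container):

truncate -s 100G /mnt/otherdisk/new-default.img
zpool create newdefault /mnt/otherdisk/new-default.img
zfs create newdefault/containers
zfs snapshot default/containers/c1@move
zfs send default/containers/c1@move | zfs receive newdefault/containers/c1
# sending only the latest snapshot leaves the old snapshots behind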

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-25 Thread Fajar A. Nugraha
On Tue, Mar 24, 2020 at 6:22 PM Saint Michael  wrote:
>
> That scheme in my case would not work. I have two interfaces inside the 
> container, and each one talks to a different network, for business reasons. I 
> use policy-based-routing to make sure that packets go to the right places. I 
> need that the container can hold a full configuration. In my case, I use 
> ifupdown, not netplan, since my containers are for an older version of Debian.
> It is "not right" that ipvlan does not work out-of-the-box like macvlan or 
> veth. Somebody has to fix it. I cannot use macvlan because Vmware only allows 
> multiple macs if the entire network is set in promiscuous mode, and that 
> kills performance. So basically the only workaround is ipvlan. As I said, if 
> you use type=phys and ipvlan inside the host, it works fine, without altering 
> the container.


Apparently this also works, as long as you have the same IP in the
container config and inside the container.

Container config:
# Network configuration
lxc.net.0.name = eth0
lxc.net.0.type = ipvlan
lxc.net.0.ipvlan.mode = l3s
lxc.net.0.l2proxy = 1
lxc.net.0.link = eth0
lxc.net.0.ipv4.address = 10.0.3.222

Inside the container -> normal networking config (e.g. /etc/netplan/10-lxc.yaml):
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [10.0.3.222/24]
      gateway4: 10.0.3.1
      nameservers:
        addresses: [10.0.3.1]

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-24 Thread Fajar A. Nugraha
On Mon, Mar 23, 2020 at 11:48 PM Saint Michael  wrote:
>
> It is supported, there is no error, but there is no communication at all with 
> the gateway. If you start the same exact network configuration in the 
> container with the type=phys, it works fine, ergo, the issue is type=ipvlan.

"exact network configuration" inside the container? I'm pretty sure it
would fail.

If you read what I wrote earlier:
"
set /etc/resolv.conf on the container manually, and disable network
interface setup inside the container.
"

This works in my test (using lxc 3.2.1 from
https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/daily):
# Network configuration
lxc.net.0.name = eth0
lxc.net.0.type = ipvlan
lxc.net.0.ipvlan.mode = l3s
lxc.net.0.l2proxy = 1
lxc.net.0.link = eth0
lxc.net.0.ipv4.gateway = dev
lxc.net.0.ipv4.address = 10.0.3.222/32
lxc.net.0.flags = up


While inside the container, set up resolv.conf manually, and disable
networking setup (e.g. removing everything under /etc/netplan/ on
ubuntu should work).

The common macvlan/ipvlan issue of the container not being able to
contact the host still applies.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-23 Thread Fajar A. Nugraha
On Fri, Mar 20, 2020 at 5:36 PM Saint Michael  wrote:
>
> I use plain LXC, not LXD. is  ipvlan supported?

https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Networking

2020-03-19 Thread Fajar A. Nugraha
On Thu, Mar 19, 2020 at 12:02 AM Saint Michael  wrote:
>
> The question is: how do we share the networking from the host to the 
> containers, all of if. each container will use one IP, but they could see all 
> the IPs in the host. This will solve the issue, since a single network 
> interface,  single MAC address, can be associated with hundreds of IP 
> addresses.

If you mean "how can a container have its own IP on the same network
as the host, while also sharing the host's MAC address", there are
several ways.

The most obvious one is NAT: you NAT each host IP address to the
corresponding container.
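
A hedged sketch of the NAT variant (addresses are hypothetical; the host
owns 192.0.2.10 as a secondary IP and the container sits behind lxdbr0
as 10.0.3.101):

iptables -t nat -A PREROUTING  -d 192.0.2.10 -j DNAT --to-destination 10.0.3.101
iptables -t nat -A POSTROUTING -s 10.0.3.101 -j SNAT --to-source 192.0.2.10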


A new-ish (but somewhat cumbersome) method is to use ipvlan:
https://lxd.readthedocs.io/en/latest/instances/#nictype-ipvlan

e.g.:

# lxc config show tiny
...
devices:
  eth0:
ipv4.address: 10.0.3.101
name: eth0
nictype: ipvlan
parent: eth0
type: nic

Set /etc/resolv.conf in the container manually, and disable network
interface setup inside the container. You'd end up with something like
this inside the container:

tiny:~# ip ad li eth0
10: eth0@if65:  mtu 1500
qdisc noqueue state UNKNOWN qlen 1000
...
inet 10.0.3.101/32 brd 255.255.255.255 scope global eth0
...

tiny:~# ip r
default dev eth0


Other servers on the network will see the container using the host's MAC

# arp -n 10.0.3.162 <=== the host
Address  HWtype  HWaddress   Flags MaskIface
10.0.3.162   ether   00:16:3e:77:1f:92   C eth0

# arp -n 10.0.3.101 <=== the container
Address  HWtype  HWaddress   Flags MaskIface
10.0.3.101   ether   00:16:3e:77:1f:92   C eth0


If you use plain lxc instead of lxd, look for the equivalent configuration.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Unprivileged networking option?

2020-03-05 Thread Fajar A. Nugraha
On Thu, Mar 5, 2020 at 11:43 PM Ede Wolf  wrote:
>
> Hello Andrey,
>
> thanks for getting back to me. The reason for unpriviledged containers
> is basically user id separation.
>
> I fancy the idea that each container has its own id (range) and the user
> ids are not being shared between containers (and the host).
>
> So it is another level of isolation and administration - in its simplest
> form be it just using "ps" where you can tell from the user id what
> container (os) the process belongs to.

While you mentioned plain lxc instead of lxd earlier, lxd might be
more suitable for your needs.

Does https://stgraber.org/2017/06/15/custom-user-mappings-in-lxd-containers/
fit the bill?
Look for "isolated".
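
In lxd terms that boils down to something like this (the container name
"c1" is a placeholder; the container gets its own uid/gid range on restart):

lxc config set c1 security.idmap.isolated true
lxc restart c1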

>
>
> More into classical os level virtualisation (jails, openvz) than  what
> is usually associated these days with the term "container".
> So there is no respawning, no stacked images, no orchestration, but a
> proper (albeit minimal) os installation. Without the overhead of a
> hypervisor.
>
> So lxc pretty much is the right tool. Would just be great if one could
> use level3 ip-vlan for easier filtering.

https://discuss.linuxcontainers.org/t/lxc-3-2-1-has-been-released/5322
Look for "ipvlan".

You could also use nested lxd, so (for example) each user has access
to their own lxd container, with an isolated idmap. Inside each
container, they can create and manage their own containers.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Migrating from LXC to LXD

2020-01-22 Thread Fajar A. Nugraha
On Thu, Jan 23, 2020 at 4:22 AM Michael Eager  wrote:
>
> On 1/21/20 9:47 PM, Fajar A. Nugraha wrote:
> > On Wed, Jan 22, 2020 at 9:01 AM Michael Eager  wrote:
> >> devices:
> >> eth0:
> >>   name: eth0
> >>   nictype: macvlan
> >>   parent: br0
> >>   type: nic
> >
> >> When I try to do the same with a CentOS 8 image, it doesn't work.
> >
> > https://lists.linuxcontainers.org/pipermail/lxc-users/2019-December/015024.html
>
> Thanks.  I looked at the hoops to jump through to get a CentOS 8 image
> to work and decided to switch to Fedora.

If you can live with Fedora's short lifetime, sure. If you don't expect to
upgrade that container for several years, choose something else.

CentOS 8 is based on F28, while F29 works fine. RH would probably fix
that problem in RHEL/CentOS 8.1 or something.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Migrating from LXC to LXD

2020-01-21 Thread Fajar A. Nugraha
On Wed, Jan 22, 2020 at 9:01 AM Michael Eager  wrote:
> devices:
>eth0:
>  name: eth0
>  nictype: macvlan
>  parent: br0
>  type: nic

> When I try to do the same with a CentOS 8 image, it doesn't work.

https://lists.linuxcontainers.org/pipermail/lxc-users/2019-December/015024.html

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Howto save snapshots only to another drive? Bug?

2020-01-14 Thread Fajar A. Nugraha
On Tue, Jan 14, 2020 at 2:28 PM Jäkel, Guido  wrote:
>
> Dear Oliver,
>
> I just want to mention, that with this workaround it isn't a real snapshot 
> anymore but may be called a "slow bullet" instead: It will take some real 
> time to copy the whole image and in contrast to a snapshot, it will be not 
> completely consistent if the Container is running at this time.

Even if that were true, it's not due to the workaround. The "not
completely consistent" part is due to the "dir" storage driver, and you'll
get it even without the workaround.

I say "if that were true" because I'm not sure whether "lxc snapshot"
calls "lxc pause" first. If it did, then the backup would be
storage-level consistent.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Howto save snapshots only to another drive? Bug?

2020-01-13 Thread Fajar A. Nugraha
On Mon, Jan 13, 2020 at 3:57 PM Oliver Rath  wrote:
>
> Hi Fajar,
>
> its "dir":
>
> lxc storage show default | grep driver
> driver: dir


If it's dir, you could probably work around that by using bind mounts,
something like:

rm /path/to/pool/dir/containers-snapshots    <= remove the symlink first
mkdir /path/to/pool/dir/containers-snapshots <= recreate the directory
mount --bind /path/to/new/disk/containers-snapshots /path/to/pool/dir/containers-snapshots

Then test it and see whether new directories/files appear under
/path/to/new/disk/containers-snapshots.

Not sure whether this is a supported method or not though.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Howto save snapshots only to another drive? Bug?

2020-01-12 Thread Fajar A. Nugraha
On Mon, Jan 13, 2020 at 5:01 AM Oliver Rath  wrote:
>
> Hi list,
>
> Im using lxd 3.18 on ubuntu 18.04. Now i realized, that my (fast) space
> of lxc-vms exhausted, so i decided to put mv my snaps to another drive
> with changing the directory "containers-snapshots" into a softlink
> redirecting the snapshots to another drive.
>
> Unfortunatly on creating a new snapshots got the message
>
> cannot create directory
> "/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots",
> file exists.
>
> I assume, that the lxc doesnt recognize, that containers-snapshots is a
> (correct) link to the corresponding files. What can I do?


What is the backing storage of your default lxd storage pool? If
unsure, run "lxc storage show default | grep driver".

lxc snapshot uses the snapshot feature of the underlying storage (if
available), so you can't simply trick it with symlinks.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Converting network from LXC to LXD

2019-12-22 Thread Fajar A. Nugraha
On Sun, Dec 22, 2019 at 1:09 AM John Lane  wrote:
>
> On 21/12/2019 16:51, John Lane wrote:
>
> >
> > I can't do this:
> >
> > $ lxc config device set mycontainer eth0 ipv4.address 192.168.21.2/24
> >Error: Invalid devices:
> > Invalid value for device option ipv4.address: Not an IPv4 address:
> > 192.168.21.2/24
> >
> > Also there appears to be no setting for gateway:
> >
> > $ lxc config device set mycontainer eth0 ipv4.gateway 192.168.21.1
> > Error: Invalid devices: Invalid device option: ipv4.gateway
> >
>
> Reading this
> (https://github.com/lxc/lxd/issues/1259#issuecomment-166416979):

On that same page:
https://github.com/lxc/lxd/issues/1259#ref-pullrequest-136128816

> I guess that it doesn't work.

If you use an lxd-managed bridge (e.g. lxdbr0), you can configure its
built-in DHCP server (dnsmasq) to always allocate a fixed IP using
something like this (in this example lxdbr0 is 10.0.3.1/24):

devices:
  eth0:
host_name: c1-0
ipv4.address: 10.0.3.221
name: eth0
nictype: bridged
parent: lxdbr0
type: nic


If it's not on an lxd-managed bridge (e.g. a physical interface), you can
simply configure it inside the container (which you did already), or
do something like this in the container config:

config:
  raw.lxc: |-
lxc.net.0.ipv4.address=10.0.4.2/24
lxc.net.0.ipv4.gateway=10.0.4.1

... and tell the container OS to leave it alone
# cat /etc/network/interfaces
auto eth0
iface eth0 inet manual


>
> I also tried using "lxc.raw" to work around it but could not get that to
> work. I kept getting "Config parsing error: Initialize LXC: Failed to
> load raw.lxc". Does raw.lxc not work any more?

It works, but you used deprecated lxc config lines, which stopped
working starting with lxc 3.0.

> I don't know if there is a page accessible on the main LXD site
> documentation that explains this kind of thing which would be useful to
> help transitioning from plain-old lxc.

https://github.com/lxc/lxd/issues/4393#issuecomment-378181793
https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html
https://discuss.linuxcontainers.org/t/lxc-3-0-0-has-been-released/1449
https://discuss.linuxcontainers.org/t/lxc-2-1-has-been-released/487

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Converting network from LXC to LXD

2019-12-20 Thread Fajar A. Nugraha
On Fri, Dec 20, 2019 at 7:08 PM John Lane  wrote:
> I'm struggling to find documentation explaining how to configure the
> "phys" network type I use to assign a physical interface to a container
> and the "veth" network type that I use to join a container to an
> existing bridge.

> I've looked at
>
> https://linuxcontainers.org/lxd/docs/master/networks (mentions neither
> phys nor veth)

You're looking in the wrong section

> I'd appreciate some pointers towards the appropriate documentation or an
> explanation of how to do this with lxd.

https://linuxcontainers.org/lxd/docs/master/containers#type-nic

So something like this for veth on a bridge (via "lxc config edit
CONTAINER_NAME", in case you haven't figured it out):

devices:
  eth0:
name: eth0
host_name: c1-0
nictype: bridged
parent: lxdbr0
type: nic

"parent" should be whatever the bridge is called on your host (lxd
creates lxdbr0 by default).
"host_name" is what the host side of the veth will be called (very
useful if you're doing host-side traffic monitoring).

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Network Manager makes RHEL 8 and Centos 8 impossible to use as a container

2019-12-16 Thread Fajar A. Nugraha
On Tue, Dec 17, 2019 at 8:24 AM Saint Michael  wrote:
>
> Network Manager makes RHEL 8 and Centos 8 impossible to conteinarize. Please 
> see that it detects a device type macvlan, when it should be really Ethernet. 
> nmcli connection up Ethernet0 Error: Connection activation failed: No 
> suitable device found for this connection (device lo not available because 
> device is strictly unmanaged).
>
> nmcli dev show GENERAL.DEVICE: eth0 GENERAL.TYPE: macvlan GENERAL.HWADDR: 
> (unknown) GENERAL.MTU: 1500 GENERAL.STATE: 20 (unavailable) 
> GENERAL.CONNECTION: -- GENERAL.CON-PATH: --


You should write details of your setup so that others can help you better.

On a default setup (with lxdbr0), a c8 container works just fine, so it's
definitely not 'impossible to containerize'.
# lxc launch images:centos/8 c8
Creating c8
Starting c8
# lxc shell c8
[root@c8 ~]# nmcli connection show
NAME UUID  TYPE  DEVICE
System eth0  5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  eth0


I'm guessing you've changed it to use macvlan. It can work, but you
need a workaround using the old network-scripts, with lxdbr0 bridged
mode temporarily (or whatever you need to be able to run "yum install"
in the container):

yum -y install network-scripts
systemctl enable network
sed -i 's/TYPE=Ethernet/TYPE=Generic/g' /etc/sysconfig/network-scripts/ifcfg-eth0

Then change the container config to use macvlan and reboot the container.


The correct fix should probably be in NetworkManager, so that it
could work with an existing macvlan device, or have an option to
override the detected device type.

--
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Docker in unprivileged LXC?

2019-11-20 Thread Fajar A. Nugraha
On Wed, Nov 20, 2019 at 6:41 PM Dirk Geschke  wrote:
>
> Hi Oliver,
>
> > afaik:
> >
> > security.nesting: "true"
> >
> > makes the container automatically privileged...

No, it still runs using mapped unprivileged uid/gids, but allows
additional capabilities (e.g. overlay mounts, etc.):

# cat /proc/1/uid_map
 0100 10

# docker run --rm -it hello-world
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
...


>
> half-and-half, I guess. But I asked for LXC not LXD...

I don't use lxc anymore (only lxd now), but you might be able to use
https://github.com/lxc/lxc/blob/stable-3.0/config/templates/nesting.conf.in

You can either include it (there should be an example in the
ubuntu/download template), or write the configs directly in your
container config.

> However, if I start the container half unprivileged (starting
> as root but using uid/gid mapping) it seems to work. So probably
> that is the way to go here...
>
> Not ideally, but more secure then pure docker on the hardware...

Were you able to start the container? AFAIK you shouldn't be able to.
It's good if you can.

Another note from my experience: if you use zfs as container storage,
you need additional configuration for performance, as docker will use
the vfs driver by default instead of overlay/aufs. ext4/xfs/btrfs should
be fine as-is, though.
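
One hedged workaround sketch is to back /var/lib/docker with a non-zfs
filesystem so docker can pick a faster driver (names and paths are
examples only, ownership/idmap details omitted, untested as written):

# /srv/ext4/c1-docker is assumed to be an ext4-backed directory on the host
lxc config device add c1 docker-lib disk source=/srv/ext4/c1-docker path=/var/lib/docker
lxc restart c1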

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Error: websocket: close 1006 (abnormal closure): unexpected EOF

2019-09-19 Thread Fajar A. Nugraha
On Fri, Sep 20, 2019 at 4:14 AM Tomasz Chmielewski  wrote:
>
> Ubuntu 18.04, lxd installed from snap.
>
> Very often, a "lxc shell container" or "lxc exec container some-command"
> session gets interrupted with:
>
> Error: websocket: close 1006 (abnormal closure): unexpected EOF
>
>
> I suppose this happens when lxd snap gets auto-updated (i.e. today, from
> lxd ver 3.17, rev. 11964 to rev. 11985).
>
>
> This is quite annoying and leads to various errors, including data loss:
>
> - some long-running jobs within "lxc shell container" console get
> interrupted when this happens
>
> - script/jobs/builds running as "lxc exec container some-command" also
> get interrupted when this happens
>
>
> Is there a way to prevent this?

If you're asking "how to make your jobs run uninterrupted", AFAIK the
best way might be either:
- use ssh (with keys) to connect from the host, or
- wrap your jobs with something that can run in the background.

For example, using "screen" and "script", you can do something like
lxc exec my-container -- screen -dmS my-program-1 script -f
/path/to/my-program-1.log -c "/path/to/my-long-running-program.sh arg1
arg2"

You can attach to its session using "screen -r my-program-1" (e.g. in
case you need to intervene or type some input manually), and all
output will be logged in my-program-1.log.


If you're asking 'how to keep an "lxc exec" session running when lxd is
restarted', then it's not possible.

If you're asking 'how to prevent snapd from auto-updating lxd', then
AFAIK there's no official way to do this permanently. It's unlikely it
will be implemented either, due to snap's design. The closest hack you
can get is probably
https://discuss.linuxcontainers.org/t/disable-snap-auto-refresh-immediately/5333

You could probably also try
https://discuss.linuxcontainers.org/t/disable-snap-auto-refresh-immediately/5333/6
and use something like "snap install lxd --channel=3.17/stable". That
should at least lock you to lxd 3.17, limiting restarts to bug fixes
only, and not automatically updating to 3.18+.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Snashot left behind after deleting container?

2019-09-18 Thread Fajar A. Nugraha
On Wed, Sep 18, 2019 at 10:15 PM Lai Wei-Hwa  wrote:

> I don't see it listed when using:* lxc storage volume list*
>
> But it does appear to be a snapshot. How was this generated? Why is it
> there? I see others that have similar naming conventions (ending in a
> number string that I didn't create).
>


So you're asking "what is on my system" to strangers on the internet,
rather than asking your (previous) sysadmin? Right ...



> How do I properly remove this without causing any unintended consequences?
>
>
Quick google search points to
https://ubuntu.com/blog/lxd-2-0-your-first-lxd-container (look for
"Snapshot management")
https://discuss.linuxcontainers.org/t/lxd-3-8-has-been-released/3450
https://discuss.linuxcontainers.org/t/lxd-3-11-has-been-released/4245

Newer lxd versions put snapshots under a different directory (e.g.
/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/), so
my guess is you're probably running an old version (2.x?).

Also, deleting a container should also delete its snapshots (tested on lxd
3.17). There is a "snapshots.pattern" container configuration option, but it
won't help much, since in your case you already deleted the container. So
(again) my guess is the snapshot was created by an additional tool external
to lxd (a manual or scheduled btrfs snapshot, perhaps?), in which case your
(former) sysadmin should know more.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Security gain: Start Unpriviledged container as root or as regular user?

2019-08-19 Thread Fajar A. Nugraha
On Sun, Aug 18, 2019 at 5:36 PM Georg Gast  wrote:

> Hi,
>
> i use currently unprivileged lxc containers on debian buster started as
> root. I use for every container a separate set of uid/gids.
>
>


> Debian Buster uses LXC 3.1.0
>
> Is in this setup any security gained, if the containers are started as a
> separate user different that root on the host?
>
>

In general, yes. It should at least protect you from possible security
issues in lxc-monitor.

However, even if you do that, IIRC some processes still need to run as root
(or with a suid binary), e.g. lxcfs and lxc-user-nic. So you'd still be
vulnerable if there are security issues in those processes.



> I would prefer to start them as root from /var/lib/lxc as a simple
> lxc.auto.start = 1 let them be started at system boot.
>
>
Generally you'd choose a mix between acceptable levels of ease,
performance, and security.

Personally, for your use case, instead of using lxc directly, I recommend
you install snapd (and lxd from the snap package) or build lxd yourself (if
you don't want to use snap). Use a suitable storage backend (e.g.
zfs/btrfs/lvm). Then enable security.idmap.isolated. This way you still get
separate uid/gids per container while enabling automation for some container
administration processes (e.g. assigning uid/gids, autostart, copying/backing
up containers, etc.).

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Unprivileged account(s)

2019-07-24 Thread Fajar A. Nugraha
On Thu, Jul 25, 2019 at 6:30 AM Narcis Garcia  wrote:

> Hello, I've been creating LXC containers in a dedicated user account.
> I need to know if this is a good practice, instead of dedicating
> different user account per each unprivileged container.
>
>
It depends on what you're trying to achieve.

LXD does the same thing by default, but can be easily changed to
per-container mapping. See
https://stgraber.org/2017/06/15/custom-user-mappings-in-lxd-containers/

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] limits.memory - possible to set per group of containers?

2019-06-17 Thread Fajar A. Nugraha
On Tue, Jun 18, 2019 at 7:47 AM Tomasz Chmielewski  wrote:

> Let's say I have a host with 32 GB RAM.
>
> To make sure the host is not affected by any weird memory consumption
> patterns, I've set the following in the container:
>
>limits.memory: 29GB
>
> This works quite well - where previously, several processes with high
> memory usage, forking rapidly (a forkbomb to test, but also i.e. a
> supervisor in normal usage) running in the container could make the host
> very slow or even unreachable - with the above setting, everything (on
> the host) is just smooth no matter what the container does.
>
> However, that's just with one container.
>
> With two (or more) containers having "limits.memory: 29GB" set - it's
> easy for each of them to consume i.e. 20 GB, leading to host
> unavailability.
>
> Is it possible to set a global, or per-container group "limits.memory:
> 29GB"?
>
> For example, if I add "MemoryMax=29G" to
> /etc/systemd/system/snap.lxd.daemon.service - would I achieve a desired
> effect?
>
>
>

You could probably just use nested lxd instead:
https://stgraber.org/2016/12/07/running-snaps-in-lxd-containers/

Set the outer container's memory limit to 29GB, and put the other
containers inside it.
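
A hedged sketch of that layout (the container name "outer" is a placeholder):

lxc config set outer security.nesting true
lxc config set outer limits.memory 29GB
# then install lxd inside "outer" and create the real containers there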

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] LXC-3.1 - console disfunctional

2019-05-29 Thread Fajar A. Nugraha
On Wed, May 29, 2019 at 10:37 PM  wrote:

> Hi Oliver!
>
> Thanks, but I am using Debian, Buster!
>
> Just to see, if this depends on Buster, I created
> a new container using Stretch, but it behaves the same.
>
> But there error at the end of the createion process:
>
> update-rc.d: error: cannot find a LSB script for checkroot.sh
> update-rc.d: error: cannot find a LSB script for umountfs
> update-rc.d: error: cannot find a LSB script for hwclockfirst.sh
> Creating SSH2 RSA key; this may take some time ...
> 2048 SHA256:SYJi57SfMAjyplQUSjvkHGooFEooE7yQmQBM/Vzgwcw root@medio-rep
> (RSA)
> Creating SSH2 ECDSA key; this may take some time ...
> 256 SHA256:MHD1GNwlk1EpKOLXKdE6nuIXDm7FgdwVIDXD/Ejcm90 root@medio-rep
> (ECDSA)
> Creating SSH2 ED25519 key; this may take some time ...
> 256 SHA256:cNmkkLLAFbCvJR0FBL9VL020dMbcvLd0aI4mdeQSFRI root@medio-rep
> (ED25519)
> invoke-rc.d: could not determine current runlevel
> invoke-rc.d: policy-rc.d denied execution of start.
>
> They don’t tell me, if this is part of the reason.
> The mentioned scripts are not on my install.
>
>
That looks like the debian template. Try the download template instead.

 # DOWNLOAD_KEYSERVER=ipv4.pool.sks-keyservers.net. lxc-create -n
buster-test -t download -- -d debian -r buster -a amd64
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a Debian buster amd64 (20190529_05:24) container.

To enable SSH, run: apt install openssh-server
No default root or user password are set by LXC.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] not allowed to change kernel parameters inside container

2019-05-28 Thread Fajar A. Nugraha
On Wed, May 29, 2019 at 10:44 AM Saint Michael  wrote:

> The Achilles' heel is the type of CPU. I had to recompile my app once I
> moved it to an older CPU. Nothing is portable 100%.
> I guess nothing allows you to get rid of the developer at the end of the
> day.
>

If you compile it yourself, you should be able to pick the oldest cpu to
support and use appropriate optimization options for that cpu:
https://stackoverflow.com/a/10650010
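
As a hedged example of such a flag (the baseline here is arbitrary; pick
whatever matches your oldest target CPU):

# build for the generic x86-64 baseline so the binary also runs on older CPUs
gcc -O2 -march=x86-64 -mtune=generic -o myapp myapp.c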

If for whatever reason you can't modify the compile flags, I believe
virt-manager/kvm has the option to choose what CPU to present to the guest
(it can pretend to be an older CPU). So you could probably set kvm to
present the oldest CPU you want to support, and compile it there.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] not allowed to change kernel parameters inside container

2019-05-28 Thread Fajar A. Nugraha
On Tue, May 28, 2019 at 8:18 PM Saint Michael  wrote:

> Thanks for the F grade.
> In telecommunications, there is a special kind of software apps called
> switches, which actually involve dozens of apps, scripts, etc. That kind of
> complexity is only packageable in a container.
>

It's a matter of choice. AT&T seems to go with VMs and openstack:
https://www.theregister.co.uk/2016/01/08/att_expanding_sdn/

That being said, https://anbox.io/ does something similar with you: they
package complex apps using lxc. In their case, they use snaps for
distribution, and then use lxc internally.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] not allowed to change kernel parameters inside container

2019-05-27 Thread Fajar A. Nugraha
On Tue, May 28, 2019 at 12:39 PM Saint Michael  wrote:

> This
> "host and container can't have services run on the same port (e.g. if you
> want sshd on both host and container, you need to change the listening port
> for one of them)"
> is untrue.
> each container in my case has a different IP address, the host has another
> one, and I run SSHD inside each container just fine.
>
>
That is indeed the case for a normal container setup. However, you repeatedly
said you want to be able to set net.core.rmem_max (and friends) from inside
the container, which requires a non-standard setup.

If you want to be able to do that from inside the container, you need the
container to share host networking (lxc.net.0.type = none). It comes with
its own consequences, thus the warnings above.

If you want to keep separate IPs for the host and container, then you
can't set net.core.rmem_max from inside the container. However, as someone
pointed out earlier, you can simply set up passwordless ssh and have the
container set it on the host via ssh during boot.
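
A hedged sketch of that last option (HOST_IP is a placeholder; key-based
ssh from the container to the host is assumed to be set up already):

# run from inside the container at boot, e.g. from a oneshot init script
ssh root@HOST_IP 'sysctl -w net.core.rmem_max=67108864 net.core.wmem_max=33554432'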

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] not allowed to change kernel parameters inside container

2019-05-27 Thread Fajar A. Nugraha
On Mon, May 27, 2019 at 8:11 PM Saint Michael  wrote:

> I thought I did start the containers as privileged:
>
> lxc.include = /usr/share/lxc/config/ubuntu.common.conf
> lxc.mount.auto=
> lxc.mount.auto=proc:rw sys:rw cgroup:rw
> lxc.apparmor.profile=unconfined
> lxc.tty.max = 10
> lxc.pty.max = 1024
> lxc.cgroup.devices.allow = c 1:3 rwm
> lxc.cgroup.devices.allow = c 1:5 rwm
> lxc.cgroup.devices.allow = c 5:1 rwm
> lxc.cgroup.devices.allow = c 5:0 rwm
> lxc.cgroup.devices.allow = c 4:0 rwm
> lxc.cgroup.devices.allow = c 4:1 rwm
> lxc.cgroup.devices.allow = c 1:9 rwm
> lxc.cgroup.devices.allow = c 1:8 rwm
> lxc.cgroup.devices.allow = c 136:* rwm
> lxc.cgroup.devices.allow = c 5:2 rwm
> lxc.cgroup.devices.allow = c 254:0 rwm
> lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
> lxc.cgroup.devices.allow = b 7:* rwm# loop*
> lxc.cgroup.devices.allow = c 10:229 rwm #fuse
> lxc.cgroup.devices.allow = c 10:200 rwm #docker
> lxc.cgroup.devices.allow= a
> lxc.cap.drop=
> lxc.cgroup.devices.deny=
> lxc.autodev= 1
> lxc.hook.autodev = sh -c 'mknod ${LXC_ROOTFS_MOUNT}/dev/fuse c 10 229'
>


Following Stephane's suggestion works on my test VM. You didn't follow it,
thus it didn't work.

###
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = x86_64

# Container specific configuration
lxc.rootfs.path = dir:/var/lib/lxc/c7-ul/rootfs
lxc.uts.name = c7-ul

lxc.net.0.type = none
lxc.mount.auto=
lxc.mount.auto=proc:rw sys:rw cgroup:rw
lxc.apparmor.profile=unconfined
###

###
c7-ul / # sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/net.conf ...
net.core.rmem_max = 67108864
net.core.wmem_max = 33554432
net.core.rmem_default = 31457280
net.core.wmem_default = 31457280
* Applying /etc/sysctl.conf ...

c7-ul / # cat /proc/sys/net/core/rmem_max
67108864
###


Of course, as warned earlier, host networking brings along some quirks. For
instance:
- host and container can't have services running on the same port (e.g. if you
want sshd on both host and container, you need to change the listening port
for one of them)
- do not configure networking in the container (ONBOOT=no should be enough
in your container's eth config)
- absolutely do not run "reboot", "init 6", or "poweroff" in the container.
At the very least, it will cause the host's eth0 to go down. "reboot -f" in
the container should work nicely though.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] not allowed to change kernel parameters inside container

2019-05-26 Thread Fajar A. Nugraha
On Sun, May 26, 2019 at 9:18 AM Saint Michael  wrote:

> I am fine with having full interaction with the host. The host does not do
> anything, it is like a glove for my app, which uses UDP very intensely,
> like 500 Mbits per second. I need to fine-tune all its parameters.
>
>
>
> On Sat, May 25, 2019 at 9:52 PM Stéphane Graber 
> wrote:
>
>> If your container doesn't need isolated networking, in theory using the
>> host namespace for the network would cause those to show back up, but
>> note that sharing network namespace with the host may have some very
>> weird side effects (such as systemd in the container interacting with
>> the host's).
>>
>>
What distro are you using for the containers?

If both the container and host are using systemd, then as Stephane pointed
out, bad things might happen when you use host networking.

If your container is using something different (e.g. devuan, which uses
sysvinit), or if the container is using a custom init (e.g. an init that
basically only runs your app, like what is usually done with docker, or
supervisord), then it should be fine.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Looking for LXD Container with AWS CDN Experience?

2019-03-28 Thread Fajar A. Nugraha
On Mon, Mar 25, 2019 at 3:31 AM Ray Jender  wrote:

> If there is anyone experienced with using the Amazon Cloudfront  with an
> LXD container, I could really use a little help!
>
> Please let me know.
>
>

Did you also write this?
https://serverfault.com/questions/959216/aws-cloudfront-help-please

From the limited info you provided, it really doesn't matter whether you
use LXD or not. What matters is "how to use AWS Cloudfront to distribute a
live HLS stream". You might have better luck asking them or their forum.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] future of lxc/lxd? snap?

2019-02-25 Thread Fajar A. Nugraha
On Mon, Feb 25, 2019 at 5:20 PM Stéphane Graber 
wrote:

> snapd + LXD work fine on CentOS 7, it's even in our CI environment, so
> presumably the same steps should work on RHEL 7.
>
>
Awesome!

> In the past I've built private RPMs for lxd on centos. It became a hassle
> though as (for example) I need to port additional packages as well. And I
> needed to change the kernel to a newer one, unsupported by centos. But it
> works.
> >
> > So if you're willing to build from source, it should still work.
>

I forgot to mention that this was on an ancient centos 6, not 7 :)

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] future of lxc/lxd? snap?

2019-02-25 Thread Fajar A. Nugraha
On Mon, Feb 25, 2019 at 3:15 PM Harald Dunkel 
wrote:

> On 2/25/19 4:52 AM, Fajar A. Nugraha wrote:
> >
> > snapcraft.io <http://snapcraft.io/> is also owned by Canonical.
> >
> > By using lxd snap, they can easly have lxd running on any distro that
> already support snaps, without having to maintain separate packages.
> >
>
> The problem is that there is no standard for all "major" distros,
> as this discussion shows:
>
> https://www.reddit.com/r/redhat/comments/9lbm0c/snapd_for_rhel/
>
>
You mean "RHEL doesn't have snapd"? You'd have to ask redhat then.


> Debian already has an excellent packaging scheme.


Sure.

The question now is: is anybody willing to maintain Debian lxd packages?


> The RPM world
> doesn't follow snapd, as it seems.


Really?
https://docs.snapcraft.io/installing-snap-on-fedora/6755


> And if you prefer your favorite
> tool inside a container you can find docker images everywhere.
>
> A few years ago compatibility was achieved on source code level.
> Sorry to say, but you lost that for lxd. And snaps are not a
> replacement.
>
>
In the past I've built private RPMs for lxd on centos. It became a hassle
though, as (for example) I needed to port additional packages as well. And I
needed to change the kernel to a newer one, unsupported by centos. But it
works.

So if you're willing to build from source, it should still work.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] future of lxc/lxd? snap?

2019-02-24 Thread Fajar A. Nugraha
On Sat, Feb 23, 2019 at 9:46 PM Richard Hector 
wrote:

> Hi all,
>
> I see that lxd in ubuntu cosmic and disco is a transitional package for
> snap - I see that lxd can be used for snap packages, but they're not the
> same thing, right?
>
>
https://lists.ubuntu.com/archives/ubuntu-devel/2018-August/040455.html
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1788040


> And Debian buster (even sid) still doesn't have lxd at all.
>
>
They have snapd, which you can then use to install lxd.


> Is lxd not the future of lxc after all? At least in debian-based
> distros? Or is it expected (by ubuntu) that we only use lxc for snap
> packages?
>
>
You mean "use lxc from snap packages"?
Read the two links above.


> I'm currently using lxc on debian, but wondering what happens next ...
>
> This all seems odd since the linuxcontainers site says the project is
> sponsored by Canonical ...
>
>
snapcraft.io is also owned by Canonical.

By using the lxd snap, they can easily have lxd running on any distro that
already supports snaps, without having to maintain separate packages.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Its possible to add a xen virtual device to an lxc container?

2019-01-24 Thread Fajar A. Nugraha
On Fri, Jan 25, 2019 at 9:27 AM Márcio Castro 
wrote:

> Hi,
>
> My Ubuntu is 18.04, with lxc version 3.0.3.
>
> I installed several Oracle products (Database, Weblogic, SOA, ODI) with
> success in Linux Contaners, and i'm want to try Oracle RAC.
>
> To do that, I want to create a virtual disc, but i cant attach this to the
> container.
>
> I created a container named "ol7testedisc" with Oracle Linux and done the
> following:
>
> mmoura@mmoura-W350STQ-W370ST:~$ sudo dd if=/dev/zero of=/dev/xvda bs=1M
> count=4096
>
> 4096+0 registros de entrada
>
> 4096+0 registros de saída
>
> 4294967296 bytes (4,3 GB, 4,0 GiB) copiados, 1,01326 s, 4,2 GB/s
>
>
Quick guess, /dev/xvda did not exist before you ran dd?


> mmoura@mmoura-W350STQ-W370ST:/dev$ ls -l xvda
>
> -rw-r--r-- 1 root root 4294967296 jan 25 00:01 xvda
>
> mmoura@mmoura-W350STQ-W370ST:/dev$ lxc config device add ol7testedisc
> xvda unix-block path=/dev/xvda
> Error: Invalid devices: Not a device
>
>
Do you know the difference between a file and a block device? If my guess is
correct, you have just created a regular file named /dev/xvda. That is not a
Xen virtual device.
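
If you want an actual block device backed by a file, one option is a loop
device (the file name, size and loop device number below are just examples;
the loop device number will differ on your system):

# dd if=/dev/zero of=/root/rac-disk.img bs=1M count=4096
# losetup -f --show /root/rac-disk.img
/dev/loop10
# lxc config device add ol7testedisc racdisk unix-block path=/dev/loop10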

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxd access from host to container rootfs

2018-12-09 Thread Fajar A. Nugraha
On Wed, Dec 5, 2018 at 9:03 PM Ingo Baab  wrote:

> Hello All,
> how can I access the LXD/LXC containers rootfs from the host system?
> (if I am using ubuntu18.04 with snap lxc --version 3.7 on loopback-ZFS)
>
> On other (real ZFS-based and U16.04) server I can access:
>
>  "/var/lib/lxd/containers/{$containername}/rootfs/"
>
> Any hints?
> Ingo
>
>

I believe newer versions of lxd:
- keep the zfs datasets unmounted by default
- only mount them on the host while the container is starting
- unmount them again from the host afterwards, so they are only mounted inside
the container

You should be able to do this on the host:
- zfs list
- zfs mount ...

for example, in my case the dataset name is something
like data/lxd/containers/c1, and it can be mounted (with 'zfs mount')
on /var/snap/lxd/common/lxd/storage-pools/default/containers/c1.
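
i.e. roughly (the dataset name is from my setup; yours will differ):

# zfs list -o name,mountpoint,mounted | grep containers
# zfs mount data/lxd/containers/c1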

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Does cpu cgroup has been enabled in lxc/lxd

2018-11-02 Thread Fajar A. Nugraha
On Fri, Nov 2, 2018 at 8:44 AM, kemi  wrote:

>
> thx for your question.
> In our case, our customers want to run android games within containers on
> cloud.
>

It might be possible for you to adjust https://anbox.io/ to run on lxd
instead of lxc. YMMV.

> There are two problems we have known.
> The first one occurs during Android OS boot, the coldboot of Android
> requires to
> write uevent file in /sys, this will trigger an uevent broadcast to all of
> listeners
> (udev daemons) in user space (this uevent is sent from kernel via
> netlink),
> with the increase of container number (200+), we found the boot latency
> has
> reached 1~2 mins. And latency would be intolerable when the number reaches
> 500.
>
>
I don't see udev running inside its lxc container, so perhaps they've
managed to solve that issue.


> The second one occurs when an app in container begins to run, it will read
> /sys/devices/system/cpu/online file to get avilable cpu number before
> creating
> threads accordingly. Then. the problem is,  sysfs now is shared with host,
> it will get the CPU number equals to host thread number even if the cpu
> number
> of container is limited.
>
>
If it simply reads that file, you could bind-mount a text file over it.
Similar to what lxcfs does, but simpler.
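
Untested idea, but something along these lines might work (the file path and
container name are made up):

# echo 0 > /var/lib/lxd/cpu-online-1
# lxc config device add c1 cpuonline disk source=/var/lib/lxd/cpu-online-1 path=/sys/devices/system/cpu/online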

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Does cpu cgroup has been enabled in lxc/lxd

2018-11-01 Thread Fajar A. Nugraha
On Thu, Nov 1, 2018 at 3:04 PM, kemi  wrote:

>
> The reason why I have not tried it is there is no available android image
> provided on existed
> images server for LXD container. Do you know something about that?
>

I don't believe anybody has successfully run Android in lxd yet (success as
in "you can use vnc or similar to view the screen"). Perhaps porting a
working docker setup is a good start (I assume this is not you or your
team; apologies if you're already familiar with it):
https://github.com/butomo1989/docker-android

There were some hints on debugging android on lxd in the list archive:
https://discuss.linuxcontainers.org/t/how-to-debug-container-boot-on-lxd/849
, which might be relevant to what you want.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Does cpu cgroup has been enabled in lxc/lxd

2018-11-01 Thread Fajar A. Nugraha
On Thu, Nov 1, 2018 at 2:16 PM, kemi  wrote:

>
>
> > On 2018/11/1 2:53 PM, Fajar A. Nugraha wrote:
> > On Thu, Nov 1, 2018 at 1:38 PM, kemi  wrote:
> >
> >>>> g) and h) read files from /proc, not cgroup. You need lxcfs. You
> should
> >>>> already have that on ubuntu though.
> >>>>
> >>>>
> >>
> >> /proc/cpuinfo also matches the expected result.
> >> However, it seems that sysfs in container  still shares with host /sys
> >> file system.
> >> Right?
> >>
> >>
> >>
> > Correct. See https://linuxcontainers.org/lxcfs/introduction/
> >
>
> OK, then I have a question on scalability and security issues on running
> multiple containers.
>
> Background: Our customers hope to run hundreds or even thousands of
> containers in their production environment.
>
> Sharing sysfs of containers with host sysfs in lxc/lxd may have:
> a) security issue.
> If a malicious program in a container changes a sensitive file in /sys,
> e.g. reduce CPU frequency, does it really works? Does it affect other
> running containers?
>
>

Why don't you try it and see :)

Even a privileged container should get something like this:

# echo 100 > /sys/devices/system/cpu/cpufreq/policy1/scaling_min_freq
-su: /sys/devices/system/cpu/cpufreq/policy1/scaling_min_freq: Read-only
file system


There were some known security issues with /sys in the past (not cpufreq
though), but even back then it should be a non-issue for the default lxd
containers, which are unprivileged.



b) Scalability issue.
> E.g. During launching a ubuntu OS(not kernel) or Android OS in a
> container,it usually use udev/ueventd
> to manage their device. This device manager daemon will read or write
> uevent file in /sys, the kernel
> then broadcast a uevent to all the listeners(udev daemon) via netlink, if
> there are already hundreds
> of containers in the system, all of udev daemons need to deal with it, it
> would lead to a long boot
> latency which we have observed in docker.
>
>
LXD containers don't use udev.



> Anyway to fix that?
>


Try it, and if you find anything wrong, ask.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Does cpu cgroup has been enabled in lxc/lxd

2018-10-31 Thread Fajar A. Nugraha
On Thu, Nov 1, 2018 at 1:38 PM, kemi  wrote:

> >> g) and h) read files from /proc, not cgroup. You need lxcfs. You should
> >> already have that on ubuntu though.
> >>
> >>
>
> /proc/cpuinfo also matches the expected result.
> However, it seems that sysfs in container  still shares with host /sys
> file system.
> Right?
>
>
>
Correct. See https://linuxcontainers.org/lxcfs/introduction/

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Does cpu cgroup has been enabled in lxc/lxd

2018-10-31 Thread Fajar A. Nugraha
On Thu, Nov 1, 2018 at 8:55 AM, kemi  wrote:

> Hi, Everyone
>I am new comer of LXC/LXD community, and want to run a container on a
> limited cpu set.
>
>   The followings are my steps:
>   a) lxd init
>   b) lxc launch Ubuntu:18.04 first
>   c) lxc stop first
>   d) lxc config set first limits.cpu 0  // set container running on CPU 0
>

I'm not sure, but I believe "0" here means all CPUs, not pinning to CPU 0.

Try changing this to "1", "0-0", and "1-2". Observe the difference.
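
i.e. something like this (the comments reflect my understanding of the
semantics; verify with nproc):

# lxc config set first limits.cpu 1     -> one CPU, lxd picks which
# lxc config set first limits.cpu 0-0   -> pin to CPU 0 only
# lxc config set first limits.cpu 1-2   -> pin to CPUs 1 and 2
# lxc exec first -- nproc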


>   e) lxc start first
>   f) lxc exec first -- bash
>   g) nproc   // the expected result would be 1, however, it still equals
> to cpu number of host
>   h) ls /sys/devices/system/cpu   // the expected result should only
> include cpu0 directory, however, it's not
>
>

g) and h) read files from /proc, not cgroup. You need lxcfs. You should
already have that on ubuntu though.



> So, it seems that CPU cgroup has not been enabled in LXC/LXD, right?  The
> version of lxc is 2.0.11 on Ubuntu 16.04.
> Anyone can help me on that, thx very much.
>

If this is a new install, I highly suggest you just switch to ubuntu 18.04
+ lxd-3 host.
If you simply want to have "correct" /proc entries, make sure lxcfs is
installed, and then restart lxd (if needed).
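
A quick way to check whether lxcfs is actually in use (the unit name and exact
output may vary by distro):

# systemctl status lxcfs
# lxc exec first -- grep lxcfs /proc/mounts
-> inside the container you should see fuse.lxcfs mounts over files like /proc/cpuinfo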

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd under stretch

2018-09-24 Thread Fajar A. Nugraha
On Tue, Sep 25, 2018 at 1:34 AM, Pierre Couderc  wrote:

>
>
> On 09/24/2018 10:20 AM, Andrey Repin wrote:
>
>>
>> If you are asking such questions, you definitely should not build anything
>> yourself.
>>
>> Thank you for you efficient  answer that I definitely intend not to
> follow ;)
> Maybe my question is not very subtle. But you could answer me something
> like
> http://archive.ubuntu.com/ubuntu/pool/main/l/lxc/lxc_3.0.1-
> 0ubuntu1~18.04.2.debian.tar.xz
> or at least confirm me if it is a correct answer ?
>
>
>
I'd recommend you try lxd snap first instead of building yourself.
https://packages.debian.org/snapd
https://snapcraft.io/lxd

If it doesn't fit your requirement and you still need to build it yourself,
try
https://github.com/lxc/lxd#installing-lxd-from-source
https://github.com/lxc/lxd/releases/tag/lxd-3.5

If you've followed the above and still have problems, it'd help if you
write in detail what those problems are (i.e. not just "instabilities")

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Error: failed to begin transaction: database is locked

2018-09-12 Thread Fajar A. Nugraha
On Wed, Sep 12, 2018 at 9:33 PM, Kees Bakker  wrote:

> Hey,
>
> This with a LXD/LXC on a Ubuntu 18.04 server. Storage is done
> with LVM. It was installed as a cluster with just one node.
> It was also added as remote for three other LXD servers (all Ubuntu 16.04
> and LXD 2.0.x). These old servers have BTRFS storage.
>
>
Only added as remote? not lxd clustering (
https://lxd.readthedocs.io/en/latest/clustering/)?



> Suddenly I cannot do any lxc command anymore. They all give
>
> Error: failed to begin transaction: database is locked
>
> In /var/log/lxd/lxd.log it prints the following message every 10 seconds
>
> lvl=warn msg="Failed to get current raft nodes: failed to fetch raft
> server address: failed to begin transaction: database is locked"
> t=2018-09-12T16:28:44+0200
>
> Extra information. This afternoon I have upgraded one of the "old" servers
> to LXD 3.0 (from xenial-backports). This was triggered by the problems we
> have with a container in ERROR state and a kworker at 100% cpu load.
>


Do package versions on upgraded servers match? i.e. all lxd, liblxc1, etc
all 3.0 from xenial-backports, without any 2.x or ppa packages mixed in?

Have you restart lxd on the upgraded server?

If you temporarily move ~/.config/lxc somewhere else (to "remove" all the
remotes, among other things), does the lxc command work?
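
For example (adjust to your setup):

# dpkg -l | grep -E 'lxc|lxd'
# systemctl restart lxd
# mv ~/.config/lxc ~/.config/lxc.bak
# lxc list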

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to recover from ERROR state

2018-09-12 Thread Fajar A. Nugraha
On Wed, Sep 12, 2018 at 4:08 PM, Kees Bakker  wrote:

> On 12-09-18 10:51, Fajar A. Nugraha wrote:
>
> On Wed, Sep 12, 2018 at 3:14 PM, Kees Bakker  wrote:
>
>> On 11-09-18 21:56, Andrey Repin wrote:
>> > Greetings, Kees Bakker!
>> >
>> >> ii  lxc-common 2.0.8-0ubuntu1~16.04.2  amd64Linux
>> Containers userspace tools (common tools)
>> >> ii  lxcfs  2.0.8-0ubuntu1~16.04.2  amd64FUSE based
>> filesystem for LXC
>> >> ii  lxd2.0.11-0ubuntu1~16.04.4 amd64Container
>> hypervisor based on LXC - daemon
>> >> ii  lxd-client 2.0.11-0ubuntu1~16.04.4 amd64Container
>> hypervisor based on LXC - client
>> > Upgrade from PPA.
>> > add-apt-repository ppa:ubuntu-lxc/stable
>> >
>> > ii  lxc-common 2.1.1-0ubuntu1 amd64  Linux Containers
>> userspace tools (common t
>> > ii  lxc-templates  2.1.1-0ubuntu1 amd64  Linux Containers
>> userspace tools (template
>> > ii  lxc1   2.1.1-0ubuntu1 amd64  Linux Containers
>> userspace tools
>> > ii  lxcfs  2.0.8-1ubuntu2 amd64  FUSE based
>> filesystem for LXC
>> >
>> >
>>
>> Hmm. That PPA does not have liblxc1 2.1.1, but 3.0.1
>>
>> # apt list --upgradable
>> Listing... Done
>> liblxc1/xenial 3.0.1-0ubuntu1~18.04.2~ubuntu16.04.1~ppa1 amd64
>> [upgradable from: 2.0.8-0ubuntu1~16.04.2]
>> libseccomp2/xenial 2.3.1-2.1ubuntu3~ubuntu16.04.1~ppa1 amd64 [upgradable
>> from: 2.3.1-2.1ubuntu2~16.04.1]
>> lxc-common/xenial 2.1.1-0ubuntu1~ubuntu16.04.1~ppa1 amd64 [upgradable
>> from: 2.0.8-0ubuntu1~16.04.2]
>> lxcfs/xenial 3.0.1-0ubuntu2~18.04.1~ubuntu16.04.1~ppa1 amd64 [upgradable
>> from: 2.0.8-0ubuntu1~16.04.2]
>>
>> # apt policy liblxc1
>> liblxc1:
>>   Installed: 2.0.8-0ubuntu1~16.04.2
>>   Candidate: 3.0.1-0ubuntu1~18.04.2~ubuntu16.04.1~ppa1
>>   Version table:
>>  3.0.1-0ubuntu1~18.04.2~ubuntu16.04.1~ppa1 500
>> 500 http://ppa.launchpad.net/ubuntu-lxc/stable/ubuntu
>> xenial/main amd64 Packages
>>  3.0.1-0ubuntu1~16.04.2 100
>> 100 http://nl.archive.ubuntu.com/ubuntu xenial-backports/main
>> amd64 Packages
>>  *** 2.0.8-0ubuntu1~16.04.2 500
>> 500 http://nl.archive.ubuntu.com/ubuntu xenial-updates/main
>> amd64 Packages
>> 100 /var/lib/dpkg/status
>>  2.0.7-0ubuntu1~16.04.2 500
>> 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64
>> Packages
>>  2.0.0-0ubuntu2 500
>> 500 http://nl.archive.ubuntu.com/ubuntu xenial/main amd64
>> Packages
>>
>> I could upgrade to the 3.0 packages, but that would be more challenging.
>>
>
>
> 2.1 is not 'LTS' version, so it's expected that it won't be available
> anymore. Personally I'd upgrade to 3.0. But backup /var/lib/lxd (when ALL
> containers are stopped) beforehand.
>
>
> Upgrading to 3.0 gives me two options: 1) xenial-backports, 2) the
> suggested PPA (ppa:ubuntu-lxc/stable).
> Which one would you pick?
>
>

I'd go with xenial-backports. IIRC this is more tested and recommended
compared to the PPA (can't find the relevant email that points this out
off-hand though, sorry).
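
If I remember the package names right, that would be something like:

# apt update
# apt install -t xenial-backports lxd lxd-client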

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to recover from ERROR state

2018-09-12 Thread Fajar A. Nugraha
On Wed, Sep 12, 2018 at 3:14 PM, Kees Bakker  wrote:

> On 11-09-18 21:56, Andrey Repin wrote:
> > Greetings, Kees Bakker!
> >
> >> ii  lxc-common 2.0.8-0ubuntu1~16.04.2  amd64Linux
> Containers userspace tools (common tools)
> >> ii  lxcfs  2.0.8-0ubuntu1~16.04.2  amd64FUSE based
> filesystem for LXC
> >> ii  lxd2.0.11-0ubuntu1~16.04.4 amd64Container
> hypervisor based on LXC - daemon
> >> ii  lxd-client 2.0.11-0ubuntu1~16.04.4 amd64Container
> hypervisor based on LXC - client
> > Upgrade from PPA.
> > add-apt-repository ppa:ubuntu-lxc/stable
> >
> > ii  lxc-common 2.1.1-0ubuntu1 amd64  Linux Containers
> userspace tools (common t
> > ii  lxc-templates  2.1.1-0ubuntu1 amd64  Linux Containers
> userspace tools (template
> > ii  lxc1   2.1.1-0ubuntu1 amd64  Linux Containers
> userspace tools
> > ii  lxcfs  2.0.8-1ubuntu2 amd64  FUSE based
> filesystem for LXC
> >
> >
>
> Hmm. That PPA does not have liblxc1 2.1.1, but 3.0.1
>
> # apt list --upgradable
> Listing... Done
> liblxc1/xenial 3.0.1-0ubuntu1~18.04.2~ubuntu16.04.1~ppa1 amd64
> [upgradable from: 2.0.8-0ubuntu1~16.04.2]
> libseccomp2/xenial 2.3.1-2.1ubuntu3~ubuntu16.04.1~ppa1 amd64 [upgradable
> from: 2.3.1-2.1ubuntu2~16.04.1]
> lxc-common/xenial 2.1.1-0ubuntu1~ubuntu16.04.1~ppa1 amd64 [upgradable
> from: 2.0.8-0ubuntu1~16.04.2]
> lxcfs/xenial 3.0.1-0ubuntu2~18.04.1~ubuntu16.04.1~ppa1 amd64 [upgradable
> from: 2.0.8-0ubuntu1~16.04.2]
>
> # apt policy liblxc1
> liblxc1:
>   Installed: 2.0.8-0ubuntu1~16.04.2
>   Candidate: 3.0.1-0ubuntu1~18.04.2~ubuntu16.04.1~ppa1
>   Version table:
>  3.0.1-0ubuntu1~18.04.2~ubuntu16.04.1~ppa1 500
> 500 http://ppa.launchpad.net/ubuntu-lxc/stable/ubuntu xenial/main
> amd64 Packages
>  3.0.1-0ubuntu1~16.04.2 100
> 100 http://nl.archive.ubuntu.com/ubuntu xenial-backports/main
> amd64 Packages
>  *** 2.0.8-0ubuntu1~16.04.2 500
> 500 http://nl.archive.ubuntu.com/ubuntu xenial-updates/main amd64
> Packages
> 100 /var/lib/dpkg/status
>  2.0.7-0ubuntu1~16.04.2 500
> 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64
> Packages
>  2.0.0-0ubuntu2 500
> 500 http://nl.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
>
> I could upgrade to the 3.0 packages, but that would be more challenging.
>


2.1 is not 'LTS' version, so it's expected that it won't be available
anymore. Personally I'd upgrade to 3.0. But back up /var/lib/lxd (when ALL
containers are stopped) beforehand.
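
For example, a simple file-level copy (unit names and paths are for the deb
package; stop lxd and all containers first):

# systemctl stop lxd.service lxd.socket
# tar -czpf /root/lxd-backup-$(date +%F).tar.gz /var/lib/lxd
# systemctl start lxd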

FWIW, I'm more inclined to think your 'kworker' issue might be related to
btrfs instead of lxd, but it might be harder to debug that.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Where is stored the list of remote lxds ?

2018-08-25 Thread Fajar A. Nugraha
On Sat, Aug 25, 2018 at 5:40 PM, Pierre Couderc  wrote:

> Paying with lxd to understand it more (and because of a mysterious
> failure), I decide to reinit the whole lxd, as I delete the full
> /var/lib/lxd and excutes lxd init.
>
> So I am surprised that :
>
> lxd remote list
>
> finds me old remote lxds from the old installation...
>
> Are they stored elsewhere ?
>
>
Clients store the list of remotes (as well as the client certificate for that
user) in ~/.config/lxc/.

I believe the lxd daemon stores the list of authorized client certs under
/var/lib/lxd/database.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to copy "manually" a container ?

2018-08-23 Thread Fajar A. Nugraha
On Thu, Aug 23, 2018 at 2:38 PM, Pierre Couderc  wrote:

> On 08/23/2018 09:24 AM, Fajar A. Nugraha wrote:
>
> On Thu, Aug 23, 2018 at 2:07 PM, Pierre Couderc  wrote:
>
>> On 08/23/2018 07:37 AM, Tamas Papp wrote:
>>
>>>
>>> On 08/23/2018 05:36 AM, Pierre Couderc wrote:
>>>
>>>> If for any reason, "lxc copy" does not work, is it enough to copy
>>>> (rsync) /var/lib/lxd/containers/ to another lxd on another computer in
>>>> /var/lib/lxd/containers/ ?
>>>>
>>>
>>> Copy the folder (watch out rsync flags) to 
>>> /var/lib/lxd/storage-pools/default/containers/,
>>> symlink to /var/lib/lxd/containers and run 'lxd import'.
>>>
>>> Thank you very much. It nearlu worked.
>> Anyway, it fails (in this case) because :
>> Error: The storage pool's "default" driver "dir" conflicts with the
>> driver "btrfs" recorded in the container's backup file
>>
>
> If you know how lxd use btrfs to create the container storage (using
> subvolume?), you can probably create it manually, and rsync there.
>
> Or you can create another storage pool, but backed by dir (e.g. 'lxc
> storage create pool2 dir') instead of btrfs/zfs.
>
> Or yet another way:
> - create a new container
> - take note where its storage is (e.g. by looking at mount options, "df
> -h", etc)
> - shutdown the container
> - replace the storage with the one you need to restore
>
> --
> Fajar
>
> Thank you, I think to that.
> But what is sure is that my "old" container is labelled as btrfs and after
> rsync on a "non btrfs" volume, the btrfs label remains
>


You can edit backup.yaml to reflect the changes. Here's an example on my
system:

-> my default pool is on zfs
# lxc storage show default
config:
  source: HD/lxd
  volatile.initial_source: HD/lxd
  zfs.pool_name: HD/lxd
description: ""
name: default
driver: zfs
...


-> create a test container
# lxc launch images:alpine/3.8 test1
Creating test1
Starting test1

# df -h | grep test1
HD/lxd/containers/test1  239G  5.2M  239G   1%
/var/lib/lxd/storage-pools/default/containers/test1


-> copy it manually to a "directory" with rsync, then "lxd import". As
expected, it doesn't work.
# mkdir /var/lib/lxd/storage-pools/default/containers/test2

# rsync -a /var/lib/lxd/storage-pools/default/containers/test1/.
/var/lib/lxd/storage-pools/default/containers/test2/.

# sed -i 's/name: test1/name: test2/g'
/var/lib/lxd/storage-pools/default/containers/test2/backup.yaml

# lxd import test2

# lxc start test2
Error: no such file or directory
Try `lxc info --show-log test2` for more info


-> cleanup before next test
# rm -rf /var/lib/lxd/storage-pools/default/containers/test2

# lxc delete test2


-> now create a zfs dataset properly, mount it, and THEN rsync (or replace
the whole thing with 'zfs send | zfs receive') + lxd import. works.
# zfs create -o
mountpoint=/var/lib/lxd/storage-pools/default/containers/test2
HD/lxd/containers/test2

# rsync -a /var/lib/lxd/storage-pools/default/containers/test1/.
/var/lib/lxd/storage-pools/default/containers/test2/.

# sed -i 's/name: test1/name: test2/g'
/var/lib/lxd/storage-pools/default/containers/test2/backup.yaml

# lxd import test2

# lxc start test2

# lxc list test2
+---+-+---+--++---+
| NAME  |  STATE  |   IPV4| IPV6 |TYPE| SNAPSHOTS |
+---+-+---+--++---+
| test2 | RUNNING | 10.0.3.122 (eth0) |  | PERSISTENT | 0 |
+---+-+---+--++---+


-> cleanup again
# lxc stop --force test2

# lxc delete test2


-> try again, this time using a different storage pool ('dir'). MUCH more
complicated, but possible
# lxc storage create testpool dir source=/tmp/testpool
Storage pool testpool created

# mkdir -p /tmp/testpool/containers/test2

# rsync -a /var/lib/lxd/storage-pools/default/containers/test1/.
/tmp/testpool/containers/test2/.

# sed -i 's/name: test1/name: test2/g'
/tmp/testpool/containers/test2/backup.yaml

# sed -i 's/pool: default/pool: testpool/g'
/tmp/testpool/containers/test2/backup.yaml

-> edit /tmp/testpool/containers/test2/backup.yaml manually
change device to this:
###
  devices:
eth0:
  nictype: bridged
  parent: lxdbr0
  type: nic
root:
  path: /
  pool: testpool
  type: disk
###

and change pool info to this
###
pool:
  config:
source: /tmp/testpool
volatile.initial_source: /tmp/testpool
  description: ""
  name: testpool
  driver: dir
  used_by: []
  status: Created
 

Re: [lxc-users] How to copy "manually" a container ?

2018-08-23 Thread Fajar A. Nugraha
On Thu, Aug 23, 2018 at 2:07 PM, Pierre Couderc  wrote:

> On 08/23/2018 07:37 AM, Tamas Papp wrote:
>
>>
>> On 08/23/2018 05:36 AM, Pierre Couderc wrote:
>>
>>> If for any reason, "lxc copy" does not work, is it enough to copy
>>> (rsync) /var/lib/lxd/containers/ to another lxd on another computer in
>>> /var/lib/lxd/containers/ ?
>>>
>>
>> Copy the folder (watch out rsync flags) to 
>> /var/lib/lxd/storage-pools/default/containers/,
>> symlink to /var/lib/lxd/containers and run 'lxd import'.
>>
>> Thank you very much. It nearlu worked.
> Anyway, it fails (in this case) because :
> Error: The storage pool's "default" driver "dir" conflicts with the driver
> "btrfs" recorded in the container's backup file
>

If you know how lxd use btrfs to create the container storage (using
subvolume?), you can probably create it manually, and rsync there.

Or you can create another storage pool, but backed by dir (e.g. 'lxc
storage create pool2 dir') instead of btrfs/zfs.

Or yet another way:
- create a new container
- take note where its storage is (e.g. by looking at mount options, "df
-h", etc)
- shutdown the container
- replace the storage with the one you need to restore

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] bridged device's name

2018-08-20 Thread Fajar A. Nugraha
On Tue, Aug 21, 2018 at 8:40 AM, Mike Wright 
wrote:

> Hi all,
>
> Is there a way to set a network device's host side name?
>
> e.g. with lxc style configs:
>
> #myContainer
> lxc.net.0.type = veth
> lxc.net.0.veth.pair = host-side-name
> lxc.net.0.link = myBridge
>
>

Are you asking for the equivalent of lxc.net.0.veth.pair in lxd?

Try:
devices:
  eth0:
host_name: host-side-name
name: eth0
nictype: bridged
parent: myBridge
type: nic
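
Or, equivalently, from the CLI (container name c1 assumed):

# lxc config device add c1 eth0 nic nictype=bridged parent=myBridge name=eth0 host_name=host-side-name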

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers won't start under stretch-backport kernel reboot

2018-08-14 Thread Fajar A. Nugraha
On Wed, Aug 15, 2018 at 9:52 AM, Tony Lewis  wrote:

> Aug 15 11:40:50 server snap[6761]: lxd: error while loading shared
> libraries: liblxc.so.1: cannot open shared object file: No such file or
> directory
>
This is something to follow up on.

The library is present in what looks to be the right places in the snap
> directories, but not anywhere else:
>
> # find /snap -name liblxc.so.1 -print
> /snap/lxd/7651/lib/liblxc.so.1
> /snap/lxd/7792/lib/liblxc.so.1
> /snap/lxd/8011/lib/liblxc.so.1
>
>

How did you install lxd?

I followed https://stgraber.org/2016/12/07/running-snaps-in-lxd-containers/
and used "snap install lxd --edge". Perhaps you can try that, if you haven't already?

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers won't start under stretch-backport kernel reboot

2018-08-14 Thread Fajar A. Nugraha
On Tue, Aug 14, 2018 at 1:54 PM, Tony Lewis  wrote:

> Apologies in advance for the bump, but does anyone have an insights on
> this?
>
>
Did you install lxd before using source instead of snap?

=> lxd.service loaded active exitedLSB: Container hypervisor based on
LXC

You shouldn't have that when installing lxd from snap. If yes, I suggest
cleaning out all traces of it (e.g. in /bin, /usr, /usr/local/bin) to avoid
potential confusion.
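
e.g.:

# which -a lxd lxc
-> with a snap-only setup you'd expect to see only /snap/bin/lxd and /snap/bin/lxc
# dpkg -l | grep -E 'lxc|lxd'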


root  2452  1823  5 11:42 ?00:00:08 lxd --logfile
>> /var/snap/lxd/common/lxd/logs/lxd.log --group lxd
>>
>
What does /var/snap/lxd/common/lxd/logs/lxd.log say? Does it have any error?

My GUESS is that you have /usr/bin/lxd and /snap/bin/lxd, which interfere
with each other. If that's not it, then my next guess is that there's
probably some group issue, like
https://github.com/lxc/lxd/issues/1861#issuecomment-206507631 . In any case
lxd.log might have more info.

Another way to make sure the issue is reproducible is if you can try
similar setup (snap-lxd + stretch backport kernel) in a fresh environment
(e.g. in a VM).

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-11 Thread Fajar A. Nugraha
On Sat, Aug 11, 2018 at 10:05 PM, Fajar A. Nugraha  wrote:

> On Sat, Aug 11, 2018 at 1:37 PM, Pierre Couderc  wrote:
>
>> Trying to build lxd from sources, I get a message about sqlite3 missing,
>> and an invite to "make deps".
>>
>> But it fails too with :
>>
>>
>> No package 'sqlite3' found
>>
>> Consider adjusting the PKG_CONFIG_PATH environment variable if you
>> installed software in a non-standard prefix.
>>
>> Alternatively, you may set the environment variables sqlite_CFLAGS
>> and sqlite_LIBS to avoid the need to call pkg-config.
>> See the pkg-config man page for more details
>>
>>
>> And the man pkg-config is not clear to me...
>>
>>
> I believe the script basically needs this file:
> /usr/lib/pkgconfig/sqlite.pc
>
> On ubuntu 18.04, it's provided by libsqlite0-dev.
>
> If you install sqlite manually from source (e.g. you have it on
> /usr/local/lib/pkgconfig/sqlite.pc), you could probably do something like
> "export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig" (or wherever the file
> is located) before building lxc.
>
>

Sorry, it should be "/usr/lib/x86_64-linux-gnu/pkgconfig/sqlite3.pc" and
"libsqlite3-dev" on Ubuntu 18.04

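i.e. something like:

# apt install libsqlite3-dev
# pkg-config --cflags --libs sqlite3
-> should print compiler/linker flags instead of a "No package 'sqlite3' found" error
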
-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-11 Thread Fajar A. Nugraha
On Sat, Aug 11, 2018 at 1:37 PM, Pierre Couderc  wrote:

> Trying to build lxd from sources, I get a message about sqlite3 missing,
> and an invite to "make deps".
>
> But it fails too with :
>
>
> No package 'sqlite3' found
>
> Consider adjusting the PKG_CONFIG_PATH environment variable if you
> installed software in a non-standard prefix.
>
> Alternatively, you may set the environment variables sqlite_CFLAGS
> and sqlite_LIBS to avoid the need to call pkg-config.
> See the pkg-config man page for more details
>
>
> And the man pkg-config is not clear to me...
>
>
I believe the script basically needs this file:
/usr/lib/pkgconfig/sqlite.pc

On ubuntu 18.04, it's provided by libsqlite0-dev.

If you install sqlite manually from source (e.g. you have it on
/usr/local/lib/pkgconfig/sqlite.pc), you could probably do something like
"export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig" (or wherever the file is
located) before building lxc.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC container and Systemd

2018-08-10 Thread Fajar A. Nugraha
On Fri, Aug 10, 2018 at 5:12 PM, Goran  wrote:

> Your test-asuser.service works as intended. If I change the user and
> group to grafana it shows the same problems.
>
> # id grafana
> uid=207(grafana) gid=207(grafana) groups=207(grafana)
>
> # cat /etc/passwd
> ...
> grafana:x:207:207::/var/lib/grafana:/sbin/nologin
> ...
>
> cat /etc/group
> ...
> grafana:x:207:
> ...
>
>
Hmm ... not sure what's happening. Perhaps /var/lib/grafana doesn't exist?

Unless you get help from somebody more familiar with systemd, I'd just do
one of these:
- start as root in the systemd unit, then use runuser/su (this already
works), or
- delete the user (preserving its files), recreate it (possibly using "useradd
-m" to create a "normal" user with a home directory, just in case), and
chown grafana's files afterwards (rough sketch below).
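
A rough sketch of that second option (uid/gid taken from your earlier mail;
double-check paths and flags before running):

# userdel grafana
-> without -r, so /var/lib/grafana and the other files are kept
# groupadd -g 207 grafana
-> skip if userdel left the group in place
# useradd -m -u 207 -g grafana -d /var/lib/grafana -s /bin/bash grafana
# chown -R grafana:grafana /var/lib/grafana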

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC container and Systemd

2018-08-10 Thread Fajar A. Nugraha
On Fri, Aug 10, 2018 at 4:38 PM, Goran  wrote:

> Always the same behavior, if the binary is executed as a user
> different from root, systemd does not find the binary.
>
> In this case
>
> # which bash
> /usr/bin/bash
>
> can't be found by systemd. ExecStart=whatsoever does not work. It
> doesn't matter if it's bash or grafana-server as long the user differs
> from root.
>
>

This test unit works fine for me in lxd. Container created using "lxc
launch images:archlinux arch-test"

##
# cat /etc/systemd/system/test-asuser.service
[Unit]
Description=run as user test

[Service]
User=nobody
Group=nobody

Type=oneshot
ExecStart=/bin/bash -c 'echo $(date) id is $(id) >> /tmp/test-asuser.log'
##

How did you create your container? If you use lxc (not lxd), try the
'download' template.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD move container to another pool ?

2018-08-09 Thread Fajar A. Nugraha
On Thu, Aug 9, 2018 at 7:57 PM, Pierre Couderc  wrote:

>
> On 08/09/2018 11:30 AM, Fajar A. Nugraha wrote:
>
>
> Basically you'd just need to copy /var/lib/lxd and whatever storage
> backend you use (I use zfs), and then copy them back later. Since I also
> put /var/lib/lxd on zfs (this is a custom setup), I simply need to
> export-import my pool.
>
>
> /var/lib/lxc alone, nothing about /var/lxc ?
>
>
>
Are you using lxc1 (e.g. lxc-create commands) or lxd?

When lxd is installed as a package (e.g. via apt on ubuntu), you
only need /var/lib/lxd and its storage pool (which will be mounted
under /var/lib/lxd/storage-pools/...).

Here's what I'm using:
- I start AWS spot instance
- I have a custom ubuntu template, with lxd installed but not started. It
thus has an empty /var/lib/lxd,  with no storage pools and network.
- I have a separate EBS disk, used by a zfs pool 'data'. I then have
'data/lib/lxd' which I mount as '/var/lib/lxd', and 'data/lxd' which is
registered as lxd storage pool 'default'.
- I create containers (using that default pool)
- if that spot instance is terminated (thus the "root"/OS disk is lost), I
can simply create a new spot instance again, and attach the 'data' pool
there. I will then have access to all my containers.

Is that similar to what you need?

Note that lxc1 and lxd from snap use different directories than lxd from
the package.
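
For completeness, the re-attach flow on a fresh instance is roughly (dataset
names are from my setup above):

# zpool import data
# zfs mount data/lib/lxd
-> only needed if it is not mounted automatically; it ends up on /var/lib/lxd
# systemctl start lxd
# lxc list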

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC container and Systemd

2018-08-09 Thread Fajar A. Nugraha
On Thu, Aug 9, 2018 at 8:11 PM, Goran  wrote:

> I did as you told. What I can say is that the user/group directive are
> the problem.
>
> With this config it works:
>
>

Now we're getting somewhere :D



> [Unit]
> Description=Grafana service
> After=network.target
>
> [Service]
> # User=grafana
> # Group=grafana
> # WorkingDirectory=/usr/share/grafana
> # ExecStart=/usr/bin/grafana-server -config=/etc/grafana.ini
> ExecStart=/usr/bin/runuser -s /bin/bash -g grafana -l grafana -c
> 'grafana-server -config=/etc/grafana.ini -homepath /usr/share/grafana'
> LimitNOFILE=1
> TimeoutStopSec=20
> SuccessExitStatus=0 2
>
> [Install]
> WantedBy=multi-user.target
>
> What I don't understand is why the user/group directive are not
> accepted and quitted with error
>
> Aug 09 13:06:10 monitor systemd[25843]: grafana.service: Failed to
> determine user credentials: No such process
> Aug 09 13:06:10 monitor systemd[25843]: grafana.service: Failed at
> step USER spawning /usr/bin/runuser: No such process
>
>

This looks promising: "/usr/bin/runuser: No such process"

Try something like this:

[Service]
User=grafana
Group=grafana
ExecStart=/bin/bash -c 'grafana-server -config=/etc/grafana.ini -homepath
/usr/share/grafana'

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD move container to another pool ?

2018-08-09 Thread Fajar A. Nugraha
On Thu, Aug 9, 2018 at 4:11 PM, Pierre Couderc  wrote:

> I want to "format"  my LXD  computer   :
>
> So I would like to  :
>
> - create a LXD storage on an auxiliary (btrfs) disk, something like : lxd
> storage create mytmp btrfs source=/mnt/my_btrfs_unit/lxd_subvolume (is
> this possible ?)
>
> - move my contianer to this new storage . How ?
>
> - fulley reinstall my server and LXD
>
> - "attach" my /mnt/my_btrfs_unit/lxd_subvolume to new LXD (how ?)
>
> - move back my container
>
> - detach and remove my tmp subvolume
>
>
> If it was on another computer it would be a simple "move"...
>
>
So basically you just want to back up and restore the complete lxd setup?
What are you currently using?

Basically you'd just need to copy /var/lib/lxd and whatever storage backend
you use (I use zfs), and then copy them back later. Since I also put
/var/lib/lxd on zfs (this is a custom setup), I simply need to
export-import my pool.

If you currently use the default zfs-on-loopback backend, you simply need
to copy /var/lib/lxd and the loopback file (I don't remember the name
offhand). If you use btrfs, then the easiest way is to detach the disk
before formatting the pool (plus copy /var/lib/lxd, obviously).

"Moving" a container to a new storage is, AFAIK, a more complicated
process. The only way I know of is basically to create a container on that new
pool and overwrite the content of its rootfs. I don't recommend this for
your particular needs.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC container and Systemd

2018-08-09 Thread Fajar A. Nugraha
On Tue, Aug 7, 2018 at 11:13 PM, Goran  wrote:

> I'm starting Grafana on top of Arch Linux without problem. But when I
> install Grafana into an Arch Linux LXC container on top of Arch Linux
> OS I can't start it.
>
> The error is:
>
> systemd[24509]: grafana.service: Failed to determine user credentials:
> No such process
> systemd[24509]: grafana.service: Failed at step USER spawning
> /usr/bin/grafana-server: No such process
>
> It looks like a systemd error but it's working on top of a real OS.
>
> Here is my grafana.service file: https://pastebin.com/T8XU98XT
>
> I can start Grafana without any problems with
>
> runuser -s /bin/bash -g grafana -l grafana -c 'grafana-server
> -config=/etc/grafana.ini -homepath /usr/share/grafana'
>
>
To confirm: you can start it by logging into the container and running the
above command?



> It looks like LXC is hindering systemd to start the process.
>
>

I think the easiest way to troubleshoot is just use that command in a
systemd unit. Something like

[Unit]
Description=Simple run test

[Service]
ExecStart=/bin/bash -c "runuser -s /bin/bash -g grafana -l grafana -c
'grafana-server -config=/etc/grafana.ini -homepath /usr/share/grafana'"

[Install]
WantedBy=multi-user.target


Put it in a service file somewhere under /etc/systemd/system, start it,
and see what happens. If that works, you can start changing the service
to look more like the original while finding out which lines from the
original service file are problematic.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Convert virtual machine to LXC container

2018-08-08 Thread Fajar A. Nugraha
Sorry, my mistake. I meant "lxc-console". And I just rechecked, apparently
there's the equivalent "lxc console" command as well.

Ignore my comment about "lxc-attach" earlier. You should be able to use an lxd
rootfs for lxc as long as:
- you have the correct uid mapping (it's simpler if you just use a privileged
container; otherwise just set up the uid map in the lxc config file manually)
- you have a suitable lxc config file (the easiest way is probably to
create a new lxc container, then replace the original rootfs with the one
from rsync/lxd-p2c; see the rough sketch below)
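
Rough sketch of that flow (the container name, distro/release and source path
are made up):

# lxc-create -n c1 -t download -- -d ubuntu -r bionic -a amd64
# rsync -a --numeric-ids /path/to/lxd/rootfs/ /var/lib/lxc/c1/rootfs/
# lxc-start -n c1
# lxc-console -n c1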

-- 
Fajar


On Thu, Aug 9, 2018 at 10:59 AM, Saint Michael  wrote:

> LXD does not support lxc-attach?
> I thought that LXD was a superset of LXC, that added on top of it.
> Maybe somebody care to explain how LXC and LXD compare.
>
>
> On Wed, Aug 8, 2018 at 11:21 PM Fajar A. Nugraha  wrote:
>
>> I've converted (manually) some lxc containers to lxd and back in the
>> past. IIRC the biggest difference was that lxd does not need to output
>> anything to console, while lxc needs it (e.g. for lxc-attach). Depending on
>> what container distro and version you use, it might not matter (e.g. it
>> should "just work" for newer ubuntu), and you can use the same container
>> rootfs for both.
>>
>> If you use anything else and simple 'rsync --numeric-ids)' doesn't work,
>> take a look at the customizations done by lxc-template to find out what
>> else need to be adjusted, e.g. https://github.com/lxc/
>> lxc-templates/blob/master/templates/lxc-debian.in#L67 for old debian
>> system that still use sysvinit.
>>
>> --
>> Fajar
>>
>> On Thu, Aug 9, 2018 at 9:18 AM, Saint Michael  wrote:
>>
>>> The question is how can I use that for plan LXC.
>>> I can install a box with LXD, bring the computer in, but then I want a
>>> plain LXC container.
>>> Is it doable?
>>>
>>> On Wed, Aug 8, 2018 at 7:17 PM David Favor  wrote:
>>>
>>>>   wrote:
>>>> > Has anybody invented a procedure, a script, etc., to convert a
>>>> running
>>>> > machine to a LXC container? I was thinking to create a container of
>>>> the
>>>> > same OS, and then use rsync, excluding /proc /tmp/ /sys etc.  Any
>>>> ideas?
>>>>
>>>> Use the fabulous lxd-p2c script.
>>>> ___
>>>> lxc-users mailing list
>>>> lxc-users@lists.linuxcontainers.org
>>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>>
>>>
>>> ___
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>>
>>
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Convert virtual machine to LXC container

2018-08-08 Thread Fajar A. Nugraha
I've converted (manually) some lxc containers to lxd and back in the past.
IIRC the biggest difference was that lxd does not need to output anything
to console, while lxc needs it (e.g. for lxc-attach). Depending on what
container distro and version you use, it might not matter (e.g. it should
"just work" for newer ubuntu), and you can use the same container rootfs
for both.

If you use anything else and a simple 'rsync --numeric-ids' doesn't work,
take a look at the customizations done by lxc-template to find out what
else need to be adjusted, e.g.
https://github.com/lxc/lxc-templates/blob/master/templates/lxc-debian.in#L67
for old debian system that still use sysvinit.

-- 
Fajar

On Thu, Aug 9, 2018 at 9:18 AM, Saint Michael  wrote:

> The question is how can I use that for plan LXC.
> I can install a box with LXD, bring the computer in, but then I want a
> plain LXC container.
> Is it doable?
>
> On Wed, Aug 8, 2018 at 7:17 PM David Favor  wrote:
>
>>   wrote:
>> > Has anybody invented a procedure, a script, etc., to convert a running
>> > machine to a LXC container? I was thinking to create a container of the
>> > same OS, and then use rsync, excluding /proc /tmp/ /sys etc.  Any ideas?
>>
>> Use the fabulous lxd-p2c script.
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] unprivileged containers and databases

2018-08-07 Thread Fajar A. Nugraha
Which distro, and how did you install it?

Using ubuntu 18.04.1 host, bundled lxd 3.0.1-0ubuntu1~18.04.1, I was able
to start mariadb (10.1 bundled in ubuntu, as well as 10.2 and 10.3 from
https://downloads.mariadb.org/mariadb/repositories/) in unpriv lxd
container just fine.

IIRC with openvpn its systemd service was causing a problem (PrivateTmp or
something? can't remember the details now), so you might want to poke
around /lib/systemd/system/mariadb.service in case something similar is
causing a problem here. 'systemctl status mariadb' and /var/log/syslog
might also help provide some hints.
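
e.g.:

# systemctl status mariadb
# journalctl -u mariadb --no-pager | tail -n 50
# grep -E 'PrivateTmp|Protect' /lib/systemd/system/mariadb.service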

Another thing you might be able to try is installing it in a privileged
container, then converting it to unprivileged afterwards. This might help as
a workaround if whatever version of lxc/lxd you're using is having problems
with mariadb's post-install script.

-- 
Fajar


On Wed, Aug 8, 2018 at 4:49 AM, Saint Michael  wrote:

> same here, and it failed to install on a unprivileged container. Mariadb
> 10.2
>
>
> On Tue, Aug 7, 2018 at 2:36 PM Szalewicz Patrick 
> wrote:
>
>> I installed MariaDB a few times in a lxc Container
>>
>> What OS are you using?
>>
>> I used Ubuntu and Debian in the containers
>>
>>
>>
>>
>> On Tue, Aug 7, 2018 at 8:18 PM +0200, "Serge E. Hallyn" > > wrote:
>>
>> Quoting Saint Michael (vene...@gmail.com):
>>> > The default unprivileged 3.0 container does not allow fort the 
>>> > installation
>>> > on MariaDB, so we are forced to make into a privileged one. In my opinion
>>> > this involves a huge risk, since we need to install MySQL or Mariadb in
>>> > almost every conceivable container. LXC should allow to install at least
>>> > the most popular database in the planet. Please don't believe me, just try
>>> > it. The installation never finishes because there is an error at the end,
>>> > and the error is related a some cap missing.
>>>
>>> Can you show the specific error?
>>>
>>>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the state of the art for lxd and wifi ?

2018-07-23 Thread Fajar A. Nugraha
On Mon, Jul 23, 2018 at 5:33 PM, Pierre Couderc  wrote:

>
> On 07/23/2018 12:12 PM, Fajar A. Nugraha wrote:
>
> Relevant to all VM-like in general (including lxd, kvm and virtualbox):
> - with the default bridged setup (on lxd this is lxdbr0), VMs/containers
> can access internet
> (...)
> - bridges (including macvlan) does not work on wifi
>
>
> Sorry, it is not clear for me how default bridges "can access internet",
> if simultaneously "bridges (including macvlan) does not work on wifi" ?
>
>

My bad for not being clear :)

I meant that the default setup uses bridge + NAT (i.e. lxdbr0). The NAT is
automatically set up by LXD. That works. If your PC can access the internet,
then anything in your container (e.g. wget, firefox, etc.) can access the
internet as well.


Bridge setups WITHOUT NAT (those that bridge the container's interface directly
to your host interface, e.g. eth0 or wlan0), on the other hand, will only
work for wired interfaces, and will not work for wireless.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the state of the art for lxd and wifi ?

2018-07-23 Thread Fajar A. Nugraha
On Mon, Jul 23, 2018 at 5:08 PM, Pierre Couderc  wrote:

> Where can I find a howto for lxd on a an ultramobile with wifi only ?
>
> I find some posts aged 2014 and more modern posts saying it is not
> possible with wifi.
>
> I want to install many containers accessing internet, or being acessed
> from internet.
>
>
>
>

Relevant to all VM-like in general (including lxd, kvm and virtualbox):
- with the default bridged setup (on lxd this is lxdbr0), VMs/containers
can access internet
- to make it accessible FROM the internet, you need (the easy way) to set up
port forwarding/NAT (e.g. using iptables on the host; see the example below)
- bridges (including macvlan) do not work on wifi
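
Illustrative DNAT example only (the container IP 10.158.89.10 and the ports
are placeholders; adjust to your lxdbr0 subnet):

# iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.158.89.10:80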

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] nat networking with fixed (dhcp ) IP addresses

2018-05-25 Thread Fajar A. Nugraha
On Fri, May 25, 2018 at 3:25 PM, Michel Jansens
 wrote:
> Thanks Fajar it works!
>
> What I did:
>
> #lets create a new profile
> lxc profile  copy default nonet
>
> #remove network from the profile
> lxc profile  device remove nonet eth0
>
> #create the ’testip' container
> lxc init ubuntu:18.04 testip --profile nonet
>
> #attach the network device to it, with IP address
> lxc config  device add testip eth0 nic nictype=bridged parent=lxdbr0
> host_name=testip ipv4.address=10.0.3.203


One note from me: you could actually use the default profile and
override eth0 directly in the container's config.
At least it works with "lxc config edit"; I didn't try with "lxc config device".

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] nat networking with fixed (dhcp ) IP addresses

2018-05-24 Thread Fajar A. Nugraha
On Thu, May 24, 2018 at 11:56 PM, Michel Jansens
 wrote:
>
> Hi all,
>
> I’m running lxd 3.0.0 on Ubuntu 18.04 and I would like to use NAT/bridge
> networking for most containers, but with the possibility to fix the IP
> address myself (to ease some cluster config in Ansible).
>
> I’ve tried many things and read a lot around, but didn’t find anything that
> works.
>
> I tried "lxc config device set mycontainer eth0 ipv4.address 10.25.240.139”
> it returns: Error: The device doesn't exist  (source:
> https://blog.ubuntu.com/2017/02/14/network-management-with-lxd-2-3)
>
> The lxdbr0 bridged network is inherited by the container from the ‘default
> profile’ and lxdbr0 config is:


One way that works is to specify eth0 in the container config file,
overriding the profile.
While you're at it you might want to set host_name as well, so that the
veth device name on the host side stays the same.

Example:
devices:
  eth0:
host_name: c1-0
ipv4.address: 10.0.3.203
name: eth0
nictype: bridged
parent: lxdbr0
type: nic

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd 3.0.0: What is a "managed" network? Only managed networks can be modified.

2018-05-06 Thread Fajar A. Nugraha
On Mon, May 7, 2018 at 2:10 AM, Gaétan QUENTIN  wrote:
>
> lxc network list
> +-+--+-+-+-+
> |  NAME   |   TYPE   | MANAGED | DESCRIPTION | USED BY |
> +-+--+-+-+-+
> +-+--+-+-+-+
> | lxdbr0  | bridge   | NO  | | 71  |
>
>
> what does mean managed network?
>

Something lxd creates, i.e. during 'lxd init' or with 'lxc network create'.
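
For example:

# lxc network create lxdbr1 ipv4.address=10.100.100.1/24 ipv4.nat=true ipv6.address=none
# lxc network show lxdbr1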

>
> and why can i modify bridge.driver?
>
>
> lxc network set lxdbr0 bridge.driver openvswitch
> Error: Only managed networks can be modified.

How did you install lxd? My guess is that an external program created
it (e.g. you created it in /etc/network/interfaces, or it's a leftover service
from an old lxd version).

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] bionic image not getting IPv4 address

2018-05-03 Thread Fajar A. Nugraha
On Thu, May 3, 2018 at 8:09 PM, David Favor  wrote:
> This is tricky... Netplan forced abuse is similar to systemd... No one
> likes systemd + it works abysmally + it was crammed down everyone's
> throat.
>
> It appears Netplan will be the same.
>
> Eventually some update will likely wipe out old networking + force upgrade
> to Netplan.

From what I can tell so far, netplan is similar to network-manager, in
the sense that both can manage the network, and both can be uninstalled
just fine (obviously with some functionality loss, but perfectly fine
for a minimal server install running zfs + lxd). It was that way in
16.04 (the network-manager part, that is), and it's that way currently
in 18.04.

I find it hard to see ubuntu breaking that functionality on LTS
release. On the next releases, perhaps.

Of course, if you have a reference that says otherwise, do share the link.


>>> LXD via SNAP (which is only LXD install option on Bionic).
>>
>>
>> Not true. It's not the ONLY option.
>>
>> # apt policy lxd
>> lxd:
>>   Installed: 3.0.0-0ubuntu4
>>   Candidate: 3.0.0-0ubuntu4
>>   Version table:
>>  *** 3.0.0-0ubuntu4 500
>> 500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
>> 100 /var/lib/dpkg/status
>>
>
> Currently APT packages are being maintained for backwards compatibility.
>
> And be aware. The APT packages no longer receive updates, so for example
> the patches produced this week fixing many LXD bugs will only be available
> to you, if you switch to SNAP.
>
> LXD 3.0 initial (no patches) is the last APT supported LXD release.
>
> This is covered somewhere on the LXD site.

Is there a link?

I know of the PPA deprecation (not ubuntu official repository, but the
ppa), i.e. 
https://www.mail-archive.com/lxc-users@lists.linuxcontainers.org/msg07938.html

https://linuxcontainers.org/lxd/getting-started-cli/ says apt with official repo
https://help.ubuntu.com/lts/serverguide/lxd.html also says apt
(although to be fair, the page hierarchy starts with 'ubuntu 18.04',
but the page content still has 16.04)

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] bionic image not getting IPv4 address

2018-05-03 Thread Fajar A. Nugraha
On Thu, May 3, 2018 at 7:57 PM, Tomasz Chmielewski  wrote:
> Indeed, I can confirm it's some netplan-related issue with
> /etc/netplan/10-lxc.yaml.
>
> Working version for bionic containers set up before 2018-May-02:
>
> network:
>   ethernets:
> eth0: {dhcp4: true}
>   version: 2
>
>
>
> Broken version for bionic containers set up after 2018-May-02:
>
> network:
>   ethernets:
> eth0: {dhcp4: true}
> version: 2
>
>
> Please note that the broken one has no indentation (two spaces) before
> "version: 2", this is the only thing that differs and which breaks DHCPv4.

Ah, sorry, I was not thorough enough when comparing my resulting
/etc/netplan/10-lxc.yaml. It looks like this now:

# cat /etc/netplan/10-lxc.yaml
network:
  version: 2
  ethernets:
eth0: {dhcp4: true}

So the new image update apparently fixed the bug.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] bionic image not getting IPv4 address

2018-05-03 Thread Fajar A. Nugraha
On Thu, May 3, 2018 at 1:28 PM, Kees Bos  wrote:
> On Thu, 2018-05-03 at 08:09 +0200, Kees Bos wrote:
>> On Thu, 2018-05-03 at 12:58 +0900, Tomasz Chmielewski wrote:
>> >
>> > Reproducing is easy:
>> >
>> > # lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
>> >
>> >
>> > Then wait a few secs until it starts - "lxc list" will show it has
>> > IPv6
>> > address (if your bridge was configured to provide IPv6), but not
>> > IPv4
>> > (and you can confirm by doing "lxc shell", too):
>> >
>> > # lxc list
>> >
>> >
>>
>> I can confirm this. Seeing the same issue.
>
> BTW. It's the /etc/netplan/10-lxc.yaml
>
> Not working (current) version:
> network:
>   ethernets:
> eth0: {dhcp4: true}
> version: 2
>
>
> Working version (for me):
> network:
>   version: 2
>   ethernets:
> eth0:
>   dhcp4: true


Works for me. Both with images:ubuntu/bionic (which has
/etc/netplan/10-lxc.yaml, identical to your 'not working' one) and
ubuntu:bionic (which has /etc/netplan/50-cloud-init.yaml).

Then again the images:ubuntu/bionic one has '20180503_11:06' in its
description, so it's possible that the bug was fixed recently.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] bionic image not getting IPv4 address

2018-05-03 Thread Fajar A. Nugraha
On Thu, May 3, 2018 at 10:14 AM, David Favor  wrote:
> Be aware there is a bug in Bionic packaging, so if you upgrade
> machine level OS from any previous OS version to Bionic, LXD
> networking becomes broken... so badly... no Ubuntu or LXD developer
> has figured out a fix.

Wait, what?

I've upgraded three physical machines (and a custom AWS AMI) from
16.04 (somewhat minimal install, with lxd) to 18.04. All have lxdbr0
working fine. Of course that also means I don't have netplan installed
(since 16.04 doesn't have it, and the upgrade process doesn't install
it), which is perfect for me. I like old fashioned
/etc/network/interfaces.d/*.cfg.


Not sure about 17.04/17.10 to 18.04 though.

> LXD via SNAP (which is only LXD install option on Bionic).

Not true. It's not the ONLY option.

# apt policy lxd
lxd:
  Installed: 3.0.0-0ubuntu4
  Candidate: 3.0.0-0ubuntu4
  Version table:
 *** 3.0.0-0ubuntu4 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] authentication in containers jacked-up!

2018-04-09 Thread Fajar A. Nugraha
On Thu, Mar 29, 2018 at 8:22 PM, Ray Jender  wrote:
> ray@ container 2:/etc$  sudo visudo
>
> sudo: no tty present and no askpass program specified
>

Try 'sudo -S visudo'

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] dynamic cgroup memory limit

2018-03-31 Thread Fajar A. Nugraha
On Sat, Mar 31, 2018 at 9:43 PM, Tian-Jian Wu  wrote:
> We are developers of project clondroid (https://github.com/clondroid)
> Our android porting lxc tools are at
> https://github.com/clondroid/lxc-for-Android-7.1.2.
> This command 'lxc config set my-container limits.memory 256MB' , the 'lxc'
> command seems be part of LXD project.
> If so, I guess it's written in GOLANG.  Wondering if there is a chance to
> port LXD to android...


So you're using lxc1, not lxd? Try
https://linuxcontainers.org/lxc/manpages/man1/lxc-cgroup.1.html
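
e.g., for a running container (name taken from your command above):

# lxc-cgroup -n my-container memory.limit_in_bytes 256M
# lxc-cgroup -n my-container memory.limit_in_bytes
-> the second call (without a value) prints the current limit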

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Limit network bandwidth to LXC containers

2018-03-14 Thread Fajar A. Nugraha
On Thu, Mar 15, 2018 at 3:06 AM, Angel Lopez  wrote:
> Hi,
>
> I need to limit the network bandwidth available to each LXC container using
> cgroup's net_cls.classid feature. Each LXC container would have its own
> classid value in such a way that all packets from containers would be tagged
> with the classid and afterwards classified in the correct host configured
> traffic class where the bandwidht limit applies.
>
> To achieve this, I followed these steps:
>
> 1. Configure traffic control:
>
> # tc qdisc del dev eno54 root

Asking the obvious: have you used tc (directly, not via a wrapper) in
another setup (e.g. VMs, a physical server) where it successfully worked
as expected?



> Expected behaviour: iperf running on container lxctest1 being limited to 10
> Mbps and iperf running on lxctest2 container being limited to 50 Mbps.
> What I get: both iperf running unconstrained at maximum speed.


What I've tested and found to work is fireqos
(https://github.com/firehol/firehol/wiki/FireQOS-Tutorial). One of the
things that might make it behave differently from using tc directly is
the presence of ifb interfaces.

Be careful with 'upload' and 'download'; they might be reversed in your setup.

In my case I use IPs to limit BW. In your case it might be easier to
use persistent veth names on host side instead (or, as the wiki
mentioned, iptables' classify and mark targets).
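
An IP-based fireqos.conf would look roughly like this (a sketch from memory,
not copy-pasted from a working config -- interface name, rates, and IPs are
made up; check the tutorial above for the exact match syntax):

interface eth0 lan output rate 1Gbit
    class limit10 commit 10Mbit
        match src 10.0.3.201
    class limit50 commit 50Mbit
        match src 10.0.3.202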

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] container root unable to setcap in container

2018-03-09 Thread Fajar A. Nugraha
On Fri, Mar 9, 2018 at 5:09 PM, Michael Johnson
 wrote:
> Hi All!
>
> I have noticed that a container's root user is unable to modify the
> capabilities of a root-owned file in the container.
>
> For example:
> setcap cap_net_raw=ep /bin/ping
> returns:
> Failed to set capabilities on file `ping' (Operation not permitted)

Probably https://github.com/lxc/lxd/issues/2507#issuecomment-254058349

> It is possible to set this capability as root from the host, operating
> on the container's file.
>
> Can someone please explain this behavior? What am I doing wrong? When is
> root in the container not root in the container?
>

If you use lxd, the default is unprivileged. "fake" root.
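
If you really need file capabilities inside the container and accept the
security trade-off, you can switch it to privileged (a sketch -- the container
name is made up; setcap should then work since root in the container is real
root):

lxc config set c1 security.privileged true
lxc restart c1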

> This is on gentoo. Have I overlooked an obscure kernel config?

AFAIK some distros can detect whether setcap is possible, and if
not, fall back to using suid.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Upgrading host and containers : in which order ?

2018-03-08 Thread Fajar A. Nugraha
On Thu, Mar 8, 2018 at 5:56 PM, phep  wrote:
> Hi,
>
> Pretty much every thing's in the subject line : we've got a host running
> Debian Jessie and LXC 1.0 with a handful of containers in the same Debian
> version that we all need to upgrade to Debian Stretch with LXC 2.0. By the
> way, hosts and containers are using systemd as init system, if this matters.
>
> I'm wondering which migration route I should take : migrate host first or
> containers first ?


Host and container are generally decoupled (with the exception of some
cases, like when the host is very old and does not use systemd and cgroups,
which is not applicable in this case). So you can do either one.

I would upgrade the host first though.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC copy snapshots only to remote?

2018-03-07 Thread Fajar A. Nugraha
On Thu, Mar 8, 2018 at 1:49 AM, Lai Wei-Hwa  wrote:

> Thanks Fajar,
>
> I'm more interested in if I'm right or wrong and why that's the case.
>
> Incremental snapshot support is in LXD 3.0 but I'm asking in relation to
> LXC, not LXD. And I'm really looking to clear up my (mis)understanding.
>
>

Ah, I must not have the most recent info then.

Regardless of when (or if) the devs decide to implement it in lxc (even
the support in lxd doesn't seem complete yet:
https://github.com/lxc/lxd/issues/3326), you could always perform
storage-level backup yourself. Particularly handy if you have lots of
containers, use zfs, and use recursive incremental send.
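
A rough sketch of that with plain zfs (pool/dataset and host names are made
up; assumes all containers live under one parent dataset):

### on H1, take a new recursive snapshot and send only the delta
zfs snapshot -r tank/lxd@tue
zfs send -R -i @mon tank/lxd@tue | ssh h2 zfs receive -F tank/lxd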

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC copy snapshots only to remote?

2018-03-07 Thread Fajar A. Nugraha
On Thu, Mar 8, 2018 at 12:03 AM, Lai Wei-Hwa  wrote:

> Hi Everyone,
>
> I'm probably not fully grasping how LXC containers/snapshotting works, but
> why isn't the following possible?
>
> *Host* *Container*
> *Monday*
> *Tuesday*
> *Wed*
> *Thurs*
> H1
> C1 (fresh Ubuntu)
> SA (added apache)
> SB (removed apache and added nginx)
> SC (hardened nginx config)
> Host dies
> H2
> C2 (created from C1 Snapshot B)
>
>
> SC (with hardened nginx config)
>
>
> On Tuesday, I snapshot C1 (creating SB) and stop the container. I then
> jump on a new host (H2) and copy snapshot B:
>
> *H2$  * lxc copy H1:C1/SB C2
>
> At this point, my C2 is equivalent to C1 + SA + SB. Thus, I believe that
> C1 + SA + SB = C2
>
> On Wednesday, I take Snapshot C on H1.
>
> I believe that on Wednesday, after taking SC, I should be able to copy SC
> *alone* to H2. And then on Thursday, when H1 dies, I should be able to go
> to H2 and launch SC (C2 + SC) and have the same container I had on H! when
> I first took Snapshot C.
>
> If I'm wrong, why am I wrong? If I'm right, how do I copy SC by itself
> (and not the whole container) to H2 on Wednesday?
>
>
>

I'm pretty sure lxc doesn't do incremental snapshots.

To get what you want, you need to manage the storage snapshots (and
incremental send) yourself. For example, if using zfs, you can use sanoid +
syncoid.
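
For example (a sketch -- dataset and host names are made up): sanoid takes
the periodic snapshots according to /etc/sanoid/sanoid.conf, and syncoid
replicates them incrementally to the other host:

syncoid tank/lxc/c1 root@h2:tank/lxc/c1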

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC container isolation with iptables?

2018-03-04 Thread Fajar A. Nugraha
On Sun, Mar 4, 2018 at 5:27 PM, Marat Khalili  wrote:
> On 04/03/18 02:26, Steven Spencer wrote:
>
> Honestly, unless I'm spinning up a container on my local desktop, I always
> use the routed method. Because our organization always thinks of a container
> as a separate machine, it makes the build pretty similar whether the machine
> is on the LAN or WAN side of the network. It does, of course, require that
> each container run its own firewall, but that's what we would do with any
> machine on our network.
>
> Can you please elaborate on your setup?It always seemed like administrative
> hassle to me. Outside routers need to known how to find your container. I
> can see three ways, each has it's drawbacks:
>
> 1. Broadcast container MACs outside, but L3-route packets inside the server
> instead of L2-bridging. Seems clean but I don't know how to do it in [bare]
> Linux.


Here's one way to do it, with manual networking setup in lxd (making
this automated and converting this to lxc is left as an exercise for
readers. I don't use lxc anymore).


Environment:
- host eth0 is 10.0.3.117/24 with router on 10.0.3.1 (this is actually
an lxd container with nesting enabled, which should behave like a
baremetal lxd host for this purpose)
- guest container name is 'c1' (which is a nested container in this case)
- host will use proxyarp to broadcast c1's MAC
- c1 will use routed setup using veth and p2p ip
- c1 will see a network interface called 'c-c1' instead of 'eth0'
- c1 will use 10.0.3.201
- host side of veth pair will be called 'h-c1', and use 10.0.0.1 (can
be any unused IP in your network, can be used multiple times on
different veths)


Setup in host:
### start with "c1" stopped
### enable proxyarp and ip forwarding
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
echo 1 > /proc/sys/net/ipv4/ip_forward

### create veth pair
ip link add dev h-c1 type veth peer name c-c1

### setup veth pair on host side
ip ad add 10.0.0.1 dev h-c1 peer 10.0.3.201 scope link
ip link set dev h-c1 up

### configure c1 to use the created veth pair: "lxc config edit c1",
### then add these lines in the "devices" section.
### use "eth0" as the device name so that it replaces the "eth0" inherited
### from the profile
devices:
  eth0:
    name: c-c1
    nictype: physical
    parent: c-c1
    type: nic

### start the container
lxc start c1



Setup in c1:
### setup veth pair
ip ad add 10.0.3.201 peer 10.0.0.1 dev c-c1
ip link set dev c-c1 up
ip r add default via 10.0.0.1

### test connectivity with router
ping -n -c 1 10.0.3.1

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC container isolation with iptables?

2018-02-27 Thread Fajar A. Nugraha
On Wed, Feb 28, 2018 at 12:21 AM, bkw - lxc-user
 wrote:
> I have an LXC host.  On that host, there are several unprivileged
> containers.  All containers and the host are on the same subnet, shared via
> bridge interface br0.
>
> If container A (IP address 192.168.1.4) is listening on port 80, can I put
> an iptables rule in place on the LXC host machine, that would prevent
> container B (IP address 192.168.1.5) from having access to container A on
> port 80?
>
> I've tried this set of rules on the LXC host, but they don't work:
>
> iptables -P INPUT DROP
> iptables -P FORWARD DROP
> iptables -P OUTPUT ACCEPT
> iptables -A FORWARD -j DROP -s 192.168.1.5 -d 192.168.1.4
>
> Container B still has access to container A's port 80.


That's how generic bridges work.

Some possible ways to achieve what you want:
- don't use bridge. Use routed method. IIRC this is possible in lxc,
but not easy in lxd.
- create separate bridges for each container, e.g with /30 subnet
- use 'external' bridge managed by openvswitch, with additional
configuration (on openvswitch side) to enforce the rule. IIRC there
were examples on this list to do that (try searching the archives)

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] User Mode Linux within a Linux Container

2018-01-30 Thread Fajar A. Nugraha
On Wed, Jan 31, 2018 at 2:54 AM, Pablo Pessolani  wrote:
> Does anybody has run User Mode Linux (UML) within a Linux Container?
>
> And several UMLs within several Containers? (one UML  by Container)
>
> Is there any limitation so that this can not be done?


If you're doing this for research purposes, I say 'try it and report
the result'. I've had success running openvpn (which use tun/tap
adapter that UML also need) even inside unprivileged container, with
minor changes to the systemd unit to enable autostart. I've also run
virtualbox in privileged containers (although in this case IIRC I had
to disable/modify apparmor/seccomp/dropped capabilities, which would
make it undesirable for 'production' uses). My GUESS is that UML will
behave similar to openvpn (since it doesn't require any special kernel
module other than tun/tap).
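
If you want to try it, exposing /dev/net/tun to an lxd container (what the
openvpn case needs) is usually something like this (a sketch -- device and
container names are made up, and the details may vary by lxd version):

lxc config device add c1 mytun unix-char path=/dev/net/tun
lxc restart c1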

If you're doing this for performance / security / privilege separation
purposes, I suggest don't do that. Possibly look into nested
containers instead.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxcfs removed by accident, how to recover?

2018-01-30 Thread Fajar A. Nugraha
On Tue, Jan 30, 2018 at 7:34 PM, Harald Dunkel  wrote:
> Hi folks,
>
> I have removed the lxcfs package by accident, while the containers
> are still running.

> Is there some way to recover without restaring the containers?

I'm pretty sure the answer is "no". Even the lxcfs package no longer
restarts itself automatically during upgrades.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] storage pool on zfs root not possible ?

2017-12-12 Thread Fajar A. Nugraha
Sorry, hit send too soon. Here's the correctly-edited response.

On Wed, Nov 22, 2017 at 5:36 PM, supp...@translators.at <
supp...@translators.at> wrote:

>
>
> hi folks,
>
> have a zfs-root-raid system with
> dedicated zfsroot/opt partitioned.
> upon zfs-root running ubuntu 16.04 lts.
> already installed snap-lxd.
>
> now want to attach containers to zfsroot/opt.
> no chance because of :
> «error: custom loop file locations are not supported»
>
> then tried
> lxc storage create testpool dir source=zfsroot
> «error: custom loop file locations are not supported»
>
> any idea?
>


Snap has restricted access to the filesystem, limiting which directories it
can access.

On ubuntu 16.04, you can specify custom loop file location with newer lxd
package (e.g. "apt install -t xenial-backports lxd") instead of snap:
lxc storage create testpool dir source=/zfsroot/opt

Note that you need to specify the directory/mount point when using "dir",
not the dataset. And "dir" might not be what you want, look at "zfs" driver
instead.



> regards
> karl
>
> ps.: if you run zfs-root with auto-snapshots and so on,
> why do lxd nesting zfs storage in those tricky ways?
>

You mean "why lxd use zfs on top of image on top of zfs root"?
You don't have to. Loopback zfs is the default setup that would fit
"beginners" usage scenario, but should be change for production (e.g. using
zfs driver, lvm, or btrfs)


> other way round:
> wouldn't it be better during «lxd init» to be asked whether
> you want to have zfs additionally or not? plain  is
>

If you use lxd package, you can just run
lxc storage create testpool zfs source=zfsroot/opt

Note that in this case you specify the pool or dataset name, not the
mountpoint.

not the answer. basically it doesn't matter what you choose
> everythings will be in the /var/snap/lxd - cage. the
>

That is because you use snap.


> consequences are put lxd on extra disk. but what when
> not possible?
>

Don't use snap for that purpose. Use normal lxd package.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Trouble with automounting /dev/shm

2017-11-24 Thread Fajar A. Nugraha
On Sat, Nov 25, 2017 at 5:59 AM, Pavol Cupka  wrote:

> can you have multiline raw.lxc ?
>
>
Yes. See https://github.com/lxc/lxd/issues/2343#issuecomment-245102205 for
example
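
Something like this in 'lxc config edit c1' (a sketch -- the container name
and the lxc.* keys are just examples; the important part is the YAML block
scalar, which puts every line into raw.lxc):

config:
  raw.lxc: |-
    lxc.mount.entry = /opt/data opt/data none bind,create=dir 0 0
    lxc.environment = FOO=bar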

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Using a mounted drive to handle storage pool

2017-11-21 Thread Fajar A. Nugraha
On Wed, Nov 22, 2017 at 1:37 AM, Lai Wei-Hwa  wrote:

> I've currently migrated LXD from canonical PPA to Snap.
>
> I have 2 RAIDS:
>
>- /dev/sda - ext4 (this is root device)
>- /dev/sdb - brtfs (where I want my pool to be with the containers and
>snapshots)
>
> How/where should I mount my btrfs device? What's the best practice in
> having the pool be in a non-root device?
>
>

You can simply use 'lxc storage create' to create a new storage pool (needs
newer lxd version, not 2.0.x):
https://github.com/lxc/lxd/blob/master/doc/storage.md#btrfs



> There are a few approaches I can see
>
>1. mount /dev/sdb to /var/snap/lxd (or /var/lib/lxd - if you're using
>PPA) ... then: lxd init
>
>
Snap complicates that. I'm not sure which directories are available to
snap. It MIGHT work if you specify the block device directly and let lxd
choose the best mount point.
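
Something like this, perhaps (a sketch -- pool name, device, and image alias
are made up; note that lxd will want to format the device itself, so check
the storage docs first if /dev/sdb already holds data you want to keep):

lxc storage create pool2 btrfs source=/dev/sdb
lxc launch ubuntu:16.04 c1 -s pool2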


> Also, I'd love it if LXD could make this a little easier and let users
> more easily define where the storage pool will be located.
>
>

That's what 'lxc storage create' does.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-16 Thread Fajar A. Nugraha
On Thu, Nov 16, 2017 at 10:50 PM, Saint Michael  wrote:

> The issue is with fuse, that is why I keep
> lxc.autodev=0
> if I do not, if I set it to 1, then fuse does not mount inside a
> container. I need fuse, for I mount an FTP server inside the container.
> So I am caught between a rock and a hard place.
> I akready asked about this contradiction on the LXC developers list.
>
>

I use fuse (for clipboard and file copy/paste support on xrdp) on
privileged lxd container. Works fine.
Can't comment more about the old lxc though, since all my newer systems are
using lxd.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-16 Thread Fajar A. Nugraha
On Thu, Nov 16, 2017 at 10:04 PM, Saint Michael  wrote:

> I missfired.
> But I found the culprit, it is
> lxc.autodev = 0
>
> if I use
> lxc.autodev = 1
> the issue does not happens
> Can somebodu shed any light on the ramifications of this?
>

Try https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html
, look for 'CONSOLE DEVICES' and 'lxc.autodev'.


> Some additional information: I use fuse inside my containers.
>
>
One of the reasons I suggested using lxd is that with the default lxd setup,
you'd be less likely to shoot yourself in the foot.

Fuse complicates things a little, since you need a privileged container to
use it. But even when using a privileged container, the default lxd setup
(using templates) is still good enough to prevent common problems created
by host-container interaction.


-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY issue

2017-11-15 Thread Fajar A. Nugraha
On Thu, Nov 16, 2017 at 10:04 AM, Saint Michael  wrote:

> I did apply all suggested solutions that you found googling. None works.
> I do not use LXD, just plain LXC.
> lxc-start --version
> 2.0.9
> lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:Ubuntu 16.04.3 LTS
> Release:16.04
> Codename:   xenial
>
>
>
Short version: use root-owned unprivileged containers.
If you don't know how to do that (or think it's too troublesome to
configure with lxc), then just use LXD (which uses root-owned unprivileged
containers by default).

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Number of core for a container

2017-11-10 Thread Fajar A. Nugraha
Disclaimer: I no longer use lxc. Mostly lxd nowadays.

I can point you to some documentation on the web though, hopefully that
helps

On Fri, Nov 10, 2017 at 7:29 PM, Thouraya TH  wrote:

> Hi all,
>
> I have used this command to fix cpu cores on which we have set running
> containers.
>
> root@g-105:~# lxc-cgroup -n worker cpuset.cpus 0
> root@g-105:~# lxc-cgroup -n worker cpuset.cpus 1
> root@g-105:~# lxc-cgroup -n worker cpuset.cpus 2
> root@g--105:~# lxc-cgroup -n worker cpuset.cpus 3
>
> root@g-105:~# lxc-cgroup -n worker cpuset.cpus 4
> lxc_container: cgmanager.c: do_cgm_set: 1022 Error setting cgroup value
> cpuset.cpus for cpuset:/lxc
> lxc_container: cgmanager.c: do_cgm_set: 1023 call to
> cgmanager_set_value_sync failed: invalid request
> lxc_container: lxc_cgroup.c: main: 103 failed to assign '4' value to
> 'cpuset.cpus' for 'worker'
>
> --> i have only 4 cores
>


... and what is your question here, if any? The cpuset cgroup is working as
intended, since there's no 'cpu 4' on your system.


>
> root@g-105:~# lxc-cgroup -n workerTest cpuset.cpus 0,1,2,3
> root@g-105:~# lxc-cgroup -n workerTest cpuset.cpus 0,1,2
>
> Please, is there a command to display containers and list of cores used by
> each container ?
>

Probably the combination of lxc-ls and lxc-cgroup.

'lxc-cgroup -n workerTest cpuset.cpus' will show you its current values.
See
- http://man7.org/linux/man-pages/man1/lxc-cgroup.1.html
- https://linuxcontainers.org/lxc/manpages/man1/lxc-ls.1.html
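
For example, a quick loop like this (a sketch) prints each running container
with the cpuset it is allowed to use:

for c in $(lxc-ls --running); do
    echo "$c: $(lxc-cgroup -n $c cpuset.cpus)"
done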



> A second question please,  can i fix that only container 1 uses  cpu 2,3 
> *(exclusive
> use)* or not ? i.e container 2 CANNOT use cpu 2,3
>
>

Try https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html,
'control group' section. And
https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt, search
for 'cpuset.cpu_exclusive'.
You could probably try (in container 1's config)

lxc.cgroup.cpuset.cpu_exclusive = 1
lxc.cgroup.cpuset.cpus = 2,3

It might work if no other cgroups currently explicitly use cpu 2-3.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Ubuntu bionic beaver 18.04 (pre-alpha?) and lxc (experiments)

2017-11-08 Thread Fajar A. Nugraha
On Thu, Nov 9, 2017 at 7:49 AM, Adrian Pepper  wrote:

> I installed Ubuntu 18.04 in a virtualbox, and then installed lxc (lxc
> 2.1.1)
>
> ii  lxc1   2.1.1-0ubuntu1  amd64Linux Containers userspace
> tools
>
>
> I then created an 18.04 container on the virtualbox.
> And did a little with it.
>
> Based on
> root@scspc578-u1804-20171108:/usr/share/debootstrap/scripts$ ls -ld bionic
> lrwxrwxrwx 1 root root 5 Oct 25 22:30 bionic -> gutsy
> root@scspc578-u1804-20171108:/usr/share/debootstrap/scripts$
>
> I created that same symlink on 16.04 (lxc 2.1.0)
>
> ii  lxc1   2.1.0-0ubuntu1~ubuntu16.04.1~ppa1   amd64   Linux Containers
> userspace tools
>
>

Probably debootstrap needs to be updated



>
> That appears to have allowed
>
> lxc-create -n u1804test -t ubuntu -- -r bionic
>
> to work on 16.04.  (and the container to start and run, and claim to be
> 18.04  etc).
> (I don't see any waiting apt updates to lxc...)
>
>
> No disasters so far.
>
>

I see bionic on http://images.linuxcontainers.org/ . You should be able to
install lxd, and simply run
lxc launch images:ubuntu/bionic/amd64 test-bionic

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is there a reference manual about LXD ?

2017-10-20 Thread Fajar A. Nugraha
On Sat, Oct 21, 2017 at 7:34 AM, Pierre Couderc  wrote:

> Sorry,  I have not fount it.
>
> I have installed LXD on stretch following Stéphane
> https://stgraber.org/2017/01/18/lxd-on-debian.
>
> Fine ! an infinite  progress after my successful install of lxc on jessie,
> it seems to me 20 years ago, following  :
>
> https://myles.sh/configuring-lxc-unprivileged-containers-in-debian-jessie/
>
>
> But, this LXD was a trial. I have installed, it works immediately !
>
> But using all defaults, it is not what I need and I want to reinstall it :
>
> 1-Is there a reference manual about LXD so that  I ask for help here after
> RTFM and not before as now...
>
>
Note that it's MUCH easier to use lxd on ubuntu 16.04, with
xenial-backports to get the 'best' combination of 'new features' and
'tested'. It has lxd 2.18, with support for storage pools. If you're using
this version, the most relevant documentation would be from git master
branch: https://github.com/lxc/lxd/tree/master/doc

If you're using it for production and want long term support, use the
default xenial repository instead (not backports), which has lxd 2.0.x.
It's supported for a longer time, but doesn't have the newer features (like
storage pools). The relevant docs for this version are either
https://github.com/lxc/lxd/tree/stable-2.0/doc or
https://help.ubuntu.com/lts/serverguide/lxd.html



> 2- How do I erase my first trial : I try to reinit but i says me that :
>
> The requested storage pool "default" already exists. Please choose another
> name.
>
> How do I erase the the  storage pool "default" ?
>

Might be hard if you're using file-backed zfs-pool. On ubuntu it's probably
something like this:
- systemctl disable lxd
- reboot
- rename /var/lib/lxd to something else, then create an empty /var/lib/lxd
- systemctl enable lxd
- systemctl start lxd
- lxd init

I'm not sure how the path and startup script would translate to debian +
lxd from snapd (which is in the link you mentioned)



>
> 3- My true problem is that I do not want the NAT for my new lxc containers
> but that they use the normal addresses on my local network. How do I do
> that ?
>
>
The usual way:
- create your own bridge, e.g. br0 in
https://help.ubuntu.com/community/NetworkConnectionBridge (that example
bridges eth0 and eth1 on the same bridge. use the relevant public interface
for your setup)
- configure your container (or profile) to use it, replacing the default
lxdbr0 (see the sketch after this list).
- no need to delete existing lxdbr0, just leave it as is.
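
For the 'configure your container (or profile)' step, something like this
(a sketch using lxd 2.x-era syntax -- assumes the default profile's nic
device is named 'eth0' and that br0 already exists on the host):

### switch the default profile to br0 (affects all containers using it)
lxc profile device set default eth0 parent br0

### or override just one container instead
lxc config device add c1 eth0 nic nictype=bridged parent=br0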


The 'new' way: looking at
https://github.com/lxc/lxd/blob/master/doc/networks.md , it should be
possible to create the bridge using 'lxc network create ...'



> And how do I assign them a MAC address so they  are accessible from the
> internet.
>
>
This depends on your setup.

For example, if you rent a dedicated server from serverloft (or other
providers with a similar networking setup), they do NOT allow bridging of VMs
to the public network. You need to set up routing instead (long story).

But if you're on a LAN, then 'making the containers be on the same LAN as
the host' is as simple as 'configure the container to use br0' (or whatever
bridge you created above). If the LAN has a DHCP server, then the container
will automatically get a 'public' IP address. If not, then configure it
statically (just like you would configure a normal linux host).

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How properly to find what consumes memory inside the container.

2017-10-11 Thread Fajar A. Nugraha
On Tue, Sep 19, 2017 at 11:20 AM, Ivan Kurnosov  wrote:

>
> But if I clear the system caches on the host
>
> echo 3 > /proc/sys/vm/drop_caches
>
>
> the container memory consumption drops to the expected <100mb.
>
> So the question, how to monitor the memory consumption from the container
> reliably?
>

As in 'know what the kernel thinks the container is using'? You're doing it
already.

If you use lxcfs (which should be the case on ubuntu), /proc entries inside
containers are fake, created using data from the relevant cgroups.



> And why does `free` count caches as used memory inside container?
>
>
>

From https://www.kernel.org/doc/Documentation/cgroup-v2.txt:
"
Memory
--

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.

...
Memory Ownership


A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is in-deterministic; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.
"



An interesting question would be "can you exclude page cache from cgroup
memory accounting". I don't think that's possible though.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Filtering container traffic with iptables on host

2017-09-29 Thread Fajar A. Nugraha
On Fri, Sep 29, 2017 at 7:23 PM, Björn Fischer 
wrote:

> root@drax:/root# lxc shell moonraker
>

Wow

# lxc | egrep 'shell|exec'
  exec Execute commands in containers

'shell' is not even in the lxc command line help yet :)
Thanks for letting me know that command exists.



> [ iptables rule is active but has no effect on ping in container ]
> --snip--
>
> Certainly I am missing something very obvious.
> If anyone could point me in the right direction,
> I would appreciate that.
>
>
My GUESS is that iptables treats container traffic as coming from a separate
host, because it originates in a separate network namespace. So the host has
no idea which PID the ping traffic is from.

The host only knows that the traffic comes from a veth* interface, which is
attached to lxdbr0, and that it needs to FORWARD it to eth0 (or whatever
your host's public interface is). So this should work:

iptables -I FORWARD -s 10.0.160.33 -p ICMP -j DROP

OUTPUT and INPUT won't work, but FORWARD does. Of course, cgroup matching
won't work with FORWARD, so you need to find a criterion (e.g. source IP)
that does.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

  1   2   3   4   5   6   7   8   >