[lxc-users] basic security questions
I was wondering what is the best way to employ some basic security for LXC containers. On the host I'm running Ubuntu 14.04, lxc 1.0.7 with kernel 3.18.5.

1. The root user in a container is able to view dmesg, even with:

       host# cat /proc/sys/kernel/dmesg_restrict
       1

2. Containers are able to write to /proc/sysrq-trigger, so they can technically power off the host:

       guest# echo w > /proc/sysrq-trigger
       guest# dmesg

3. What about /proc/kcore? And perhaps anything else which might need blocking, so that a guest is not able to read data from the host or other guests?

-- 
Tomasz Chmielewski
http://www.sslrack.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
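For what it's worth, the specific files named above can at least be masked from a container's config by bind-mounting /dev/null over them. This is only an illustrative sketch, not a complete fix (AppArmor confinement or unprivileged containers, discussed in the replies, are the more thorough answer):

```
# Illustrative lxc 1.0-style container config lines; they hide only the
# two files named above, nothing else.
lxc.mount.entry = /dev/null proc/sysrq-trigger none bind 0 0
lxc.mount.entry = /dev/null proc/kcore none bind 0 0
```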
Re: [lxc-users] basic security questions
On 2015-02-01 00:01, Tamas Papp wrote:

> On 01/31/2015 03:46 PM, Tomasz Chmielewski wrote:
>> I was wondering what is the best way to employ some basic security for
>> LXC containers. On the host I'm running Ubuntu 14.04, lxc 1.0.7 with
>> kernel 3.18.5.
>>
>> 1. The root user in a container is able to view dmesg, even with:
>>
>>        host# cat /proc/sys/kernel/dmesg_restrict
>>        1
>
> Use non-privileged containers.

How do I do this? I've created my container with:

    lxc-create --template download --name container-name -B btrfs

"man lxc-create" does not contain the string "priv".

>> 2. Containers are able to write to /proc/sysrq-trigger, so they can
>>    technically power off the host:
>>
>>        guest# echo w > /proc/sysrq-trigger
>>        guest# dmesg
>>
>> 3. What about /proc/kcore? And perhaps anything else which might need
>>    blocking, so that a guest is not able to read data from the host or
>>    other guests?
>
> These two should be denied by AppArmor, unless you run containers with an
> unconfined AppArmor profile.

Is this documented anywhere? A Google search for "/proc/kcore site:linuxcontainers.org" does not seem to return any related documentation (though I've seen a similar question asked a few years ago, without any specific answers).

-- 
Tomasz Chmielewski
http://www.sslrack.com
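For reference, unprivileged containers in lxc 1.0 are configured per-user with uid/gid maps rather than through lxc-create flags. A sketch of the usual setup — the 100000:65536 range is illustrative; use the range actually allocated to your user in /etc/subuid and /etc/subgid:

```
# /etc/subuid and /etc/subgid must delegate a range to your user, e.g.:
#   youruser:100000:65536
#
# ~/.config/lxc/default.conf then maps container root onto that range:
lxc.include = /etc/lxc/default.conf
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

# After that, the same download template creates unprivileged containers
# when run as that user:
#   lxc-create --template download --name container-name
```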
[lxc-users] writeback cache for all container processes?
Is it possible to start an LXC container with a "writeback cache", in a way similar to KVM's writeback cache? From "man kvm":

    cache=writeback
        It will report data writes as completed as soon as the data is
        present in the host page cache. This is safe as long as your guest
        OS makes sure to correctly flush disk caches where needed. If your
        guest OS does not handle volatile disk write caches correctly and
        your host crashes or loses power, then the guest may experience
        data corruption.

-- 
Tomasz Chmielewski
http://www.sslrack.com
Re: [lxc-users] writeback cache for all container processes?
On 2015-02-02 21:13, Fajar A. Nugraha wrote:

> You do know that lxc shares the same kernel instance as the host OS,
> making such settings not applicable?

Why not? Perhaps I wasn't very specific when starting the thread. It's certainly possible to do such "not applicable" kinds of things with processes and their page cache, e.g.:

    https://code.google.com/p/pagecache-mangagement/

Or here — disabling O_DIRECT and sync would roughly match KVM's cache=writeback feature-wise:

    http://www.mcgill.org.za/stuff/software/nosync

Is it possible to set things like this for all processes in a given LXC container?

-- 
Tomasz Chmielewski
http://www.sslrack.com
Re: [lxc-users] writeback cache for all container processes?
On 2015-02-02 21:37, Fajar A. Nugraha wrote:

>> It's certainly possible to do "not applicable" kinds of things with
>> processes and their page cache, i.e.:
>> https://code.google.com/p/pagecache-mangagement/ [1]
>> Or here, disabling O_DIRECT and sync would be sort of matching
>> "feature-wise" with KVM's cache=writeback:
>> http://www.mcgill.org.za/stuff/software/nosync [2]
>> Is it possible to set things like this for all processes in a given lxc
>> container?
>
> What are you trying to achieve?

I'm trying to achieve the equivalent of KVM's cache=writeback (or "libeatmydata / nosync") for the whole container.

> If you want to disable sync for the container, the best you can do is
> probably use some filesystem that can do so. For example, zfs has
> "sync=disabled" per-dataset settings. So you can have "sync=standard" for
> filesystems used by the host, and "sync=disabled" for filesystems used by
> containers.

That's odd advice, given that LXC means Linux containers, and ZFS is not in the Linux kernel (I know there are some third-party porting attempts, but that's not really applicable in many situations).

-- 
Tomasz Chmielewski
http://www.sslrack.com
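Tools like libeatmydata and nosync work by interposing the sync-family calls via LD_PRELOAD, which can be applied container-wide by exporting LD_PRELOAD in the container's init environment. A minimal sketch of such a shim — illustrative only, not the actual nosync source (a real version would also interpose open() to strip O_DIRECT/O_SYNC, which is omitted here):

```c
/* Build as a shared library and preload it into the container's
 * processes, e.g.:
 *   gcc -shared -fPIC -o libnosync.so nosync.c
 *   LD_PRELOAD=/usr/lib/libnosync.so <command>
 */
#include <unistd.h>

/* Pretend data is durable as soon as it reaches the page cache --
 * the same trade-off "man kvm" describes for cache=writeback. */
int fsync(int fd)
{
    (void)fd;
    return 0;
}

int fdatasync(int fd)
{
    (void)fd;
    return 0;
}

/* Global sync() likewise becomes a no-op. */
void sync(void)
{
}
```

Because the overriding symbols win at link/preload time, every fsync()/fdatasync()/sync() in the process tree returns immediately; durability is then entirely up to the host's writeback.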
[lxc-users] kernel crash when starting an unprivileged container
I'm trying to start an unprivileged container on Ubuntu 14.04; unfortunately, the kernel crashes.

    # lxc-create -t download -n test-container
    (...)
    Distribution: ubuntu
    Release: trusty
    Architecture: amd64
    (...)

    # lxc-start -n test-container -F

The kernel crashes at this point. It does not crash if I start the container as privileged.

- kernel used is 4.0.4-040004-generic from http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.4-wily/
- lxc userspace: http://ppa.launchpad.net/ubuntu-lxc/stable/ubuntu

    # dpkg -l|grep lxc
    ii  liblxc1        1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (library)
    ii  lxc            1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools
    ii  lxc-templates  1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (templates)
    ii  lxcfs          0.7-0ubuntu4~ubuntu14.04.1~ppa1    amd64  FUSE based filesystem for LXC
    ii  python3-lxc    1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (Python 3.x bindings)

It's a bit hard to get a printout of the oops, as I'm only able to access the server remotely and it doesn't manage to write the oops to the log.
Anyway, after a few crashes and "while true; do dmesg -c ; done" I was able to capture this:

    [  237.706914] device vethPI4H7F entered promiscuous mode
    [  237.707006] IPv6: ADDRCONF(NETDEV_UP): vethPI4H7F: link is not ready
    [  237.797284] eth0: renamed from veth1OSOTS
    [  237.824526] IPv6: ADDRCONF(NETDEV_CHANGE): vethPI4H7F: link becomes ready
    [  237.824556] lxcbr0: port 1(vethPI4H7F) entered forwarding state
    [  237.824562] lxcbr0: port 1(vethPI4H7F) entered forwarding state
    [  237.928179] BUG: unable to handle kernel NULL pointer dereference at (null)
    [  237.928262] IP: [] pin_remove+0x58/0xf0
    [  237.928318] PGD 0
    [  237.928364] Oops: 0002 [#1] SMP
    [  237.928432] Modules linked in: xt_conntrack veth xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xt_tcpudp iptable_filter ip_tables x_tables bridge stp llc intel_rapl iosf_mbi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul eeepc_wmi ghash_clmulni_intel aesni_intel asus_wmi sparse_keymap ie31200_edac aes_x86_64 edac_core lrw gf128mul glue_helper shpchp lpc_ich ablk_helper cryptd mac_hid 8250_fintek serio_raw tpm_infineon video wmi btrfs lp parport raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq e1000e raid1 ahci raid0 ptp libahci pps_core multipath linear
    [  237.930151] CPU: 2 PID: 6568 Comm: lxc-start Not tainted 4.0.4-040004-generic #201505171336
    [  237.930188] Hardware name: System manufacturer System Product Name/P8B WS, BIOS 0904 10/24/2011
    [  237.930225] task: 880806970a00 ti: 8808090c8000 task.ti: 8808090c8000
    [  237.930259] RIP: 0010:[] [] pin_remove+0x58/0xf0
    [  237.930341] RSP: 0018:8808090cbe18 EFLAGS: 00010246
    [  237.930383] RAX:  RBX: 880808808a20 RCX: dead00100100
    [  237.930429] RDX:  RSI: dead00200200 RDI: 81f9a548
    [  237.930474] RBP: 8808090cbe28 R08: 81d11b60 R09: 0100
    [  237.930572] R13: 880806970a00 R14: 81ecd070 R15: 7ffe57fd5540
    [  237.930618] FS: 7fd448c0() GS:88082fa8() knlGS:
    [  237.930685] CS: 0010 DS:  ES:  CR0: 80050033
    [  237.930728] CR2:  CR3: 0008099c1000 CR4: 000407e0
    [  237.930773] Stack:
    [  237.930809] 880806970a00 880808808a20 8808090cbe48 8121d0f2
    [  237.930957] 8808090cbe68 880808808a20 8808090cbea8 8122fa55
    [  237.931123] 880806970a00  810bb2b0 8808090cbe70
    [  237.931286] Call Trace:
    [  237.931336] [] drop_mountpoint+0x22/0x40
    [  237.931380] [] pin_kill+0x75/0x130
    [  237.931425] [] ? prepare_to_wait_event+0x100/0x100
    [  237.931471] [] mnt_pin_kill+0x29/0x40
    [  237.931530] [] cleanup_mnt+0x80/0x90
    [  237.931573] [] __cleanup_mnt+0x12/0x20
    [  237.931617] [] task_work_run+0xb7/0xf0
    [  237.931662] [] do_notify_resume+0xbc/0xd0
    [  237.931709] [] int_signal+0x12/0x17

-- 
Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] kernel crash when starting an unprivileged container
On 2015-06-03 15:01, Tomasz Chmielewski wrote:

> I'm trying to start an unprivileged container on Ubuntu 14.04;
> unfortunately, the kernel crashes.
>
>     # lxc-create -t download -n test-container
>     (...)
>     # lxc-start -n test-container -F
>
> The kernel crashes at this point. It does not crash if I start the
> container as privileged.
>
> - kernel used is 4.0.4-040004-generic from
>   http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.4-wily/

The issue was a bit weird:

- I updated the kernel to 4.1-rc6: no longer crashing
- still, the container was not starting on 4.1-rc6
- it turned out that "lxc-create -t download ..." had, for some reason, created the container with all files 0 bytes long (so a 0-byte /sbin/init, and every other file 0 bytes as well)
- so a broken executable (the 0-byte /sbin/init) was causing the 4.0.4 kernel crash?

Anyway, problem solved.

-- 
Tomasz Chmielewski
http://wpkg.org
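A truncated rootfs like the one described above is easy to detect before booting the container. A small sketch — the default rootfs path is illustrative; pass your container's actual rootfs as an argument:

```python
#!/usr/bin/env python3
"""Report zero-byte regular files under a container rootfs.

A quick sanity check for the failure described above, where
"lxc-create -t download" produced a rootfs whose files (including
/sbin/init) were all 0 bytes.
"""
import os
import sys


def zero_byte_files(rootfs):
    """Yield paths of empty regular files under rootfs (symlinks skipped)."""
    for dirpath, _dirnames, filenames in os.walk(rootfs):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path) and os.path.getsize(path) == 0:
                yield path


if __name__ == "__main__":
    # Illustrative default path; override on the command line.
    root = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/lxc/test-container/rootfs"
    empty = list(zero_byte_files(root))
    print(f"{len(empty)} zero-byte files under {root}")
```

If /sbin/init shows up in the output, the container will not start regardless of kernel version.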
[lxc-users] lxd: "-B backingstore" equivalent?
Is there a "-B btrfs" equivalent in lxd? For example, with lxc I would use:

    # lxc-create --template download --name test-container -B btrfs

    -B backingstore
        'backingstore' is one of 'dir', 'lvm', 'loop', 'btrfs', 'zfs', or
        'best'. The default is 'dir', meaning that the container root
        filesystem will be a directory under /var/lib/lxc/container/rootfs.

How can I do the same with lxd (the lxc command)? It seems to default to "dir":

    # lxc launch images:ubuntu/trusty/amd64 test-container

-- 
Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] lxd: "-B backingstore" equivalent?
On 2015-06-05 22:58, Tycho Andersen wrote:

> Hi Tomasz,
>
> On Fri, Jun 05, 2015 at 07:22:25PM +0900, Tomasz Chmielewski wrote:
>> Is there a "-B btrfs" equivalent in lxd?
>
> Yes, if you mount /var/lib/lxd as a btrfs subvolume, it should Just Work.

As I've checked, this is not the case (the container is created in a directory, not in a btrfs subvolume; lxc-create -B btrfs creates it in a subvolume).

lxd 0.9-0ubuntu2~ubuntu14.04.1~ppa1

-- 
Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] lxd: "-B backingstore" equivalent?
On 2015-06-06 00:00, Tycho Andersen wrote:

>> As I've checked, this is not the case (the container is created in a
>> directory, not in a btrfs subvolume; lxc-create -B btrfs creates it in
>> a subvolume).
>
> Can you file a bug with info to reproduce? It should work as of 0.8.

Before I file a bug report, here is how it behaves for me. /var/lib/lxd/ is a symbolic link to /srv/lxd, placed on a btrfs filesystem:

    # ls -l /var/lib/lxd
    lrwxrwxrwx 1 root root 8 Jun  5 10:15 /var/lib/lxd -> /srv/lxd

    # mount|grep /srv
    /dev/sda4 on /srv type btrfs (rw,noatime,device=/dev/sda4,device=/dev/sdb4,compress=zlib)

    # lxc launch images:ubuntu/trusty/amd64 test-image
    Creating container...done
    Starting container...done
    error: exit status 1

Note that it errored when trying to start the container - I have to add "lxc.aa_allow_incomplete = 1", otherwise it won't start (is there some /etc/lxc/default.conf equivalent for lxd where this could be set?). However, the container was already created in a directory at that point, so I don't think the start error matters:

    # btrfs sub list /srv|grep lxd
    # btrfs sub list /srv|grep test-image
    #

-- 
Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] lxd: "-B backingstore" equivalent?
On 2015-06-06 00:19, Tycho Andersen wrote:

>> # ls -l /var/lib/lxd
>> lrwxrwxrwx 1 root root 8 Jun  5 10:15 /var/lib/lxd -> /srv/lxd
>
> Ah, my best guess is that lxd doesn't follow the symlink correctly when
> detecting filesystems. Whatever the cause, if you file a bug we'll fix
> it, thanks.

Can you point me to the bug filing system for linuxcontainers.org? The closest to "contributing" seems to be here:

    https://linuxcontainers.org/lxd/contribute/

but I don't see any "report a bug", "issue tracker" or anything similar.

-- 
Tomasz Chmielewski
http://wpkg.org
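Tycho's guess — that the symlink defeats filesystem detection — can be illustrated in a few lines. This is only a sketch of the failure mode (lxd's actual detection code is in Go and may differ): a tool that inspects the configured path literally, e.g. by comparing it against mount-table entries, sees /var/lib/lxd itself rather than the btrfs mount it points into, while resolving the path first gives the right answer.

```python
#!/usr/bin/env python3
"""Illustrate why a symlinked /var/lib/lxd can defeat filesystem detection.

The directory names below are stand-ins for /var/lib/lxd and /srv/lxd.
"""
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    real = os.path.join(tmp, "srv", "lxd")   # stand-in for /srv/lxd (on btrfs)
    os.makedirs(real)
    link = os.path.join(tmp, "var-lib-lxd")  # stand-in for /var/lib/lxd
    os.symlink(real, link)

    # Naive check: the literal configured path is not the directory the
    # data actually lives on, so a string comparison against the mount
    # table fails to identify btrfs.
    assert os.path.islink(link)
    assert link != real

    # Correct check: resolve symlinks before asking "what filesystem?"
    assert os.path.realpath(link) == os.path.realpath(real)
    print("resolved:", os.path.realpath(link))
```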
Re: [lxc-users] kernel crash when starting an unprivileged container
It may be worth trying, but it won't work reliably for most kernel crashes (networking, disk IO etc. may crash as well).

-- 
Tomasz Chmielewski
http://wpkg.org

On 2015-06-10 14:11, Christoph Lehmann wrote:

> As a side note, you can use rsyslog's remote logging to get the oops.
>
> Am 3. Juni 2015 08:01:22 MESZ, schrieb Tomasz Chmielewski:
>> I'm trying to start an unprivileged container on Ubuntu 14.04;
>> unfortunately, the kernel crashes.
>>
>>     # lxc-create -t download -n test-container
>>     (...)
>>     # lxc-start -n test-container -F
>>
>> The kernel crashes at this point. It does not crash if I start the
>> container as privileged.
>> (...)
[lxc-users] "mesh networking" for lxc containers (similar to weave)?
Are there any solutions which would let one build "mesh networking" for LXC containers, similar to what weave does for docker? Assumptions:

- multiple servers (hosts) which are not in the same subnet (i.e. in different DCs in different countries)
- containers share the same subnet (i.e. 10.0.0.0/8), no matter on which host they are running
- if a container is migrated to a different host, it is still reachable on the same IP address without any changes in the networking

I suppose the solution would run only once on each of the hosts, rather than in each container. Is there something similar for lxc?

-- 
Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] "mesh networking" for lxc containers (similar to weave)?
I know this is just "normal networking"; however, there are at least two issues with your suggestion:

- it assumes the hosts are in the same subnet (say, connected to the same switch), so it won't work if the hosts have two different public IPs (i.e. 46.1.2.3 and 124.8.9.10)
- with just two hosts, you may overcome the above limitation with some VPN magic; however, it becomes problematic as the number of hosts grows (imagine 10 or more hosts, trying to set this up without a SPOF / central VPN server; ideally, the hosts should talk to each other using the shortest paths possible)

Therefore, I'm asking if there is any better "magic", as you say, for LXC networking? Possibly it could be achieved with tinc, running on the hosts only - http://www.tinc-vpn.org/ - but I haven't really used it. Maybe people have other ideas?

-- 
Tomasz Chmielewski
http://wpkg.org

On 2015-06-20 03:20, Christoph Lehmann wrote:

> There is no magic in LXC's networking. It's just a bridge and some
> iptables rules for NAT, and a dhcp server. You can set up a bridge on
> your public interface, configure the container to use that bridge, and
> do the same on your second host.
>
> Am 19. Juni 2015 18:15:23 MESZ, schrieb Tomasz Chmielewski:
>> Are there any solutions which would let one build "mesh networking" for
>> lxc containers, similar to what weave does for docker?
>> (...)
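For the record, a host-level tinc mesh in switch mode is one way to meet the assumptions above: tinc builds a full mesh between hosts with no central server, and switch mode carries layer-2 frames, so container IPs survive migration. A hedged sketch of the per-host configuration — host names, the "lxcmesh" network name and the lxcbr0 bridge are all illustrative, and I haven't verified this exact setup:

```
# /etc/tinc/lxcmesh/tinc.conf on host "dc1":
Name = dc1
Mode = switch          # layer 2, so containers keep their IPs across hosts
ConnectTo = dc2        # tinc meshes with the other hosts directly,
ConnectTo = dc3        # no central VPN server (no SPOF)

# /etc/tinc/lxcmesh/tinc-up - attach the VPN interface to the LXC bridge
# so containers on lxcbr0 see the mesh as one broadcast domain:
#   ip link set $INTERFACE up
#   brctl addif lxcbr0 $INTERFACE
```

Each host additionally needs a host file (with its public Address and key) in /etc/tinc/lxcmesh/hosts/, exchanged between all participants.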
[lxc-users] creating device nodes in unprivileged containers?
In an unprivileged Ubuntu 14.04 container, I'm trying to run a program which needs to create device nodes. Unfortunately, it fails:

    # pbuilder-dist trusty i386 create
    W: /root/.pbuilderrc does not exist
    I: Logging to /root/pbuilder/trusty-i386_result/last_operation.log
    I: Distribution is trusty.
    I: Current time: Wed Jul  1 07:25:49 UTC 2015
    I: pbuilder-time-stamp: 1435735549
    I: Building the build environment
    I: running debootstrap
    /usr/sbin/debootstrap
    mknod: '/var/cache/pbuilder/build/5377/./test-dev-null': Operation not permitted
    E: Cannot install into target '/var/cache/pbuilder/build/5377/.' mounted with noexec or nodev
    E: debootstrap failed
    W: Aborting with an error
    I: cleaning the build env
    I: removing directory /var/cache/pbuilder/build//5377 and its subdirectories

So I've tried to add the following to the container's config:

    lxc.cap.keep = CAP_MKNOD

However, the container then fails to start:

    lxc-start 1435737618.188 ERROR lxc_conf - conf.c:lxc_setup:3925 - Simultaneously requested dropping and keeping caps

I don't see "mknod" dropped anywhere in the included configs:

    # grep -ri mknod /usr/share/lxc/config/*

How can I allow creating custom device nodes? The host is running these versions:

    # dpkg -l|grep lxc
    ii  liblxc1        1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (library)
    ii  lxc            1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools
    ii  lxc-templates  1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (templates)
    ii  lxcfs          0.9-0ubuntu1~ubuntu14.04.1~ppa1    amd64  FUSE based filesystem for LXC
    ii  python3-lxc    1.1.2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (Python 3.x bindings)

-- 
Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] creating device nodes in unprivileged containers?
Really not possible? How do people run debootstrap or pbuilder? These tools are often part of build systems - am I really the first one to try to run them in LXC?

-- 
Tomasz Chmielewski
http://wpkg.org

On 2015-07-01 17:22, Janjaap Bos wrote:

> You cannot create devices from the container. You need to create them
> beforehand outside the rootfs and bind mount them in the container
> config. This has been explained in detail on this list, so just do a
> quick search for further info. This only concerns lxd deployments as far
> as I know.
>
> Op 1 jul. 2015 10:08 schreef "Tomasz Chmielewski":
>> In an unprivileged Ubuntu 14.04 container, I'm trying to run a program
>> which needs to create device nodes. Unfortunately, it fails:
>>
>>     # pbuilder-dist trusty i386 create
>>     (...)
>>     mknod: '/var/cache/pbuilder/build/5377/./test-dev-null': Operation not permitted
>>     E: debootstrap failed
>> (...)
Re: [lxc-users] creating device nodes in unprivileged containers?
On 2015-07-01 18:08, Fajar A. Nugraha wrote:

> On Wed, Jul 1, 2015 at 3:38 PM, Tomasz Chmielewski wrote:
>> Really not possible? How do people run debootstrap, pbuilder? These
>> tools
>
> Not as root inside an unprivileged container
>
>> are often parts of build systems, am I really the first one to try to
>> run them in LXC?
>
> pbuilder with fakeroot should work

Unfortunately, it doesn't:

    tomasz.staff.com@build01:~$ fakeroot /bin/bash
    root@build01:~# pbuilder-dist trusty i386 create
    (...)
    I: running debootstrap
    /usr/sbin/debootstrap
    mknod: '/var/cache/pbuilder/build/6474/./test-dev-null': Operation not permitted
    E: Cannot install into target '/var/cache/pbuilder/build/6474/.' mounted with noexec or nodev
    E: debootstrap failed
    W: Aborting with an error

From https://pbuilder.alioth.debian.org/#nonrootchroot:

    Even when using the fakerooting method, pbuilder will run with root
    privilege when it is required. For example, when installing packages
    to the chroot, pbuilder will run under root privilege.

-- 
Tomasz Chmielewski
http://wpkg.org
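As Janjaap suggested earlier in the thread, the usual workaround is to create the nodes once on the host (where root may mknod) and bind-mount them into the unprivileged container. An illustrative sketch — the host-side path and target name are made up for the example:

```
# On the host (privileged), create the node once:
#   mkdir -p /var/lib/lxc/devices
#   mknod /var/lib/lxc/devices/test-dev-null c 1 3
#
# Then bind-mount it in via the container's config; create=file makes
# lxc create the mount target inside the rootfs if it is missing:
lxc.mount.entry = /var/lib/lxc/devices/test-dev-null dev/test-null none bind,create=file 0 0
```

This sidesteps CAP_MKNOD entirely, but it only helps for a fixed set of nodes known in advance; tools like debootstrap, which mknod at arbitrary paths, still fail inside the user namespace.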
[lxc-users] negative memory usage?
I'm seeing this in an F20 privileged container upgraded to F21. How come "free" reports negative memory usage? The host has 8 GB RAM.

    [root@f21 ~]# free
                  total     used      free  shared  buff/cache  available
    Mem:        8176120  -128224   8148216    1132      156128    8148216
    Swap:       8386556        0   8386556

    [root@f21 ~]# free -m
                  total               used  free  shared  buff/cache  available
    Mem:           7984  18014398509481858  7957       1         153       7957
    Swap:          8189                  0  8189

    [root@f21 ~]# cat /proc/meminfo
    MemTotal:        8176120 kB
    MemFree:         8148348 kB
    MemAvailable:    8148348 kB
    Buffers:               0 kB
    Cached:             8180 kB
    SwapCached:            0 kB
    Active:          1396684 kB
    Inactive:        1129624 kB
    Active(anon):      82980 kB
    Inactive(anon):      996 kB
    Active(file):    1313704 kB
    Inactive(file):  1128628 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:       8386556 kB
    SwapFree:        8386556 kB
    Dirty:               980 kB
    Writeback:             0 kB
    AnonPages:         82520 kB
    Mapped:            69668 kB
    Shmem:              1132 kB
    Slab:             155164 kB
    SReclaimable:     124692 kB
    SUnreclaim:        30472 kB
    KernelStack:        3440 kB
    PageTables:         6416 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    12474616 kB
    Committed_AS:     217232 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:      157000 kB
    VmallocChunk:   34359538464 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:      4096 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:       65524 kB
    DirectMap2M:     8323072 kB

-- 
Tomasz Chmielewski
http://wpkg.org
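The arithmetic behind the display is simple: free derives "used" by subtraction, roughly used = total - free - buff/cache. When the reported free plus caches exceeds total — which can happen in a container whose /proc/meminfo fields come from different sources and are mutually inconsistent — "used" goes negative, and printing it as an unsigned 64-bit quantity yields the huge number seen in "free -m". A sketch with the numbers from the report (kB):

```python
#!/usr/bin/env python3
"""Reproduce the 'negative memory' arithmetic from the free output above."""

# Column values as printed by `free` in the report (kB).
total = 8176120
free = 8148216
buff_cache = 156128

# free derives 'used' by subtraction; inconsistent inputs make it negative.
used_kb = total - free - buff_cache
print("used (kB):", used_kb)  # -128224

# The same value reinterpreted as an unsigned 64-bit kB count, then
# converted to MB -- matching the huge number shown by `free -m`.
used_mb_unsigned = (used_kb & (2**64 - 1)) // 1024
print("used (MB, unsigned wrap):", used_mb_unsigned)  # 18014398509481858
```

So the container isn't really using negative memory; the kernel-reported MemFree and cache figures simply don't add up against MemTotal.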
[lxc-users] lxc remote add - password query?
Trying to add a remote server:

    # lxc remote add server02 https://server02:8443
    Admin password for server02:

What is the remote password, and where do I set it? "man lxc" is not too helpful here.

-- 
Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] lxc remote add - password query?
On 2015-08-05 17:57, Tomasz Chmielewski wrote:

> Trying to add a remote server:
>
>     # lxc remote add server02 https://server02:8443
>     Admin password for server02:
>
> What is the remote password, and where do I set it? "man lxc" is not too
> helpful here.

Sorry for the noise - found it:

    # lxc config set core.trust_password SECRET

-- 
Tomasz Chmielewski
http://wpkg.org
[lxc-users] lxc move - "error: checkpoint failed"
    s.c:lxc_cmd_handler:888 - peer has disconnected
    lxc_container 1438767798.811 DEBUG lxc_commands - commands.c:lxc_cmd_get_state:574 - 'nominatim' is in 'RUNNING' state
    lxc_container 1438767798.811 DEBUG lxc_commands - commands.c:lxc_cmd_handler:888 - peer has disconnected
    lxc_container 1438767798.811 ERROR lxc_container - lxccontainer.c:criu_ok:3799 - couldn't find devices.deny = c 5:1 rwm

Log entries created on dp03:

    lxc_container 1438767791.422 WARN lxc_log - log.c:lxc_log_init:316 - lxc_log_init called with log already initialized
    lxc_container 1438767791.422 WARN lxc_confile - confile.c:config_pivotdir:1768 - lxc.pivotdir is ignored. It will soon become an error.
    lxc_container 1438767791.424 INFO lxc_confile - confile.c:config_idmap:1376 - read uid map: type u nsid 0 hostid 10 range 65536
    lxc_container 1438767791.424 INFO lxc_confile - confile.c:config_idmap:1376 - read uid map: type g nsid 0 hostid 10 range 65536

-- 
Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] lxc move - "error: checkpoint failed"
On 2015-08-06 22:16, Tycho Andersen wrote:

> The problem here is that CRIU can't support everything that LXD does
> just yet, so you have to set up some container specific configuration.
> I've just landed a branch that has a profile that does this for you, but
> unfortunately it isn't in any released version of LXD yet:
>
>     https://github.com/lxc/lxd#how-can-i-live-migrate-a-container-using-lxd
>
> You can create your own copy of the migratable profile; it should look
> like this:
>
>     name: migratable
>     config:
>       raw.lxc: |
>         lxc.console = none
>         lxc.cgroup.devices.deny = c 5:1 rwm
>         lxc.seccomp =
>       security.privileged: "true"
>
> And then the same set of commands listed above should work.

Thanks for your reply. What would be the process of "converting" an existing container into a "migratable" one?

-- 
Tomasz Chmielewski
http://wpkg.org
[lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?
I get the following when starting a container with lxd:

    Incomplete AppArmor support in your kernel
    If you really want to start this container, set
    lxc.aa_allow_incomplete = 1 in your container configuration file

Where exactly do I set this with lxd? I don't really see a "config" file like with lxc. Is it "metadata.yaml"? If so - how do I set it there?

Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?
Thanks, it worked. How do I set other "lxc-style" values in lxd, for example:

    lxc.network.ipv4 = 10.0.12.2/24
    lxc.network.ipv4.gateway = 10.0.12.1
    lxc.network.ipv6 = :::::55
    lxc.network.ipv6.gateway = :2345:6789:::2

The same way, with "lxc config set containername", i.e.:

    lxc config set x1 raw.lxc "lxc.network.ipv4 = 10.0.12.2/24"
    lxc config set x1 raw.lxc "lxc.network.ipv4.gateway = 10.0.12.1"
    lxc config set x1 raw.lxc "lxc.network.ipv6 = :::::55"
    lxc config set x1 raw.lxc "lxc.network.ipv6.gateway = :2345:6789:::2"

Or is there some other, more recommended way?

Tomasz

On 2015-10-27 02:35, Serge Hallyn wrote:

> That's an ideal use for 'raw.lxc':
>
>     lxc config set x1 raw.lxc "lxc.aa_allow_incomplete=1"
>
> The lxc configuration for lxd containers is auto-generated on each
> container start, as is the apparmor policy. The contents of the
> 'raw.lxc' config item are appended to the auto-generated config.
>
> Quoting Tomasz Chmielewski (man...@wpkg.org):
>> I get the following when starting a container with lxd:
>>
>>     Incomplete AppArmor support in your kernel
>>     If you really want to start this container, set
>>     lxc.aa_allow_incomplete = 1 in your container configuration file
>>
>> Where exactly do I set this with lxd? I don't really see a "config"
>> file like with lxc. Is it "metadata.yaml"? If so - how to set it there?
Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?
Interesting - this doesn't really work and hangs lxd:

1) first try:

root@srv7 ~ # lxc config set testct raw.lxc "lxc.network.ipv4=10.0.3.228/24"
error: problem applying raw.lxc, perhaps there is a syntax error?
root@srv7 ~ #

2) second try - it never returns:

root@srv7 ~ # lxc config set testct raw.lxc "lxc.network.ipv4=10.0.3.228/24"
(hangs here, no prompt)

3) in a different shell - also hangs and never returns:

root@srv7 ~ # lxc list

4) this also hangs and never returns:

root@srv7 ~ # service lxd stop

In the log, I can see:

lxc 1445956132.156 ERROR lxc_confile - confile.c:network_netdev:544 - network is not created for 'lxc.network.ipv4' = '10.0.3.228/.24' option
lxc 1445956132.156 ERROR lxc_parse - parse.c:lxc_file_for_each_line:57 - Failed to parse config: lxc.network.ipv4=10.0.3.228/.24

Tomasz

On 2015-10-27 10:02, Tomasz Chmielewski wrote: (...)
Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?
On 2015-10-27 23:36, Serge Hallyn wrote:

> Same "lxc config set containername", i.e.:
>
> lxc config set x1 raw.lxc "lxc.network.ipv4 = 10.0.12.2/24"
> lxc config set x1 raw.lxc "lxc.network.ipv4.gateway = 10.0.12.1"
> lxc config set x1 raw.lxc "lxc.network.ipv6 = :::::55"
> lxc config set x1 raw.lxc "lxc.network.ipv6.gateway = :2345:6789:::2"
>
> Or is there some other, more recommended way?

Can you show in full what you want to do? Are you aiming for a routed config, or are you modifying a bridged nic device? The above is mixed: routed IPv4 and bridged IPv6 config.

But it doesn't matter if it's bridged or routed - all I want to do is:

- set static IPv4 and IPv6 addresses, without doing so inside the container (works with lxc),
- be sure lxd does not hang if I supply something incompatible on the CLI :)

Tomasz Chmielewski http://wpkg.org
Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?
On 2015-10-27 23:36, Serge Hallyn wrote: Quoting Tomasz Chmielewski (man...@wpkg.org): Thanks, it worked. How do I set other "lxc-style" values in lxd, like for example: lxc.network.ipv4 = 10.0.12.2/24 lxc.network.ipv4.gateway = 10.0.12.1 lxc.network.ipv6 = :::::55 lxc.network.ipv6.gateway = :2345:6789:::2 You need to set a single lxc.raw to the whole multi-line value. Hmm, what do you mean by that? Can you give an example? Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
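For the archives, one possibility (an untested sketch; assumes a POSIX shell and a container named x1): build the whole multi-line value first, then set raw.lxc once, rather than calling "lxc config set" per line (each call replaces the previous value):

```shell
# Build the complete multi-line value first; the printf embeds a real
# newline, and the command substitution preserves it in $RAW.
RAW=$(printf 'lxc.network.ipv4 = 10.0.12.2/24\nlxc.network.ipv4.gateway = 10.0.12.1')
printf '%s\n' "$RAW"
# Then apply it in a single call (assumes container x1 exists):
# lxc config set x1 raw.lxc "$RAW"
```

The point is that raw.lxc is one config item holding the whole blob, so it has to be set in one shot.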
Re: [lxc-users] set "lxc.aa_allow_incomplete = 1" - where do I add it for lxd?
On 2015-10-27 23:54, Serge Hallyn wrote: (...) But it doesn't matter if it's bridged or routed - all I want to do is: - to set static IPv4 and IPv6 addresses, without doing so in the container (works with lxc), - be sure lxd does not hang if I supply something incompatible in CLI :) Yeah, that one is bad! Can you open an issue for that? Added: https://github.com/lxc/lxd/issues/1246 Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] iptables-save not working in unprivileged containers?
For some reason, iptables-save does not seem to be working in unprivileged containers. To reproduce:

- this adds a sample iptables rule:

# iptables -A INPUT -p tcp --dport 22 -j ACCEPT

- this lists the rule:

# iptables -L -v -n
Chain INPUT (policy ACCEPT 13166 packets, 5194K bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 12620 packets, 656K bytes)
pkts bytes target prot opt in out source destination

- this is supposed to dump iptables rules to stdout - but it doesn't:

# iptables-save
#

Any idea how to make "iptables-save" work in unprivileged lxc containers?

Tomasz Chmielewski http://wpkg.org
Re: [lxc-users] iptables-save not working in unprivileged containers?
On 2015-11-10 01:22, Fiedler Roman wrote:

> # iptables -A INPUT -p tcp --dport 22 -j ACCEPT
>
> Yes, also here. Compare iptables-save with iptables-save -t filter - the latter should work. I think that some special tables cannot be read in unpriv (mangle perhaps).

It seems to behave just like "iptables-save" executed by a non-root user (outside a container).

Tomasz Chmielewski http://wpkg.org
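Building on the -t hint, a sketch for dumping table by table, so a single unreadable table does not silently empty the whole dump (assumes the five standard tables; the iptables-save calls are skipped where the binary is unavailable or unprivileged):

```shell
# Dump each iptables table separately; tables the container cannot read
# produce a marker comment instead of blanking the entire output.
tables="filter nat mangle raw security"
for t in $tables; do
  if command -v iptables-save >/dev/null 2>&1; then
    iptables-save -t "$t" 2>/dev/null || echo "# table $t: not readable here"
  fi
done
```

Whether this is acceptable depends on whether the rules you care about live only in the readable tables.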
[lxc-users] [BUG] lxc-destroy destroying wrong containers
lxc-destroy may be destroying wrong containers! To reproduce:

1) have a container you want to clone - here, testvm012d:

# lxc-ls -f
NAME STATE IPV4 IPV6 GROUPS AUTOSTART
---
testvm012d STOPPED - - - NO

2) clone it - but before the command returns, press ctrl+c (say, you realized you used a wrong name and want to interrupt):

# lxc-clone -B dir testvm012d testvm13d
[ctrl+c]

3) lxc-ls will now show two containers:

# lxc-ls -f
NAME STATE IPV4 IPV6 GROUPS AUTOSTART
---
testvm012d STOPPED - - - NO
testvm13d STOPPED - - - NO

4) we can see that the "interrupted" container was not fully copied - let's remove it then with lxc-destroy:

# du -sh testvm012d testvm13d
462M testvm012d
11M testvm13d
# lxc-destroy -n testvm13d
# echo $?
0

5) as expected, lxc-ls only lists the original container now:

# lxc-ls -f
NAME STATE IPV4 IPV6 GROUPS AUTOSTART
---
testvm012d STOPPED - - - NO

6) unfortunately rootfs for the original container is gone:

# du -sh testvm012d
4.0K testvm012d
# ls testvm012d/
config

If it matters, my containers are in /srv/lxc/, symlinked from /var/lib/lxc/

Tomasz Chmielewski http://wpkg.org
Re: [lxc-users] [BUG] lxc-destroy destroying wrong containers
On 2015-11-10 20:29, Christian Brauner wrote: This may not have something to do with lxc-destroy but with how clones work. Can you proceed only up to step 2) you listed:

> 2) clone it - but before the command returns, press ctrl+c (say, you
> realized you used a wrong name and want to interrupt):
>
> # lxc-clone -B dir testvm012d testvm13d
> [ctrl+c]

and immediately afterwards check whether the rootfs of the original container testvm012d is still present?

Step 4 shows the original container is still intact:

# du -sh testvm012d testvm13d
462M testvm012d
11M testvm13d

So it must be lxc-destroy.

Tomasz Chmielewski http://wpkg.org
Re: [lxc-users] [BUG] lxc-destroy destroying wrong containers
On 2015-11-10 22:47, Christian Brauner wrote: Yes, it is lxc-destroy, but lxc-destroy does exactly what it is expected to do. The cause is the incomplete clone: when you clone a container, the config of the original container gets copied. After the clone (copying the storage etc.) succeeds, the config is updated. That means before the config is updated, the config of your clone still contains the rootfs path of the original container. You can verify this by doing:

# lxc-clone -B dir testvm012d testvm13d
[ctrl+c]

and checking testvm13d/config in your favourite editor - it should still contain:

lxc.rootfs = /path/to/testvm012d/rootfs

in contrast to when the copy of the rootfs of the original container succeeds. Then it will contain:

lxc.rootfs = /path/to/testvm13d/rootfs

(lxc-devel might be a good place to determine whether this is a bug or not.)

Looks like lxc-clone should copy the config file at the very end, after rootfs.

Tomasz Chmielewski http://wpkg.org
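Until that is changed, a defensive check before destroying a half-cloned container could look like this (a sketch with a simulated config file and made-up paths; the real file would be /var/lib/lxc/NAME/config):

```shell
# Simulate a half-finished clone: its config still points at the ORIGINAL
# container's rootfs (hypothetical paths, for illustration only).
cfg=$(mktemp)
echo 'lxc.rootfs = /var/lib/lxc/testvm012d/rootfs' > "$cfg"

# Before running "lxc-destroy -n testvm13d", verify the config's rootfs
# actually belongs to the container you are about to destroy.
if grep -q 'testvm13d/rootfs' "$cfg"; then
  verdict=safe
else
  verdict=unsafe
fi
echo "verdict: $verdict"
rm -f "$cfg"
```

Here the check reports "unsafe", which is exactly the interrupted-clone situation described above.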
Re: [lxc-users] [BUG] lxc-destroy destroying wrong containers
On 2015-11-11 07:28, Serge Hallyn wrote: Hi, as I think was mentioned elsewhere I suspect this is a bug in the clone code. Could you open a github issue at github.com/lxc/lxc/issues and assign it to me? Added: https://github.com/lxc/lxc/issues/694 -- Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] lxc snapshot ... --stateful - "read-only file system"
I'm trying to do a stateful snapshot - unfortunately, it fails:

# lxc snapshot odoo08 "test" --stateful
error: mkdir /var/lib/lxd/snapshots/odoo08/test/state: read-only file system

Does anyone know why? The following log is created:

lxc 1452328231.622 INFO lxc_confile - confile.c:config_idmap:1437 - read uid map: type u nsid 0 hostid 10 range 65536
lxc 1452328231.622 INFO lxc_confile - confile.c:config_idmap:1437 - read uid map: type g nsid 0 hostid 10 range 65536
lxc 1452328231.626 WARN lxc_cgmanager - cgmanager.c:cgm_get:994 - do_cgm_get exited with error

Running:

ii liblxc1 1.1.5-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools (library)
ii lxc 1.1.5-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools
ii lxc-templates 1.1.5-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools (templates)
ii lxcfs 0.15-0ubuntu2~ubuntu14.04.1~ppa1 amd64 FUSE based filesystem for LXC
ii lxd 0.26-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 0.26-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Container hypervisor based on LXC - client
ii python3-lxc 1.1.5-0ubuntu3~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools (Python 3.x bindings)

# uname -a
Linux srv7 4.3.3-040303-generic #201512150130 SMP Tue Dec 15 06:32:30 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Tomasz Chmielewski http://wpkg.org/
Re: [lxc-users] lxc snapshot ... --stateful - "read-only file system"
Yes, this is on btrfs. On January 9, 2016 9:46:22 PM GMT+09:00, Tycho Andersen wrote: >On Sat, Jan 09, 2016 at 06:58:16PM +0900, Tomasz Chmielewski wrote: >> I'm trying to do a stateful snapshot - unfortunately, it fails: >> >> # lxc snapshot odoo08 "test" --stateful >> error: mkdir /var/lib/lxd/snapshots/odoo08/test/state: read-only file >system > >Looks like https://github.com/lxc/lxd/issues/1485, are you on btrfs as >well? > >Tycho >___ >lxc-users mailing list >lxc-users@lists.linuxcontainers.org >http://lists.linuxcontainers.org/listinfo/lxc-users ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] can't remove snapshots with certain names
I'm not able to remove snapshots with certain names. This works as expected: # lxc snapshot odoo10 "2016-01-10" # lxc delete odoo10/2016-01-10 This also works as expected: # lxc snapshot odoo10 "2016-01-10 test number 1" # lxc delete odoo10/2016-01-10\ test\ number\ 1 # lxc snapshot odoo10 "2016-01-10 test number 2" # lxc delete odoo10/"2016-01-10 test number 2" # lxc snapshot odoo10 "2016-01-10 test number 3" # lxc delete "odoo10/2016-01-10 test number 3" This doesn't seem to work (with ":" in snapshot name): # lxc snapshot odoo10 "2016-01-10 23:26" # lxc delete "odoo10/2016-01-10 23:26" error: unknown remote name: "odoo10/2016-01-10 23" # lxc delete "odoo10/2016-01-10 23\:26" error: unknown remote name: "odoo10/2016-01-10 23\\" Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
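Until the client parses this, one workaround is simply to avoid ":" in snapshot names, since everything before the first ":" is treated as a remote name on the command line. A sketch for generating colon-free timestamped names:

```shell
# Generate a snapshot name without ":", which collides with LXD's
# remote:container syntax; use "-" between hour and minute instead.
name=$(date +%Y-%m-%d-%H%M)
echo "$name"
# then, for example: lxc snapshot odoo10 "$name"
```

The resulting names sort chronologically and can be deleted without any quoting gymnastics.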
[lxc-users] lxd: restore snapshot as a new container?
Can lxc restore a snapshot as a new container? Let's say I have a container named "container1" and make a snapshot called "test1": # lxc snapshot container1 "test1" How would I restore it as a new container, called "container1-test"? Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] lxd - autostart unreliable on busy servers
When I restart a busy server (running several containers, creating 100% IO load for about 10 mins after start), my lxd containers do not autostart reliably. If I start them manually later on, they start fine (although "lxc start containername" needs a while to return).

Is there a way to make lxd autostart more reliable? Perhaps it's some kind of timeout which needs to be increased somewhere?

In the log, I can see:

lxc 1453630080.796 ERROR lxc_cgmanager - cgmanager.c:cgm_dbus_connect:176 - Error cgroup manager api version: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
lxc 1453630080.796 ERROR lxc_cgmanager - cgmanager.c:do_cgm_get:872 - Error connecting to cgroup manager
lxc 1453630080.797 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 1453630080.799 DEBUG lxc_cgmanager - cgmanager.c:cgm_dbus_connect:152 - Failed opening dbus connection: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /sys/fs/cgroup/cgmanager/sock: Connection refused
lxc 1453630080.799 ERROR lxc_cgmanager - cgmanager.c:do_cgm_get:872 - Error connecting to cgroup manager
lxc 1453630080.799 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 1453630081.096 DEBUG lxc_cgmanager - cgmanager.c:cgm_dbus_connect:152 - Failed opening dbus connection: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /sys/fs/cgroup/cgmanager/sock: Connection refused
lxc 1453630081.097 ERROR lxc_cgmanager - cgmanager.c:do_cgm_get:872 - Error connecting to cgroup manager
lxc 1453630081.097 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error
lxc 1453630085.958 INFO lxc_confile - confile.c:config_idmap:1437 - read uid map: type u nsid 0 hostid 10 range 65536
lxc 1453630085.958 INFO lxc_confile - confile.c:config_idmap:1437 - read uid map: type g nsid 0 hostid 10 range 65536
lxc 1453630085.960 DEBUG lxc_cgmanager - cgmanager.c:cgm_dbus_connect:152 - Failed opening dbus connection: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /sys/fs/cgroup/cgmanager/sock: Connection refused
lxc 1453630085.960 ERROR lxc_cgmanager - cgmanager.c:do_cgm_get:872 - Error connecting to cgroup manager
lxc 1453630085.961 WARN lxc_cgmanager - cgmanager.c:cgm_get:989 - do_cgm_get exited with error

Tomasz Chmielewski http://wpkg.org
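In the absence of a tunable autostart timeout, one stopgap (an untested sketch, not an lxd feature) is a boot-time script that retries starting containers until the cgroup manager answers:

```shell
# retry ATTEMPTS CMD...: run CMD until it succeeds, sleeping between
# attempts; returns non-zero only after all attempts fail. At boot this
# could wrap e.g. "lxc start containername".
retry() {
  attempts=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$attempts" ] && return 1
    sleep 1
  done
}

retry 5 true && echo "started"
```

With a longer sleep (say 60s) this rides out the IO storm after reboot instead of giving up on the first cgmanager connection failure.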
Re: [lxc-users] lxd: restore snapshot as a new container?
On 2016-01-20 02:04, Serge Hallyn wrote: Quoting Tomasz Chmielewski (man...@wpkg.org): Can lxc restore a snapshot as a new container? Let's say I have a container named "container1" and make a snapshot called "test1": # lxc snapshot container1 "test1" How would I restore it as a new container, called "container1-test"? lxc copy container1/test1 container1-test1 If using a filesystem which allows snapshotting (btrfs) - will it copy container's directory (uses lots of space, takes long), or snapshot it (almost instant, uses almost no extra space)? Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] lxd: restore snapshot as a new container?
On 2016-01-25 22:19, Tomasz Chmielewski wrote: Let's say I have a container named "container1" and make a snapshot called "test1": # lxc snapshot container1 "test1" How would I restore it as a new container, called "container1-test"? lxc copy container1/test1 container1-test1 If using a filesystem which allows snapshotting (btrfs) - will it copy container's directory (uses lots of space, takes long), or snapshot it (almost instant, uses almost no extra space)? It seems to be doing a proper snapshot - good :) Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] lxc file "only allowed for containers that are currently running"?
According to the fine manual[1], lxc file "is only allowed for containers that are currently running".

I've tried doing both push and pull operations on a container in STOPPED state, and it worked, i.e.:

lxc file pull stopped-container/etc/services .
lxc file push services stopped-container/etc/services

So either the documentation is outdated, and lxc file push/pull is allowed for containers in any state (or at least RUNNING and STOPPED), or the functionality will be removed. Which one is true? Being able to push/pull files is quite convenient.

I'm using: lxd-client 0.27-0ubuntu2~ubuntu14.04.1~ppa1 amd64

[1] https://github.com/lxc/lxd/blob/master/specs/command-line-user-experience.md#file

Tomasz Chmielewski http://wpkg.org/
Re: [lxc-users] lxc file "only allowed for containers that are currently running"?
On 2016-01-26 01:46, Stéphane Graber wrote: So either documentation is outdated, and lxc push/pull is allowed for containers in any state (or at least RUNNING and STOPPED) or the functionality will be removed. Which one is true? Being able to push/pull the files is quite convenient. I changed file pull/push a little while ago to work against stopped containers too, clearly I forgot to update the documentation :) Excellent! A pull request would be appreciated, otherwise I'll try to remember to fix this next time I look at the specs. I would if I knew how! Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] "termination protection"?
Is there a way to protect containers against accidental termination? For example:

# lxc list
| container-2016-01-25-17-20-11 | RUNNING | 10.190.0.50 (eth0) (...)

# lxc delete container-2016-01-25-17-20-11

No longer there!

Some kind of "lxc config set containername allowdelete=0" would be very useful:

- "s" is next to "d" on the keyboard, so it's easy to delete a container with "lxc d<tab> containername"
- it would feel safer to protect important containers this way
- probably "lxc config set containername allowdelete=0" should not protect snapshots named explicitly, i.e. "lxc delete containername/snapshot"

Tomasz Chmielewski http://wpkg.org
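Until something like that exists server-side, a client-side guard can be scripted (a hypothetical sketch; the protect-file path and function name are made up, and the final lxc call assumes the lxd client is installed):

```shell
# Refuse to delete any container listed, one name per line, in a local
# protect file; only fall through to "lxc delete" for unlisted names.
LXC_PROTECT_FILE=${LXC_PROTECT_FILE:-$HOME/.lxc_protected}

lxc_delete_safe() {
  if grep -qx "$1" "$LXC_PROTECT_FILE" 2>/dev/null; then
    echo "refusing to delete protected container: $1" >&2
    return 1
  fi
  lxc delete "$1"
}
```

Using lxc_delete_safe instead of lxc delete in scripts then fails fast on protected names; it obviously cannot protect against someone typing lxc delete directly.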
[lxc-users] lxd-client: frequent "Unable to connect to"
I'm using lxd-client in a lxd container. Unfortunately, very frequently I'm getting "Unable to connect to" (after several seconds of "hanging"), for example:

# lxc list
error: Get https://10.0.3.1:8443/1.0/containers?recursion=1: Unable to connect to: 10.0.3.1:8443

I've checked with tcpdump on the host, and the packets are going both ways.

At the same time when it happens (lxc list hangs, or any lxc commands hang), "curl -k -v https://10.0.3.1:8443" just works - i.e. returns:

{"type":"sync","status":"Success","status_code":200,"metadata":["/1.0"],"operation":""}

Do you have any ideas to debug this? The same executed directly on the host always works fine.

Tomasz Chmielewski http://wpkg.org
Re: [lxc-users] lxd-client: frequent "Unable to connect to"
Entropy seems fine.

It looks as if communication in the bridge stops working sometimes, probably when there is no traffic in it for longer periods of time.

Tomasz Chmielewski http://wpkg.org

On 2016-01-26 20:57, Jäkel wrote: Dear Tomasz, What's the output of watch -n 1 cat /proc/sys/kernel/random/entropy_avail around blocking or hanging times? (...)
[lxc-users] "lxc file push" corrupts files
In some cases, "lxc file push" corrupts files. To reproduce: - file must exist in the container - existing file in the container must be bigger than the file being pushed Reproducer: * make sure /tmp/testfile does not exist in the container: host# lxc exec container rm /tmp/testfile * create two files locally: host# echo a > smaller host# echo ab > bigger host# md5sum smaller bigger 60b725f10c9c85c70d97880dfe8191b3 smaller daa8075d6ac5ff8d0c6d4650adb4ef29 bigger host# ls -l smaller bigger -rw-r--r-- 1 root root 3 Jan 27 07:35 bigger -rw-r--r-- 1 root root 2 Jan 27 07:35 smaller * send the bigger file: host# lxc file push bigger container/tmp/testfile * check if it's OK - it is: container# md5sum testfile daa8075d6ac5ff8d0c6d4650adb4ef29 testfile container# ls -l testfile -rw-r--r-- 1 root root 3 Jan 27 07:37 testfile * now, send the smaller file to overwrite "testfile" host# lxc file push smaller container/tmp/testfile * it is neither of the files we've sent: container# md5sum testfile 94364860a0452ac23f3dac45f0091d81 testfile container# ls -l testfile -rw-r--r-- 1 root root 3 Jan 27 07:38 testfile container# cat testfile a <-- extra end of line <-- extra end of line container# Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] "lxd file pull" leaves temporary files in /tmp
"lxd file pull" leaves temporary files in /tmp. To reproduce: * remove any old "temporary files" created by "lxc file pull": # rm /tmp/lxd_forkgetfile_* * pull some files (can be the same file, too) - let's say we repeat it 3 times: # lxc file pull some-container/etc/services . # lxc file pull some-container/etc/services . # lxc file pull some-container/etc/services . * see that there are 3 "lxd_forkgetfile_*" files in /tmp # ls /tmp/lxd* /tmp/lxd_forkgetfile_149745082 /tmp/lxd_forkgetfile_171296584 /tmp/lxd_forkgetfile_521794854 Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] container name limit includes snapshot names? unable to create snapshots
It's not possible to create snapshots for containers with long names. To reproduce: Container name seems to be limited to 63 characters: * trying to create a container with 64 characters fails - that's OK, here, we just wanted to know what the limit for a container name is: # lxc copy base aabbccddeeff error: Container name isn't a valid hostname. * trying to create a container with less characters (63, 62 and 61) succeeds: # lxc copy base-uni-web01 aabbccddeeffggg # lxc copy base-uni-web01 aabbccddeeffgg # lxc copy base-uni-web01 aabbccddeeffg * Now, let's try to create a snapshot - container name with 63 characters: # lxc snapshot aabbccddeeffggg snapname error: Failed to set LXC config: lxc.utsname=aabbccddeeffggg/snapname # lxc snapshot aabbccddeeffggg snap error: Failed to set LXC config: lxc.utsname=aabbccddeeffggg/snap # lxc snapshot aabbccddeeffggg s error: Failed to set LXC config: lxc.utsname=aabbccddeeffggg/s * Trying to create a snapshot for a container with 62 character name - it's only possible to set one character snapshot name: # lxc snapshot aabbccddeeffgg snapshot error: Failed to set LXC config: lxc.utsname=aabbccddeeffgg/snapshot # lxc snapshot aabbccddeeffgg snap error: Failed to set LXC config: lxc.utsname=aabbccddeeffgg/snap # lxc snapshot aabbccddeeffgg sn error: Failed to set LXC config: lxc.utsname=aabbccddeeffgg/sn # lxc snapshot aabbccddeeffgg s # Feature or bug? In my opinion a bug, counter intuitive, makes scripting hard. Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] lxc exec - is there a way to run command in the background?
Is there a way to run a command in the background when using "lxc exec"? It doesn't seem to work for me:

# lxc exec container -- sleep 2h &
[2] 13566
#
[2]+ Stopped lxc exec container -- sleep 2h

This also doesn't work:

# lxc exec container -- "sleep 2h &"
#

Tomasz Chmielewski http://wpkg.org
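One workaround (a sketch; assumes a POSIX shell is available inside the container) is to detach on the container side, so the exec'd process exits immediately and "lxc exec" returns:

```shell
# With LXD (assumes a container named "container" exists):
#   lxc exec container -- sh -c 'nohup sleep 2h >/dev/null 2>&1 &'
# The detach pattern itself, demonstrated with plain sh: the inner shell
# backgrounds the job with its output redirected away, then exits at once.
out=$(sh -c 'nohup sleep 1 >/dev/null 2>&1 & echo started')
echo "$out"
```

The redirection matters: if the backgrounded process keeps stdout/stderr open, whatever captures the exec's output may still wait for EOF.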
[lxc-users] "lxc stop" hangs?
I have a problem with "lxc stop" command which stops the container, but never returns. It used to work some time ago (not sure when exactly it stopped working). # dpkg -l|grep lxd ii lxd 2.0.0~beta1-0ubuntu4~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - daemon ii lxd-client 2.0.0~beta1-0ubuntu4~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - client From strace, I can see that the last GET request, but after that, it just hangs (with container correctly stopped). Is it a known problem? [pid 16696] <... read resumed> "HTTP/1.1 301 Moved Permanently\r\nLocation: /1.0/operations/9db9ffdb-9a7c-46f6-a300-9ac61074f5a2/wait\r\nDate: Tue, 09 Feb 2016 03:45:36 GMT\r\nContent-Length: 0\r\nContent-Type: text/plain; charset=utf-8\r\n\r\n", 4096) = 200 [pid 16693] futex(0xd88d90, FUTEX_WAKE, 1) = 0 [pid 16693] select(0, NULL, NULL, NULL, {0, 20} [pid 16696] futex(0xc82002e908, FUTEX_WAKE, 1) = 1 [pid 16694] <... futex resumed> ) = 0 [pid 16696] read(3, [pid 16694] epoll_wait(4, [pid 16696] <... read resumed> 0xc8200a7000, 4096) = -1 EAGAIN (Resource temporarily unavailable) [pid 16694] <... epoll_wait resumed> {}, 128, 0) = 0 [pid 16693] <... select resumed> ) = 0 (Timeout) [pid 16694] epoll_wait(4, [pid 16693] select(0, NULL, NULL, NULL, {0, 20} [pid 16696] futex(0xd89a08, FUTEX_WAKE, 1 [pid 16692] <... futex resumed> ) = 0 [pid 16696] <... futex resumed> ) = 1 [pid 16692] select(0, NULL, NULL, NULL, {0, 100} [pid 16693] <... select resumed> ) = 0 (Timeout) [pid 16693] select(0, NULL, NULL, NULL, {0, 20} [pid 16696] write(3, "GET /1.0/operations/9db9ffdb-9a7c-46f6-a300-9ac61074f5a2/wait HTTP/1.1\r\nHost: unix.socket\r\nUser-Agent: Go-http-client/1.1\r\nReferer: http://unix.socket//1.0/operations/9db9ffdb-9a7c-46f6-a300-9ac61074f5a2/wait\r\nAccept-Encoding: gzip\r\n\r\n", 235 [pid 16692] <... select resumed> ) = 0 (Timeout) [pid 16696] <... 
write resumed> ) = 235 [pid 16692] futex(0xd89a08, FUTEX_WAIT, 0, NULL [pid 16696] futex(0xd89a08, FUTEX_WAKE, 1 [pid 16692] <... futex resumed> ) = -1 EAGAIN (Resource temporarily unavailable) [pid 16696] <... futex resumed> ) = 0 [pid 16692] select(0, NULL, NULL, NULL, {0, 100} [pid 16696] futex(0xc820062108, FUTEX_WAIT, 0, NULL [pid 16694] <... epoll_wait resumed> {{EPOLLOUT, {u32=1170309096, u64=139643442003944}}}, 128, -1) = 1 [pid 16693] <... select resumed> ) = 0 (Timeout) [pid 16694] epoll_wait(4, [pid 16693] select(0, NULL, NULL, NULL, {0, 20} [pid 16692] <... select resumed> ) = 0 (Timeout) [pid 16693] <... select resumed> ) = 0 (Timeout) [pid 16692] futex(0xd89a08, FUTEX_WAIT, 0, NULL [pid 16693] futex(0xd88e50, FUTEX_WAIT, 0, {60, 0}) = -1 ETIMEDOUT (Connection timed out) [pid 16693] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout) [pid 16693] futex(0xd88e50, FUTEX_WAIT, 0, {60, 0}) = -1 ETIMEDOUT (Connection timed out) [pid 16693] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout) [pid 16693] futex(0xd88e50, FUTEX_WAIT, 0, {60, 0} Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] all lxc containers gone after update?
Looks like all my lxc containers are gone after some recent update:

# lxc-ls -f
(no output at all)

# lxc-attach -n somecontainer
Error: container somecontainer is not defined

Is there a way to fix this?

# dpkg -l|grep lxc
ii liblxc1 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools (library)
ii lxc 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 all Transitional package for lxc1
ii lxc-common 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools (common tools)
ii lxc-templates 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools (templates)
ii lxc1 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools
ii lxcfs 2.0.0~beta2-0ubuntu1~ubuntu14.04.1~ppa1 amd64 FUSE based filesystem for LXC
ii python3-lxc 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64 Linux Containers userspace tools (Python 3.x bindings)

Tomasz Chmielewski http://wpkg.org
Re: [lxc-users] all lxc containers gone after update?
On 2016-02-25 11:15, Tomasz Chmielewski wrote: Looks all my lxc containers are gone after some recent update: # lxc-ls -f (no output at all) # lxc-attach -n somecontainer Error: container somecontainer is not defined FYI, my /var/lib/lxc was a symbolic link to /some/other/mountpoint. Looks like the upgrade removed that symbolic link and created an empty /var/lib/lxc directory. Restoring the symbolic link "fixed" this. Is it expected that /var/lib/lxc is removed when lxc is updated? Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] no lxc nor lxd containers start after recent update
None of my LXC or LXD containers start after the most recent lxc/lxd update. Any clues?

# lxc-start -n td-backupslave
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

# cat /var/log/lxc/td-backupslave.log
lxc-start 20160226000408.599 ERROR lxc_utils - utils.c:open_without_symlink:1625 - No such file or directory - Error examining efi in /usr/lib/x86_64-linux-gnu/lxc/sys/firmware/efi/efivars
lxc-start 20160226000408.599 ERROR lxc_cgfs - cgfs.c:cgroupfs_mount_cgroup:1504 - Permission denied - error bind-mounting /run/lxcfs/controllers/name=systemd/lxc/td-backupslave to /usr/lib/x86_64-linux-gnu/lxc/sys/fs/cgroup/systemd/lxc/td-backupslave
lxc-start 20160226000408.599 ERROR lxc_conf - conf.c:lxc_mount_auto_mounts:780 - Permission denied - error mounting /sys/fs/cgroup
lxc-start 20160226000408.599 ERROR lxc_conf - conf.c:lxc_setup:3742 - failed to setup the automatic mounts for 'td-backupslave'
lxc-start 20160226000408.599 ERROR lxc_start - start.c:do_start:783 - failed to setup the container
lxc-start 20160226000408.599 ERROR lxc_sync - sync.c:__sync_wait:51 - invalid sequence number 1. expected 2
lxc-start 20160226000408.599 ERROR lxc_start - start.c:__lxc_start:1274 - failed to spawn 'td-backupslave'
lxc-start 20160226000414.138 ERROR lxc_start_ui - lxc_start.c:main:344 - The container failed to start.
lxc-start 20160226000414.138 ERROR lxc_start_ui - lxc_start.c:main:346 - To get more details, run the container in foreground mode.
lxc-start 20160226000414.138 ERROR lxc_start_ui - lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.
# dpkg -l | grep lxc
ii  liblxc1        2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (library)
ii  lxc            2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1  all    Transitional package for lxc1
ii  lxc-common     2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (common tools)
ii  lxc-templates  2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (templates)
ii  lxc1           2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools
ii  lxcfs          2.0.0~rc2-0ubuntu1~ubuntu14.04.1~ppa1  amd64  FUSE based filesystem for LXC
ii  python3-lxc    2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1  amd64  Linux Containers userspace tools (Python 3.x bindings)

# dpkg -l | grep lxd
ii  lxd         2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - daemon
ii  lxd-client  2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - client
ii  lxd-tools   2.0.0~beta4-0ubuntu5~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - extra tools

Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] no lxc nor lxd containers start after recent update
Doing the following seems to help:

# service lxcfs stop
# service lxcfs start

After that, I'm able to manually start both LXC and LXD containers.

Tomasz

On 2016-02-26 09:04, Tomasz Chmielewski wrote:
> None of my LXC or LXD containers start after the most recent lxc/lxd
> update. Any clues? (...)
Re: [lxc-users] no lxc nor lxd containers start after recent update
Also, with the "service lxcfs stop/start" fix, the console in LXD containers looks botched (in LXC containers, all is fine) - after pressing Enter, I get the output below. I guess it wasn't the best of updates, at least for me.

# lxc exec z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55 /bin/bash
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
root@z-testing-a19ea622182c63ddc19bb22cde982b82-2016-02-24-19-20-55:~#
(the prompt repeats like this on every keypress)

Tomasz

On 2016-02-26 09:16, Tomasz Chmielewski wrote:
> Doing the following seems to help:
>
> # service lxcfs stop
> # service lxcfs start
>
> Then, I'm able to manually start lxc and lxd containers. (...)
[lxc-users] lxc stop / lxc reboot stopped working
After the latest LXD update, lxc stop / lxc reboot no longer work (they hang instead).

# dpkg -l | grep lxd
ii  lxd         2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - daemon
ii  lxd-client  2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - client
ii  lxd-tools   2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - extra tools

# lxc stop z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26 --debug
DBUG[03-09|08:50:05] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"core.https_address":"10.190.0.1:8443","core.trust_password":true},"environment":{"addresses":["10.190.0.1:8443"],"architectures":["x86_64","i686"],"certificate":"-BEGIN CERTIFICATE- (...) -END CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc5","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.4.4-040404-generic","server":"lxd","server_pid":22764,"server_version":"2.0.0.rc2","storage":"btrfs","storage_version":"4.4"},"public":false}}
DBUG[03-09|08:50:05] Raw response:
{"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":"x86_64","config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"created_at":"2016-03-09T08:22:27Z","devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"ephemeral":false,"expanded_config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\ ":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"name":"z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26","profiles":["default"],"stateful":false,"status":"Running","status_code":103}} DBUG[03-09|08:50:05] Putting {"action":"stop","force":false,"stateful":false,"timeout":-1} to http://unix.socket/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26/state DBUG[03-09|08:50:05] Raw response: {"type":"async","status":"Operation 
created","status_code":100,"metadata":{"id":"818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d","class":"task","created_at":"2016-03-09T08:50:05.465171729Z","updated_at":"2016-03-09T08:50:05.465171729Z","status":"Running","status_code":103,"resources":{"containers":["/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26"]},"metadata":null,"may_cancel":false,"err":""},"operation":"/1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d"}
DBUG[03-09|08:50:05] 1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d/wait

It just sits and hangs here. Is there any quick fix for that?

Other than that - do you have any system that checks basic functionality before pushing packages to the general public? It seems we've had a lot of bugs making LXD unusable lately.

Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] lxc stop / lxc reboot stopped working
Am I the only one affected? It also happens with:

ii  lxd         2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - daemon
ii  lxd-client  2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - client
ii  lxd-tools   2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - extra tools

"lxc restart containername" mostly just hangs.

Tomasz

On 2016-03-09 17:53, Tomasz Chmielewski wrote:
> After the latest LXD update, lxc stop / lxc reboot no longer work (and
> hang instead). (...)
Re: [lxc-users] lxc stop / lxc reboot stopped working
Something like this reproduces it for me reliably (it hangs on the first or second "stop"):

while true; do
    echo stop
    time lxc stop containername --debug
    sleep 5
    echo start
    lxc start containername
done

Tomasz

On 2016-03-11 01:35, Tomasz Chmielewski wrote:
> Am I the only one affected?
> (...)
> "lxc restart containername" mostly just hangs.
Re: [lxc-users] lxc stop / lxc reboot stopped working
For "lxc restart", this reproduces it reliably (below). There seems to be a race - with lower "sleep" values, it appears more likely to fail.

# while true; do
    echo restart
    time lxc restart containername
    sleep 3
done

restart
real    0m15.448s
user    0m0.048s
sys     0m0.000s
restart
real    0m11.373s
user    0m0.052s
sys     0m0.004s
restart
real    0m13.019s
user    0m0.048s
sys     0m0.000s
restart
real    0m6.023s
user    0m0.040s
sys     0m0.008s
restart
real    0m7.106s
user    0m0.048s
sys     0m0.000s
restart
real    0m5.520s
user    0m0.044s
sys     0m0.004s
restart
real    0m49.382s
user    0m0.052s
sys     0m0.000s
restart
real    0m33.426s
user    0m0.048s
sys     0m0.000s
restart
...hangs here...

Tomasz

On 2016-03-11 02:23, Tomasz Chmielewski wrote:
> Something like this reproduces it for me reliably (hangs on the first
> or second "stop"): (...)
[lxc-users] recent lxd update broke lxc exec terminal size?
After a recent LXD update, "lxc exec somecontainer /bin/bash" attaches to the given container's console, but its size is very small - less than 1/4 of the screen. It's quite uncomfortable to work with (i.e. "ps auxf" output is truncated, and ncurses-based programs behave erratically).

Is this an intended change?

Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] zfs disk usage for published lxd images
I've been using btrfs quite a lot and it's great technology. There are some shortcomings, though:

1) Compression only really works with the compress-force mount option.

On a system which only stores text logs (receiving remote rsyslog logs), I was gaining around 10% with the compress=zlib mount option - not that great for text files/logs. With compress-force=zlib, I'm getting over an 85% compression ratio (i.e. using just 165 GB of disk space to store 1.2 TB of data). Maybe that's a consequence of receiving log streams, I'm not sure (but compress-force fixed the poor compression ratio).

2) The first kernel where I'm not getting out-of-space issues is 4.6 (which was released yesterday). If you're using a distribution kernel, you will probably see out-of-space issues. A quite likely scenario for hitting out-of-space with a kernel older than 4.6 is using a database (postgresql, mongo etc.) and snapshotting the volume. Ubuntu users can download kernel packages from http://kernel.ubuntu.com/~kernel-ppa/mainline/

3) I had some really bad experiences with btrfs quota stability in older kernels, and judging by the amount of changes in this area on the linux-btrfs mailing list, I'd rather wait a few stable kernels before using it again.

4) If you use databases, you should chattr +C the database dir, otherwise performance will suffer. Remember that chattr +C does not have any effect on existing files, so you may need to stop your database, copy the files out, chattr +C the database dir, and copy the files back in.

Other than that, it works fine, and snapshots are very useful. It's hard for me to say what's "more stable" on Linux (btrfs or zfs); my bet would be on btrfs getting more attention in the coming year, as it gets its remaining bugs fixed.

Tomasz Chmielewski
http://wpkg.org

On 2016-05-16 20:20, Ron Kelley wrote:
> I tried ZFS on various linux/FreeBSD builds in the past and the
> performance was awful. It simply required too much RAM to perform
> properly. This is why I went the BTRFS route.
> Maybe I should look at ZFS again on Ubuntu 16.04...
>
> On 5/16/2016 6:59 AM, Fajar A. Nugraha wrote:
>> On Mon, May 16, 2016 at 5:38 PM, Ron Kelley wrote:
>>> For what it's worth, I use BTRFS, and it works great.
>>
>> Btrfs also works in nested lxd, so if that's your primary use I highly
>> recommend btrfs. Of course, you could also keep using zfs-backed
>> containers, but manually assign a zvol-formatted-as-btrfs for the
>> first-level container's /var/lib/lxd.
>>
>>> Container copies are almost instant. I can use compression with
>>> minimal overhead,
>>
>> zfs and btrfs are almost identical in that aspect (snapshot/clone, and
>> lz4 vs lzop in compression time and ratio). However, lz4 (used in zfs)
>> is MUCH faster at decompression compared to lzop (used in btrfs), while
>> lzop uses less memory.
>>
>>> use quotas to limit container disk space,
>>
>> zfs does that too
>>
>>> and can schedule a deduplication task via cron to save even more space.
>>
>> That is, indeed, only available in btrfs
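The compress-force and chattr +C advice above can be sketched as commands (a hedged example; /srv/logs, /dev/sdb1 and /var/lib/postgresql are placeholder paths, and the copy-out/copy-in step assumes the database is stopped for its duration):

```shell
# Mount with forced compression (point 1 above).
mount -o compress-force=zlib /dev/sdb1 /srv/logs
# Or persistently, via a line in /etc/fstab:
#   /dev/sdb1  /srv/logs  btrfs  compress-force=zlib  0 0

# Disable copy-on-write for a database directory (point 4 above).
# chattr +C only affects files created after the flag is set, so existing
# files must be copied out and back in while the database is stopped.
service postgresql stop
mv /var/lib/postgresql /var/lib/postgresql.old
mkdir /var/lib/postgresql
chattr +C /var/lib/postgresql
cp -a /var/lib/postgresql.old/. /var/lib/postgresql/
service postgresql start
```

Verify the flag took effect with "lsattr -d /var/lib/postgresql" before deleting the old copy.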
[lxc-users] lxc move fails
Trying to move a running container between two Ubuntu 16.04 servers with the latest updates installed:

# lxc move local-container remote:
error: Error transferring container data: checkpoint failed: (00.316028) Error (pie-util-vdso.c:155): vdso: Not all dynamic entries are present
(00.316211) Error (cr-dump.c:1600): Dumping FAILED.

Expected?

Tomasz Chmielewski
http://wpkg.org
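Since the error above comes from CRIU rather than LXD itself, it can help to verify CRIU support on both hosts before attempting live migration; CRIU ships a self-test for this (a sketch - whether it would have caught this particular vdso failure is an assumption):

```shell
# Verify that the kernel and the CRIU build support checkpoint/restore.
criu check

# A fuller test that also exercises optional features:
criu check --all
```

If "criu check" reports missing kernel features, live migration will fail regardless of the LXD configuration.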
[lxc-users] what sets http_proxy?
I've moved an offline container from one server to another. I started it there, then tried to use curl:

# curl www.example.com
curl: (5) Could not resolve proxy: fe80::1%eth0]

It's because this variable is set:

# echo $http_proxy
http://[fe80::1%eth0]:13128

What sets this, and why? I didn't set it on the host (on the host, http_proxy is empty) nor in the container.

Tomasz Chmielewski
http://wpkg.org
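One way to track down where a login shell picks up such a variable (a sketch; the paths listed are the usual Debian/Ubuntu locations and may differ per image) is to grep the common environment files inside the container:

```shell
# Search the usual sources of environment variables for http_proxy.
grep -rn "http_proxy" \
    /etc/environment /etc/profile /etc/profile.d/ /etc/bash.bashrc \
    /root/.bashrc /root/.profile 2>/dev/null

# apt keeps its own proxy configuration, worth checking too:
grep -rni "proxy" /etc/apt/apt.conf.d/ 2>/dev/null
```

(The thread does not say what wrote the fe80::1 proxy setting; an image template or provisioning tool is one possibility, but that is speculation.)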
Re: [lxc-users] lxc move fails
On 2016-06-01 23:24, Stéphane Graber wrote:
>> Expected?
>
> I don't believe I've seen that one before. Maybe Tycho has.
>
> Live migration is considered an experimental feature of LXD right now,
> mostly because CRIU still needs quite a bit of work to serialize all
> useful kernel structures.
>
> You may want to follow the "Sending bug reports" section of my post here:
> https://www.stgraber.org/2016/04/25/lxd-2-0-live-migration-912/
> That way we should have all the needed data to look into this.

FYI, I'm using Linux 4.6.0 (from http://kernel.ubuntu.com/~kernel-ppa/mainline/) on both servers, as all earlier kernels are not stable with btrfs. Could it be that it's not compatible? Do you still want me to report the issue?

Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] lxc move fails
On 2016-06-02 00:29, Tycho Andersen wrote:
> Yes, an issue report would be appreciated. Can you include the full log?

Reported here: https://bugs.launchpad.net/ubuntu/+source/criu/+bug/1588133

If you need more info, let me know.

> Looks like your container is using some program with some sort of funny
> ELF headers that criu doesn't quite understand.

This is pretty much a standard Ubuntu 16.04. The only running binary from outside the repositories is nginx (from the upstream nginx repo).

Tomasz Chmielewski
http://wpkg.org
[lxc-users] lxc exec / list: x509: certificate has expired or is not yet valid
Not sure what's the procedure for this one: # lxc list error: Get https://10.0.0.1:8443/1.0/containers?recursion=1: x509: certificate has expired or is not yet valid ? Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] lxc exec / list: x509: certificate has expired or is not yet valid
On 2016-06-02 21:09, Tomasz Chmielewski wrote: Not sure what's the procedure for this one: # lxc list error: Get https://10.0.0.1:8443/1.0/containers?recursion=1: x509: certificate has expired or is not yet valid Apparently LXD sets up a certificate with 1 year validity when installed, but provides no mechanism to automatically renew it. That can be a big surprise after a year :| Also, I don't see a CSR file there. So... what is the correct procedure to update the certificate on the LXD server and make sure it's still accepted by LXD clients? # ls /var/lib/lxd/server.* -l -rw-r--r-- 1 root root 1834 Jun 3 2015 /var/lib/lxd/server.crt -rw------- 1 root root 3247 Jun 3 2015 /var/lib/lxd/server.key # openssl x509 -text -noout -in server.crt Certificate: Data: Version: 3 (0x2) Serial Number: 34:f0:eb:8c:3f:76:f0:db:21:01:5d:34:1c:cd:f0:5c Signature Algorithm: sha256WithRSAEncryption Issuer: O=linuxcontainer.org Validity Not Before: Jun 3 06:33:15 2015 GMT Not After : Jun 2 06:33:15 2016 GMT Subject: O=linuxcontainer.org Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (4096 bit) (...) Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] lxc exec / list: x509: certificate has expired or is not yet valid
On 2016-06-02 22:40, Andrey Repin wrote: So... what is the correct procedure to update the certificate on LXD server and make sure it's still accepted by LXD clients? I would go the long route and set up my own CA. Though, I actually did that already... An alternative is to get a certificate through a third-party CA, like Let's Encrypt. Well, it seems that LXD is fine with self-signed certificates as well. Which is OK with me. However, changing a cert with LXD is painful: - it needs a new server.crt/server.key in /var/lib/lxd, and then an lxd restart? force-reload? - if any client connected by IP address (and not by domain name), the certificate needs to list those addresses as SANs (subject alternative names) - there is no "lxc remote" command to accept a new certificate from the server - so LXD clients have to go through the painful dance of setting a different default remote (or setting it to local), removing the remote with the expired certificate, adding the remote back with the new certificate, setting it as the default again, etc. - LXD / the lxc command does not alert that the cert is about to expire, so the user finds out when it's too late and the system stops working correctly (think automated starting / removal of containers etc.) - I could not find anything about changing the cert in the LXD docs, so it was a bit of a problem working out why it stopped working and how to fix it The whole process could be designed a bit better :) Tomasz Chmielewski http://wpkg.org ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
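The regeneration step itself can be scripted with openssl. A minimal sketch, assuming the subject and key size should match what LXD generated originally (the exact parameters LXD uses internally may differ); the resulting files would then be copied to /var/lib/lxd and lxd restarted:

```shell
# Generate a new self-signed server certificate/key pair with a 10-year
# validity (subject and validity here are assumptions for this sketch).
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -days 3650 \
  -subj "/O=linuxcontainer.org" \
  -keyout server.key -out server.crt

# Show the new expiry date, and confirm it is valid for at least 30 more days
openssl x509 -enddate -noout -in server.crt
openssl x509 -checkend $((30*24*3600)) -noout -in server.crt && echo "cert OK"
```

The `-checkend` call is also handy in a cron job to warn before the expiry surprise described above.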
Re: [lxc-users] Btrfs - Disk quota and Ubuntu 15.10
On 2016-06-30 17:38, Sjoerd wrote: On 29/06/2016 20:02, Benoit GEORGELIN - Association Web4all wrote: Hi, (without hijacking another thread) I'm sharing with you some information about BTRFS and Ubuntu 15.10, kernel 4.2.0-30-generic, regarding a disk quota error on my LXC containers. If you plan to use quota, this will be interesting for you to know. Yes, I just read it on the btrfs mailing list ;) Anyway, besides the problems you describe, using quota also brings btrfs send/receive speed down to a crawl. I am backing up my containers with btrfs send/receive, and because of all the quota problems described I am not using it anymore. Please note that btrfs is not a stable filesystem, at least not in the latest Ubuntu (16.04). You may get "out of space" errors with it, especially when doing snapshots. Kernels 4.6.x[1] have been stable for me. [1] working with Ubuntu, from http://kernel.ubuntu.com/~kernel-ppa/mainline/ Tomasz Chmielewski http://wpkg.org/ ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] Btrfs - Disk quota and Ubuntu 15.10
On 2016-06-30 18:55, Sjoerd wrote: On 30/06/2016 11:17, Tomasz Chmielewski wrote: Please note that btrfs is not a stable filesystem, at least not in the latest Ubuntu (16.04). You may get "out of space" errors with it, especially when doing snapshots. Kernels 4.6.x[1] have been stable for me. I am not using RAID5/6 with btrfs. Only the latter is still not production ready, as I understood it. My number of snapshots won't be large (maybe 50 max or so), since I delete them regularly. But I'll keep an eye on the metadata as well indeed. Thanks for the hint. "Out of space" when doing snapshots affects kernels older than 4.6, no matter if you use RAID-1, RAID-5/6, or no RAID. It's especially annoying when snapshotting running containers with postgres, mysql, mongo etc. - as this causes database errors or crashes. Tomasz Chmielewski http://wpkg.org/ ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] LAN for LXD containers (with multiple LXD servers)?
It's easy to create a "LAN" for LXD containers on a single LXD server - just attach them to the same bridge and use the same subnet (i.e. 10.10.10.0/24) - done. Containers can communicate with each other using their private IP addresses. However, with more than one LXD server *not* in the same LAN (i.e. two LXD servers in different datacentres), things get tricky. Is anyone using such setups, with multiple LXD servers and containers being able to communicate with each other? LXD1: IP 1.2.3.4, Europe - container1 10.10.10.10, container2 10.10.10.11, container3 10.10.10.12 LXD2: IP 2.3.4.5, Asia - container4 10.10.10.20, container5 10.10.10.21, container6 10.10.10.22 LXD3: IP 3.4.5.6, US - container7 10.10.10.30, container8 10.10.10.31, container9 10.10.10.32 While I can imagine setting up many OpenVPN tunnels between all LXD servers (LXD1-LXD2, LXD1-LXD3, LXD2-LXD3) and constantly adjusting the routes as containers are stopped/started/migrated, it's a bit of a management nightmare. And even more so as the number of LXD servers grows. Hints, discussion? Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?
On 2016-09-18 21:05, Sergiusz Pawlowicz wrote: On Sun, Sep 18, 2016 at 4:16 PM, Tomasz Chmielewski wrote: While I can imagine setting up many OpenVPN tunnels between all LXD servers I cannot imagine that :-) :-) Use tinc, mate. Your life begins :-) https://www.tinc-vpn.org/ I did some reading about tinc before, and according to the documentation and mailing lists: - performance may not be so great - it gets problematic as the number of tinc instances grows (a few will be OK, dozens will work, but beyond that, things might get slow) - if I'm not mistaken, you need to run a tinc instance per LXD client, not per LXD server, so that's extra management and performance overhead (i.e. if two tinc clients are running on the same server, they would still encrypt the traffic to each other) Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?
On 2016-09-18 22:14, Ron Kelley wrote: (Long reply follows…) Personally, I think you need to look at the big picture for such deployments. From what I read below, you are asking, “how do I extend my layer-2 subnets between data centers such that container1 in Europe can talk with container6 in Asia, etc”. If this is true, I think you need to look at deploying data center hardware (servers with multiple NICs, IPMI/DRAC/iLO interfaces) with proper L2/L3 routing (L2TP/IPSEC, etc). And, you must look at how your failover services will work in this design. It’s easy to get a couple of servers working with a simple design, but those simple designs tend to go to production very fast without proper testing and design. Well, it's not only about deploying on "different continents". It can also be in the same datacentre, where the hosting doesn't give you a LAN option. For example - Amazon AWS, same region, same availability zone. The servers will have "private" addresses like 10.x.x.x, traffic there will be private to your servers, but there will be no LAN. You can't assign your own LAN addresses (10.x.x.x). This means that while you can launch several LXD containers on each of these servers, their "LAN" will be limited to a single LXD server (unless we do some special tricks). Some other hostings offer a public IP, or several public IPs per server, in the same datacentre, but again, no LAN. Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] LAN for LXD containers (with multiple LXD servers)?
On 2016-09-19 05:12, Tilak Waelde wrote: Hope this helps. Happy to share my LXD configurations with anyone... Please do! I'd really love to see a description of a production lxd / lxc setup with proper networking and multiple hosts! I haven't played around with it yet, but is it possible to include some sort of VRF-lite[0] into such a setup for multi tenancy purposes? Other than by using VLANs one could use the same IP ranges multiple times from what I've come to understand? I'm not sure how a user could put the containers interfaces into a different network namespace.. Hi, after some experimenting with VXLAN, I've summed up a working "LAN for multiple LXC servers" here: https://lxadm.com/Unicast_VXLAN:_overlay_network_for_multiple_servers_with_dozens_of_containers It is using in kernel VXLAN, and thus performs very well (almost wire speed, and much much better than any userspace programs). On the other hand, it provides no encryption between LXD servers (or, in fact, any other virtualization), so may depend on your exact requirements. Tomasz Chmielewski ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
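For reference, a unicast VXLAN setup of this kind boils down to an interfaces stanza like the following sketch - the device names, VXLAN id, port and IP addresses here are illustrative, not copied from the article above:

```text
# /etc/network/interfaces fragment (illustrative): a unicast VXLAN tunnel
# between two hosts, attached to the container bridge.
auto ctbr0
iface ctbr0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bridge_ports vxlan10
    bridge_stp off
    bridge_fd 0
    pre-up ip link add vxlan10 type vxlan id 10 dstport 4789 local 1.2.3.4 remote 2.3.4.5
    post-down ip link del vxlan10
```

With more than two hosts, additional peers can be added to the VXLAN forwarding table with `bridge fdb append ... dev vxlan10 dst <peer-ip>` instead of a fixed `remote`.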
Re: [lxc-users] Container free memory vs host and OOM errors
On 2016-09-27 11:24, Mathias Gibbens wrote: Hi, Recently I've been setting up unprivileged LXC containers on an older server that has 6GB of physical RAM. As my containers are running, occasionally I am seeing OOM errors in the host's syslog when the kernel This system is running Debian stretch (currently the "testing" distribution), 64bit kernel 4.7.4-grsec Might be 4.7.x kernel. There was some OOM regression in 4.7.x, but I'm not sure if it was fixed in 4.7.4 or not. Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] Will there be an extra section added to the LXD 2.0 blog post series for the new Networking capabilities?
On 2016-09-29 12:03, Stéphane Graber wrote: On Wed, Sep 28, 2016 at 10:56:48PM -0400, brian mullan wrote: The current 12 part blog post series is really helpful & informative: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/ But all the newly announced LXD 2.3 networking features <https://linuxcontainers.org/lxd/news/> are pretty exciting and it would be great to see a chapter on that included in that Blog Post Series too in order to jump start folks on how to use all of the new capabilities. It's not going to be part of the 2.0 series since it's not in LXD 2.0, but I'll likely be posting something about the new network stuff in the next few weeks. "cross-host tunnels with GRE or VXLAN" Interesting! Will it be limited to 2 LXD servers only, or will it allow an arbitrary number of LXD servers (2, 3, 4 and more)? Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] Will there be an extra section added to the LXD 2.0 blog post series for the new Networking capabilities?
On 2016-09-29 12:14, Stéphane Graber wrote: >It's not going to be part of the 2.0 series since it's not in LXD 2.0, >but I'll likely be posting something about the new network stuff in the >next few weeks. "cross-host tunnels with GRE or VXLAN" Interesting! Will it be limited to 2 LXD servers only, or will it allow an arbitrary number of LXD servers (2, 3, 4 and more)? You can add as many tunnels to the configuration as you want. The default VXLAN configuration also uses multicast, so any host that's part of the same multicast group will be connected. Multicast... so that's not going to work for most people on popular server hostings, since they don't offer multicast. Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
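For hosts without multicast, the tunnels can be pointed at explicit peers instead. If I read the LXD 2.3 network configuration right, a unicast VXLAN tunnel is expressed with per-tunnel keys roughly like this (the bridge name, tunnel name and addresses are made up for this sketch):

```text
# lxc network edit testbr0 - illustrative config, key names per LXD 2.3
config:
  tunnel.asia.protocol: vxlan
  tunnel.asia.local: 1.2.3.4
  tunnel.asia.remote: 2.3.4.5
```

One such `tunnel.<name>.*` group per remote peer sidesteps the multicast requirement.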
[lxc-users] lxc exec - cp: Value too large for defined data type
I'm getting a weird issue with cp used with "lxc exec container-name /bin/bash /some/script.sh". /some/script.sh launches /some/other/script.sh, which in turn may run some other script. One of them is copying data with cp. In some cases, cp complains that "Value too large for defined data type", for example: '/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/php5/pool.d/phpmyadmin.conf' -> '/etc/php5/fpm/pool.d/phpmyadmin.conf' '/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/php5/pool.d/www.conf' -> '/etc/php5/fpm/pool.d/www.conf' '/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/nginx/conf.d/default.conf' -> '/etc/nginx/conf.d/default.conf' cp: '/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/nginx/conf.d/default.conf': Value too large for defined data type cp: '/vagrant/scripts/provision/shell/initial_deploy/rootfs/etc/nginx/conf.d': Value too large for defined data type "/vagrant/" is a bind-mount directory within LXD: vagrant: path: /vagrant source: /home/vagrantvm/Desktop/vagrant type: disk Anyone else seeing this? The files are copied, just the warning is a bit strange. Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] lxc exec - cp: Value too large for defined data type
On 2016-10-05 00:05, Michael Peek wrote: I could be completely wrong about everything, but here's what I think is going on: If I'm correct, then the version of cp you have inside the container was compiled without large file support enabled. What constitutes a "large" file depends on whether you're using a 32-bit or 64-bit system. If your container is running a 32-bit image, then a large file is any file whose size is 2GB or larger. For containers running a 64-bit image, a large file is any file of size 4GB or larger. To support large files, programs usually only need to be compiled with certain compiler flags enabled to tell the compiler to activate "large file"-specific code within the source; either that, or the libraries that the program depends upon need to support large files by default. The container is Ubuntu 14.04; host is Ubuntu 16.04. "cp" is copying config files. They are a few hundred bytes in size, a few kilobytes maximum. So it must be something else. Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] dockerfile equivalent in LXC
On 2016-10-05 00:13, Barve, Yogesh Damodar wrote: For creating a docker container one can use a dockerfile to specify all the required software installation packages and initialization, entrypoint directory and entrypoint command. LXD and LXC virtualize the whole operating system, so some of these terms don't make sense here. What will be the equivalent in the LXC world? How can one specify - the required packages for installations, LXD installs a distribution. For example - this one will install Ubuntu 14.04 ("Trusty"): lxc launch images:ubuntu/trusty/amd64 some-container-name Then, to install any packages, you can do: lxc exec some-container-name -- apt-get install some-package - workdirectory, - entrypoint command, These don't make sense for LXC / LXD. - ports to expose and LXC / LXD containers behave like proper systems with full networking. By default, a container's IP is "exposed" to the host. What you do with it depends on your use case. There are many answers to that question, I guess: 1) assign a public IP to the container 2) redirect a single port to the container with iptables or a proxy - volumes to mount in LXC? CONTAINER=some-container-name MOUNTNAME=something MOUNTPATH=/mnt/on/the/host CONTAINERPATH=/mnt/inside/the/container lxc config device add $CONTAINER $MOUNTNAME disk source=$MOUNTPATH path=$CONTAINERPATH Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] lxc exec - cp: Value too large for defined data type
On 2016-10-05 00:41, Tomasz Chmielewski wrote: On 2016-10-05 00:05, Michael Peek wrote: I could be completely wrong about everything, but here's what I think is going on: If I'm correct, then the version of cp you have inside the container was compiled without large file support enabled. (...) The container is Ubuntu 14.04; host is Ubuntu 16.04. "cp" is copying config files. They are a few hundred bytes in size, a few kilobytes maximum. So it must be something else. I think it comes from the fact that the files I'm copying from were treated with setfacl on the host. Inside the container, cp is unable to recreate exactly the same permissions, and outputs a confusing warning: * cp with "-a": lxd# cp -av /vagrant/0-byte-file /root '/vagrant/0-byte-file' -> '/root/0-byte-file' cp: '/vagrant/0-byte-file': Value too large for defined data type * cp without "-a": lxd# cp -v /vagrant/0-byte-file /root '/vagrant/0-byte-file' -> '/root/0-byte-file' Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
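The `-a` flag is the key detail here; a minimal sketch of the difference (with plain files and hypothetical names - in the real case above it was the host-side ACL that `cp -a` tried and failed to replicate):

```shell
# GNU `cp -a` is shorthand for `cp -dR --preserve=all`: it attempts to
# preserve mode (including ACLs), ownership, timestamps and extended
# attributes, which is what triggers the warning when they cannot be
# recreated inside the container.
touch src-file
cp -a src-file dst-file-a        # preserves all attributes it can
cp src-file dst-file-plain       # copies data only, default attributes
```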
[lxc-users] lxc-destroy - unable to destroy a container with snapshots
The container has plenty of snapshots: # lxc-snapshot -L -n td-backupslave | wc -l 53 It is stopped: # lxc-ls -f | grep td-backupslave td-backupslave STOPPED 0 - - - But I'm not able to remove it: # lxc-destroy -n td-backupslave -s lxc-destroy: lxccontainer.c: do_lxcapi_destroy: 2417 Container td-backupslave has snapshots; not removing Destroying td-backupslave failed # lxc-destroy -s -n td-backupslave lxc-destroy: lxccontainer.c: do_lxcapi_destroy: 2417 Container td-backupslave has snapshots; not removing Destroying td-backupslave failed According to the help, the "-s" option should destroy a container even if it has snapshots: # lxc-destroy -h Usage: lxc-destroy --name=NAME [-f] [-P lxcpath] lxc-destroy destroys a container with the identifier NAME Options : -n, --name=NAME NAME of the container -s, --snapshots destroy including all snapshots -f, --force wait for the container to shut down --rcfile=FILE Load configuration file FILE Am I misreading the help, or is it a bug? It's Ubuntu 16.04 with these lxc packages: # dpkg -l | grep lxc ii liblxc1 2.0.5-0ubuntu1~ubuntu16.04.2 amd64 Linux Containers userspace tools (library) ii lxc 2.0.5-0ubuntu1~ubuntu16.04.2 all Transitional package for lxc1 ii lxc-common 2.0.5-0ubuntu1~ubuntu16.04.2 amd64 Linux Containers userspace tools (common tools) ii lxc-templates 2.0.5-0ubuntu1~ubuntu16.04.2 amd64 Linux Containers userspace tools (templates) ii lxc1 2.0.5-0ubuntu1~ubuntu16.04.2 amd64 Linux Containers userspace tools ii lxcfs 2.0.4-0ubuntu1~ubuntu16.04.1 amd64 FUSE based filesystem for LXC ii python3-lxc 2.0.5-0ubuntu1~ubuntu16.04.2 amd64 Linux Containers userspace tools (Python 3.x bindings) Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] LXD: network connectivity dies when doing lxc stop / lxc start
Here is a weird one, and most likely not LXD's fault, but some issue with bridged networking. I'm using bridged networking for all containers. The problem is that if I stop and start one container, some other containers lose connectivity. They lose connectivity for 10-20 seconds, sometimes up to a minute. For example: # ping 8.8.8.8 (...) 64 bytes from 8.8.8.8: icmp_seq=16 ttl=48 time=15.0 ms 64 bytes from 8.8.8.8: icmp_seq=17 ttl=48 time=15.0 ms 64 bytes from 8.8.8.8: icmp_seq=18 ttl=48 time=15.1 ms 64 bytes from 8.8.8.8: icmp_seq=19 ttl=48 time=15.0 ms 64 bytes from 8.8.8.8: icmp_seq=20 ttl=48 time=15.0 ms ...another container stopped/started... ...40 seconds of broken connectivity... 64 bytes from 8.8.8.8: icmp_seq=60 ttl=48 time=15.1 ms 64 bytes from 8.8.8.8: icmp_seq=61 ttl=48 time=15.1 ms 64 bytes from 8.8.8.8: icmp_seq=62 ttl=48 time=15.1 ms 64 bytes from 8.8.8.8: icmp_seq=63 ttl=48 time=15.0 ms Pinging the gateway dies in a similar way. The networking is as follows: containers - eth0, private addressing (192.168.0.x) host - "ctbr0" - private address (192.168.0.1), plus NAT into the world auto ctbr0 iface ctbr0 inet static address 192.168.0.1 netmask 255.255.255.0 bridge_ports none bridge_stp off bridge_fd 0 The only workaround seems to be arpinging the gateway from the container all the time, for example: # arping 192.168.0.1 This way, the container doesn't lose connectivity when other containers are stopped/started. But of course I don't like this kind of fix. Is anyone else seeing this too? Any better workaround than constant arping from all affected containers? Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
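One possible cause - an assumption, not verified against this report: a Linux bridge by default inherits its MAC address from the lowest MAC among its ports, so a container start/stop (adding/removing a veth) can change the bridge's MAC, and neighbours keep sending to the stale MAC until their ARP entries refresh, which matches the arping workaround. If that's what is happening, pinning a fixed MAC on the bridge avoids the outage:

```text
# Illustrative fix under the assumption above: give ctbr0 a fixed
# (locally-administered) MAC so it does not change when container
# veth ports come and go.
auto ctbr0
iface ctbr0 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up ip link set dev ctbr0 address 02:00:00:00:00:01
```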
Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes
On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote: Hi, I'm wondering what kind of storage you are using in your infrastructure? With multiple LXC/LXD nodes, how would you design the storage part to be redundant and give you the flexibility to start a container from any available host? Let's say I have two (or more) LXC/LXD nodes and I want to be able to start the containers on one or the other node. LXD allows moving containers across nodes by transferring the data from node A to node B, but I'm looking to be able to run the containers on node B if node A is in maintenance or crashed. There are a lot of distributed file systems (gluster, ceph, beegfs, swift etc..) but in my case, I like using ZFS with LXD and I would like to try to keep that possibility. If you want to stick with ZFS, then your only option is setting up DRBD. Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes
ZFS is not a distributed filesystem. So the only way to do what you want is to use DRBD, with ZFS on top of it. Tomasz Chmielewski https://lxadm.com On 2016-11-03 22:42, Benoit GEORGELIN - Association Web4all wrote: Thanks, looks like nobody uses LXD in a cluster Regards, Benoît - FROM: "Tomasz Chmielewski" TO: "lxc-users" CC: "Benoit GEORGELIN - Association Web4all" SENT: Wednesday, 2 November 2016 12:01:50 SUBJECT: Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote: Hi, I'm wondering what kind of storage you are using in your infrastructure? With multiple LXC/LXD nodes, how would you design the storage part to be redundant and give you the flexibility to start a container from any available host? Let's say I have two (or more) LXC/LXD nodes and I want to be able to start the containers on one or the other node. LXD allows moving containers across nodes by transferring the data from node A to node B, but I'm looking to be able to run the containers on node B if node A is in maintenance or crashed. There are a lot of distributed file systems (gluster, ceph, beegfs, swift etc..) but in my case, I like using ZFS with LXD and I would like to try to keep that possibility. If you want to stick with ZFS, then your only option is setting up DRBD. Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
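To make the DRBD suggestion concrete, a resource definition would look roughly like the following sketch - hostnames, devices and addresses are placeholders, not a tested configuration; the ZFS pool is then created on /dev/drbd0 on whichever node is primary:

```text
# /etc/drbd.d/lxd.res - illustrative sketch only
resource lxd {
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on node-a { address 10.0.0.1:7789; }
  on node-b { address 10.0.0.2:7789; }
}
```

On failover, the surviving node is promoted to primary and imports the pool; DRBD replicates at the block level, so ZFS itself stays unaware of the mirroring.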
[lxc-users] lxc copy local remote: - how to copy a container without snapshots?
lxc copy local remote: seems to copy a container with all its snapshots. Is it possible to use "lxc copy local remote:" so that it just copies /var/lib/lxd/containers/local, without the snapshots? Tried these lxc versions on the "local" side (with 2.0.5 on the remote): # lxc --version 2.6.2 # lxc --version 2.0.5 Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] Strange freezes with btrfs backend
On 2016-12-03 18:31, Pierce Ng wrote: On Sat, Dec 03, 2016 at 11:49:02AM +0700, Sergiusz Pawlowicz wrote: With 1GB of memory it is not recommended to use ZFS nor BTRFS, especially via a disk image file. Just forget about it. The same VPS - as same as a VPS can be :-) - ran LXC on Ubuntu 14.04 fine. For my workload, perhaps the directory or LVM backends for LXD are good enough. Don't use a file-based image for a different filesystem - your performance will be poor, and you risk losing the whole filesystem if something wrong happens to your main fs. Exceptions to this rule are perhaps: - testing - recovery - KVM set up to use a file-based image for a VM (still, not perfect) If you want to use LXD with btrfs, it's still quite easy to do, though stick to the following rules: - btrfs should be placed on a separate block device (disk/partition) - so, let's say /dev/xvdb1 mounted to /var/lib/lxd - btrfs is still not very stable (no-RAID and RAID-1 levels are fine for basic workloads, though you may still hit an occasional hiccup) - use the most recent stable kernel with btrfs (for Ubuntu, you can get it from http://kernel.ubuntu.com/~kernel-ppa/mainline/ - you'll need "raw.lxc: lxc.aa_allow_incomplete=1" in the container's config); kernel 4.4 used in Ubuntu 16.04 is too old and you will run into problems after some time! - chattr +C on database (mysql, postgres, mongo, elasticsearch) directories, otherwise performance will be very poor (note chattr +C only works on directories and empty files; for existing files, you'll have to move them out and copy them back in) Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
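The chattr +C step can be scripted; a sketch with illustrative paths and a placeholder file standing in for the real data (on a live system you would stop the database first, and the directory must be on btrfs for +C to take effect):

```shell
# Recreate a database directory with the No_COW attribute, then copy the
# data back in - chattr +C only affects files created after it is set.
DBDIR=mysql-data
mkdir -p "$DBDIR.old"
echo "placeholder" > "$DBDIR.old/ibdata1"   # stands in for the real data files

mkdir -p "$DBDIR"
# +C only succeeds on btrfs; elsewhere we just report and continue
chattr +C "$DBDIR" 2>/dev/null || echo "chattr +C not supported here"
cp -a "$DBDIR.old/." "$DBDIR/"              # files land under the +C directory
```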
[lxc-users] glusterfs on LXD?
Ubuntu 16.04 hosts / containers and gluster 3.8: # gluster volume create storage replica 2 transport tcp serv1:/gluster/data serv2:/gluster/data force volume create: storage: failed: Glusterfs is not supported on brick: serv1:/gluster/data. Setting extended attributes failed, reason: Operation not permitted. Host filesystem on both bricks supports xattr - but container can only set user attributes, not trusted attributes: # touch file # setfattr -n user.some -v "thing" file # getfattr file # file: file user.some # setfattr -n trusted.some2 -v "thing2" file setfattr: file: Operation not permitted Anyone managed to run glusterfs on LXD? Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] glusterfs on LXD?
On 2016-12-22 11:56, Tomasz Chmielewski wrote: Ubuntu 16.04 hosts / containers and gluster 3.8: # gluster volume create storage replica 2 transport tcp serv1:/gluster/data serv2:/gluster/data force volume create: storage: failed: Glusterfs is not supported on brick: serv1:/gluster/data. Setting extended attributes failed, reason: Operation not permitted. Host filesystem on both bricks supports xattr - but the container can only set user attributes, not trusted attributes: # touch file # setfattr -n user.some -v "thing" file # getfattr file # file: file user.some # setfattr -n trusted.some2 -v "thing2" file setfattr: file: Operation not permitted Anyone managed to run glusterfs on LXD? I see it does run if the container is run as privileged: # lxc config set serv1 security.privileged true But that's perhaps not a sysadmin's dream. Is there any other way to allow the container to set trusted attrs? Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] systemd-cgtop - Input/s Output/s empty?
Not really an LXD issue, but since systemd-cgtop is quite useful to check how busy the containers are, posting it here. Does anyone know how to make systemd-cgtop show Input/s and Output/s? Right now, it's only showing Tasks, %CPU and Memory - the Input/s and Output/s columns just show "-". I've tried setting DefaultBlockIOAccounting=yes in /etc/systemd/system.conf, but it doesn't change anything (even after a systemd reload and a system restart). Any hints? Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
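For reference, the accounting defaults in question live in /etc/systemd/system.conf; a sketch of the relevant section (option names per the systemd.conf documentation - whether the Input/s column actually populates also depends on the kernel's blkio controller being enabled, which is not verified here):

```text
# /etc/systemd/system.conf fragment - accounting defaults; these apply
# only to units started after the setting takes effect.
[Manager]
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
```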
Re: [lxc-users] moving containers from amd host to Intel host
On 2017-01-23 09:10, Jules wrote: greetings, been using LXC for around 2 years, originally converted from vserver, but both hosts were on AMD. i'm now moving to a new host that is running intel. is there any magic that can be applied to convert an AMD-based container so that it can boot on Intel, or am i going to be rebuilding the containers pretty much from scratch? You need the same magic as booting, for example, an x86-64 livecd on an Intel or AMD system... Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
[lxc-users] lxd hang - "panic: runtime error: slice bounds out of range"
0, 0x0, 0x0) Jan 31 06:36:06 lxd01 lxd[21952]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:321 +0x5c Jan 31 06:36:06 lxd01 lxd[21952]: main.(*storageLogWrapper).ContainerSnapshotDelete(0xc820107a60, 0x7f0acd705938, 0xc8204c8180, 0x0, 0x0) Jan 31 06:36:06 lxd01 lxd[21952]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage.go:510 +0x22b Jan 31 06:36:06 lxd01 lxd[21952]: main.(*containerLXC).Delete(0xc8204c8180, 0x0, 0x0) Jan 31 06:36:06 lxd01 lxd[21952]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2366 +0x30e Jan 31 06:36:06 lxd01 lxd[21952]: main.snapshotDelete.func1(0xc8200d42c0, 0x0, 0x0) Jan 31 06:36:06 lxd01 lxd[21952]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_snapshot.go:248 +0x3e Jan 31 06:36:06 lxd01 lxd[21952]: main.(*operation).Run.func1(0xc8200d42c0, 0xc820011740) Jan 31 06:36:06 lxd01 lxd[21952]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:110 +0x3a Jan 31 06:36:06 lxd01 lxd[21952]: created by main.(*operation).Run Jan 31 06:36:06 lxd01 lxd[21952]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:137 +0x127 Jan 31 06:46:06 lxd01 lxd[21955]: error: LXD still not running after 600s timeout. Tomasz Chmielewski https://lxadm.com ___ lxc-users mailing list lxc-users@lists.linuxcontainers.org http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] lxd hang - "panic: runtime error: slice bounds out of range"
I think it may be related to https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/ I have a docker container, with several dockers inside, and with lxd snapshots. Doing this: # lxc delete docker error: Get http://unix.socket/1.0/operations/7d30bf41-3af6-4b48-b42c-06fdd2bba48b/wait: EOF Results in: Jan 31 07:28:36 lxd01 lxd[8363]: panic: runtime error: slice bounds out of range Jan 31 07:28:36 lxd01 lxd[8363]: goroutine 867 [running]: Jan 31 07:28:36 lxd01 lxd[8363]: panic(0xadef00, 0xc82000e050) Jan 31 07:28:36 lxd01 lxd[8363]: #011/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6 Jan 31 07:28:36 lxd01 lxd[8363]: main.(*storageBtrfs).getSubVolumes(0xc8200f8240, 0xc82000bb80, 0x3a, 0x0, 0x0, 0x0, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:812 +0x104f Jan 31 07:28:36 lxd01 lxd[8363]: main.(*storageBtrfs).subvolsDelete(0xc8200f8240, 0xc82000bb80, 0x3a, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:574 +0x72 Jan 31 07:28:36 lxd01 lxd[8363]: main.(*storageBtrfs).ContainerDelete(0xc8200f8240, 0x7f5f55ba1900, 0xc8204a0480, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:119 +0xb0 Jan 31 07:28:36 lxd01 lxd[8363]: main.(*storageBtrfs).ContainerSnapshotDelete(0xc8200f8240, 0x7f5f55ba1900, 0xc8204a0480, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:321 +0x5c Jan 31 07:28:36 lxd01 lxd[8363]: main.(*storageLogWrapper).ContainerSnapshotDelete(0xc8200fda60, 0x7f5f55ba1900, 0xc8204a0480, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage.go:510 +0x22b Jan 31 07:28:36 lxd01 lxd[8363]: main.(*containerLXC).Delete(0xc8204a0480, 
0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2366 +0x30e Jan 31 07:28:36 lxd01 lxd[8363]: main.containerDeleteSnapshots(0xc820090a00, 0xc82017c017, 0x6, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/containers.go:208 +0x4ac Jan 31 07:28:36 lxd01 lxd[8363]: main.(*containerLXC).Delete(0xc8204a00c0, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2371 +0x696 Jan 31 07:28:36 lxd01 lxd[8363]: main.containerDelete.func1(0xc8200c8840, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_delete.go:22 +0x3e Jan 31 07:28:36 lxd01 lxd[8363]: main.(*operation).Run.func1(0xc8200c8840, 0xc82011f320) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:110 +0x3a Jan 31 07:28:36 lxd01 lxd[8363]: created by main.(*operation).Run Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:137 +0x127 Jan 31 07:28:36 lxd01 systemd[1]: lxd.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Jan 31 07:28:36 lxd01 systemd[1]: lxd.service: Unit entered failed state. Jan 31 07:28:36 lxd01 systemd[1]: lxd.service: Failed with result 'exit-code'. Jan 31 07:28:36 lxd01 systemd[1]: lxd.service: Service hold-off time over, scheduling restart. Jan 31 07:28:36 lxd01 systemd[1]: Stopped LXD - main daemon. Jan 31 07:28:36 lxd01 systemd[1]: Starting LXD - main daemon... Jan 31 07:28:36 lxd01 lxd[11969]: t=2017-01-31T07:28:36+ lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." Jan 31 07:28:37 lxd01 systemd[1]: Started LXD - main daemon. 
Tomasz

On 2017-01-31 16:17, Tomasz Chmielewski wrote:

lxd process on one of my servers started to hang a few days ago. In syslog, I can see the following being repeated (log below).

I see a similar issue here: https://github.com/lxc/lxd/issues/2089 but it's closed.

Running:

ii lxd        2.0.8-0ubuntu1~ubuntu16.04.2 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 2.0.8-0ubuntu1~ubuntu16.04.2 amd64 Container hypervisor based on LXC - client

On Ubuntu 16.04 with 4.9.0 kernel from ubuntu ppa.

It seems to recover if I kill this process:

root 26853 0.0 0.0 222164 12228 ? Ssl 07:13 0:00 /usr/bin/lxd waitready --timeout=600

Sometimes need to kill it a few times until it recovers.

Jan 31 06:26:06 lxd01 lxd[15762]: error: LXD still not running after 600s timeout.
Jan 31 06:26:06 lxd01 lxd[4402]: t=2017-01-31T06:26:06+ lv
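The workaround described above (killing the stuck `waitready` helper, possibly several times) can be sketched as a small script; the anchored `pgrep` pattern is an assumption based on the process listing shown:

```shell
#!/bin/sh
# Find the stuck "lxd waitready" helper by its full command line and
# kill it; as noted above, this sometimes has to be repeated before
# lxd recovers.
pid=$(pgrep -f '^/usr/bin/lxd waitready' | head -n1)
if [ -n "$pid" ]; then
    echo "killing stuck waitready process, pid $pid"
    kill "$pid"
else
    echo "no waitready process found"
fi
```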
Re: [lxc-users] lxd hang - "panic: runtime error: slice bounds out of range"
Unfortunately it's still crashing, around 1 day after removing the docker container: Feb 1 06:16:20 lxd01 lxd[2593]: error: LXD still not running after 600s timeout. Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Control process exited, code=exited status=1 Feb 1 06:16:20 lxd01 systemd[1]: Failed to start LXD - main daemon. Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Unit entered failed state. Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Failed with result 'exit-code'. Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Service hold-off time over, scheduling restart. Feb 1 06:16:20 lxd01 systemd[1]: Stopped LXD - main daemon. Feb 1 06:16:20 lxd01 systemd[1]: Starting LXD - main daemon... Feb 1 06:16:20 lxd01 lxd[29235]: t=2017-02-01T06:16:20+ lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." Feb 1 06:16:20 lxd01 lxd[29235]: panic: runtime error: slice bounds out of range Feb 1 06:16:20 lxd01 lxd[29235]: goroutine 16 [running]: Feb 1 06:16:20 lxd01 lxd[29235]: panic(0xadef00, 0xc82000e050) Feb 1 06:16:20 lxd01 lxd[29235]: #011/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6 Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageBtrfs).getSubVolumes(0xc8200fa240, 0xc82000b880, 0x3a, 0x0, 0x0, 0x0, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:812 +0x104f Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageBtrfs).subvolsDelete(0xc8200fa240, 0xc82000b880, 0x3a, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:574 +0x72 Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageBtrfs).ContainerDelete(0xc8200fa240, 0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:119 +0xb0 Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageBtrfs).ContainerSnapshotDelete(0xc8200fa240, 
0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:321 +0x5c Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageLogWrapper).ContainerSnapshotDelete(0xc8200fda60, 0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage.go:510 +0x22b Feb 1 06:16:20 lxd01 lxd[29235]: main.(*containerLXC).Delete(0xc8204fa480, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2366 +0x30e Feb 1 06:16:20 lxd01 lxd[29235]: main.snapshotDelete.func1(0xc8205400b0, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_snapshot.go:248 +0x3e Feb 1 06:16:20 lxd01 lxd[29235]: main.(*operation).Run.func1(0xc8205400b0, 0xc820258f60) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:110 +0x3a Feb 1 06:16:20 lxd01 lxd[29235]: created by main.(*operation).Run Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:137 +0x127 Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Any clues what's causing this and how to fix? Tomasz On 2017-01-31 16:29, Tomasz Chmielewski wrote: I think it may be related to https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/ I have a docker container, with several dockers inside, and with lxd snapshots. 
Doing this: # lxc delete docker error: Get http://unix.socket/1.0/operations/7d30bf41-3af6-4b48-b42c-06fdd2bba48b/wait: EOF Results in: Jan 31 07:28:36 lxd01 lxd[8363]: panic: runtime error: slice bounds out of range Jan 31 07:28:36 lxd01 lxd[8363]: goroutine 867 [running]: Jan 31 07:28:36 lxd01 lxd[8363]: panic(0xadef00, 0xc82000e050) Jan 31 07:28:36 lxd01 lxd[8363]: #011/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6 Jan 31 07:28:36 lxd01 lxd[8363]: main.(*storageBtrfs).getSubVolumes(0xc8200f8240, 0xc82000bb80, 0x3a, 0x0, 0x0, 0x0, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:812 +0x104f Jan 31 07:28:36 lxd01 lxd[8363]: main.(*storageBtrfs).subvolsDelete(0xc8200f8240, 0xc82000bb80, 0x3a, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:574 +0x72 Jan 31 07:28:36 lxd01 lxd[8363]: main.(*storageBtrfs).ContainerDelete(0xc8200f8240, 0x7f5f55ba1900, 0xc8204a0480, 0x0, 0x0) Jan 31 07:28:36 lxd01 lxd[8363]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-
Re: [lxc-users] lxd hang - "panic: runtime error: slice bounds out of range"
FYI it was crashing in the following conditions:

- privileged container (i.e. the one for docker)
- on btrfs filesystem
- btrfs subvolume created inside the container (docker would create such subvolumes)
- snapshot of the container made

Will your fix eventually go to e.g. 2.0.9?

Tomasz

On 2017-02-01 19:02, Stéphane Graber wrote:

Hey there,

I wrote a fix for that function just yesterday which I think should fix your issue. It's been merged in git but isn't in any released version of LXD yet.

Since you're using LXD 2.0.8, I made a build of 2.0.8 with that one fix applied at:

https://dl.stgraber.org/lxd-2.0.8-btrfs
SHA256: 4d9a7ef7c4635d7dd6c3e41f0eb1a3db12d38a8148b3940aa801c7355510e815

Stéphane

On Wed, Feb 01, 2017 at 03:19:44PM +0900, Tomasz Chmielewski wrote:

Unfortunately it's still crashing, around 1 day after removing the docker container:

Feb 1 06:16:20 lxd01 lxd[2593]: error: LXD still not running after 600s timeout.
Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Control process exited, code=exited status=1
Feb 1 06:16:20 lxd01 systemd[1]: Failed to start LXD - main daemon.
Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Unit entered failed state.
Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Failed with result 'exit-code'.
Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Service hold-off time over, scheduling restart.
Feb 1 06:16:20 lxd01 systemd[1]: Stopped LXD - main daemon.
Feb 1 06:16:20 lxd01 systemd[1]: Starting LXD - main daemon...
Feb 1 06:16:20 lxd01 lxd[29235]: t=2017-02-01T06:16:20+ lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored."
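Given those conditions, a quick way to check whether a container is affected before snapshotting or deleting it is to look for nested btrfs subvolumes under its rootfs; the path below assumes a default LXD 2.0 btrfs layout and the container name from this thread:

```shell
#!/bin/sh
# List btrfs subvolumes below the container's rootfs; subvolumes
# created inside the container (e.g. by docker) were part of the
# crash conditions described above.
ROOTFS=/var/lib/lxd/containers/docker/rootfs   # assumed default path
btrfs subvolume list -o "$ROOTFS" 2>/dev/null \
    || echo "no subvolume information available for $ROOTFS"
```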
Feb 1 06:16:20 lxd01 lxd[29235]: panic: runtime error: slice bounds out of range Feb 1 06:16:20 lxd01 lxd[29235]: goroutine 16 [running]: Feb 1 06:16:20 lxd01 lxd[29235]: panic(0xadef00, 0xc82000e050) Feb 1 06:16:20 lxd01 lxd[29235]: #011/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6 Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageBtrfs).getSubVolumes(0xc8200fa240, 0xc82000b880, 0x3a, 0x0, 0x0, 0x0, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:812 +0x104f Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageBtrfs).subvolsDelete(0xc8200fa240, 0xc82000b880, 0x3a, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:574 +0x72 Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageBtrfs).ContainerDelete(0xc8200fa240, 0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:119 +0xb0 Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageBtrfs).ContainerSnapshotDelete(0xc8200fa240, 0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage_btrfs.go:321 +0x5c Feb 1 06:16:20 lxd01 lxd[29235]: main.(*storageLogWrapper).ContainerSnapshotDelete(0xc8200fda60, 0x7feeacffd0c0, 0xc8204fa480, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/storage.go:510 +0x22b Feb 1 06:16:20 lxd01 lxd[29235]: main.(*containerLXC).Delete(0xc8204fa480, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_lxc.go:2366 +0x30e Feb 1 06:16:20 lxd01 lxd[29235]: main.snapshotDelete.func1(0xc8205400b0, 0x0, 0x0) Feb 1 06:16:20 lxd01 lxd[29235]: 
#011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/container_snapshot.go:248 +0x3e Feb 1 06:16:20 lxd01 lxd[29235]: main.(*operation).Run.func1(0xc8205400b0, 0xc820258f60) Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:110 +0x3a Feb 1 06:16:20 lxd01 lxd[29235]: created by main.(*operation).Run Feb 1 06:16:20 lxd01 lxd[29235]: #011/build/lxd-tfF8X9/lxd-2.0.8/obj-x86_64-linux-gnu/src/github.com/lxc/lxd/lxd/operations.go:137 +0x127 Feb 1 06:16:20 lxd01 systemd[1]: lxd.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Any clues what's causing this and how to fix? Tomasz On 2017-01-31 16:29, Tomasz Chmielewski wrote: > I think it may be related to > https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/ > > I have a docker container, with several dockers inside, and with lxd > snapshots. > > Doing this: > > # lxc delete docker > error: Get > http://unix.socket/1.0/operations/7d30bf41-3af6-4b48-b42c-06fdd2bba48b/wait: > EOF > > Results in: > > Jan 31 07:28:36 lxd01 lxd[8363]: panic: runtime error: slice bounds out > of range > Jan 31 07:28:36 lxd01 lxd[8363]: goroutine 867 [running]
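Before dropping the test build from the message above into place, it is worth checking the download against the SHA256 that was posted (the local filename is an assumption based on the URL):

```shell
#!/bin/sh
# Compare the downloaded binary's checksum with the SHA256 posted above.
EXPECTED=4d9a7ef7c4635d7dd6c3e41f0eb1a3db12d38a8148b3940aa801c7355510e815
ACTUAL=$(sha256sum lxd-2.0.8-btrfs 2>/dev/null | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum mismatch or file missing - do not run this binary"
fi
```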
[lxc-users] lxc stop / lxc reboot hang
Suddenly, today, I'm not able to stop or reboot any of the containers:

# lxc stop some-container

Just sits there forever.

In /var/log/lxd/lxd.log, only this single entry shows up:

t=2017-02-03T03:46:20+ lvl=info msg="Shutting down container" creation date=2017-01-19T15:51:21+ ephemeral=false timeout=-1s name=some-container action=shutdown

In /var/log/lxd/some-container/lxc.log, only this one shows up:

lxc 20170203034624.534 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive response

Running these:

ii lxd        2.0.8-0ubuntu1~ubuntu16.04.2 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 2.0.8-0ubuntu1~ubuntu16.04.2 amd64 Container hypervisor based on LXC - client
ii liblxc1    2.0.6-0ubuntu1~ubuntu16.04.2 amd64 Linux Containers userspace tools (library)
ii lxc-common 2.0.6-0ubuntu1~ubuntu16.04.2 amd64 Linux Containers userspace tools (common tools)
ii lxcfs      2.0.5-0ubuntu1~ubuntu16.04.1 amd64 FUSE based filesystem for LXC

# uname -a
Linux lxd01 4.9.0-040900-generic #201612111631 SMP Sun Dec 11 21:33:00 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Any clues how to fix this?

Tomasz Chmielewski
https://lxadm.com
Re: [lxc-users] lxc stop / lxc reboot hang
On 2017-02-03 12:52, Tomasz Chmielewski wrote:

Suddenly, today, I'm not able to stop or reboot any of the containers:

# lxc stop some-container

Just sits there forever.

In /var/log/lxd/lxd.log, only this single entry shows up:

t=2017-02-03T03:46:20+ lvl=info msg="Shutting down container" creation date=2017-01-19T15:51:21+ ephemeral=false timeout=-1s name=some-container action=shutdown

In /var/log/lxd/some-container/lxc.log, only this one shows up:

lxc 20170203034624.534 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive response

The container actually stops (it's in STOPPED state in "lxc list"). The command just never returns.

Tomasz Chmielewski
https://lxadm.com
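The `get_cgroup failed to receive response` warning hints that the per-container `[lxc monitor]` process may have gone away while the client still waits on it; a quick check (the process-name pattern is an assumption for LXD's monitor processes):

```shell
#!/bin/sh
# LXD keeps a "[lxc monitor]" process per running container; if the
# get_cgroup command cannot reach it, check whether it still exists.
pgrep -af '^\[lxc monitor\]' || echo "no [lxc monitor] process running"
```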
Re: [lxc-users] lxc stop / lxc reboot hang
Again seeing this issue on one of the servers:

- "lxc stop container" will stop the container but will never exit
- "lxc restart container" will stop the container and will never exit

# dpkg -l | grep lxd
ii lxd        2.0.9-0ubuntu1~16.04.2 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 2.0.9-0ubuntu1~16.04.2 amd64 Container hypervisor based on LXC - client

This gets logged to the container log:

lxc 20170301115514.738 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.

How can it be debugged?

Tomasz

On 2017-02-03 13:05, Tomasz Chmielewski wrote:

On 2017-02-03 12:52, Tomasz Chmielewski wrote:

Suddenly, today, I'm not able to stop or reboot any of the containers:

# lxc stop some-container

Just sits there forever.

In /var/log/lxd/lxd.log, only this single entry shows up:

t=2017-02-03T03:46:20+ lvl=info msg="Shutting down container" creation date=2017-01-19T15:51:21+ ephemeral=false timeout=-1s name=some-container action=shutdown

In /var/log/lxd/some-container/lxc.log, only this one shows up:

lxc 20170203034624.534 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive response

The container actually stops (it's in STOPPED state in "lxc list"). The command just never returns.

Tomasz Chmielewski
https://lxadm.com
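As a generic first step toward the "how can it be debugged" question, attaching strace to the hanging client shows which system call it is blocked in (a sketch; the pgrep pattern and syscall filter are assumptions, not LXD-specific advice):

```shell
#!/bin/sh
# Find the hanging "lxc stop" / "lxc restart" client and attach strace
# to it to see what it is blocked on (typically a read or poll on the
# lxd unix socket).
pid=$(pgrep -f '^lxc (stop|restart) ' | head -n1)
if [ -n "$pid" ]; then
    strace -f -p "$pid" -e trace=read,poll,futex
else
    echo "no hanging lxc stop/restart client found"
fi
```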
[lxc-users] lxd process using lots of CPU
On a server with several ~idlish containers:

PID   USER PRI NI VIRT  RES   SHR   S CPU% MEM% TIME+    Command
19104 root 20  0  2548M 44132 15236 S 140. 0.0  58h03:17 /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
24966 root 20  0  2548M 44132 15236 S 18.2 0.0  2h45:36  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19162 root 20  0  2548M 44132 15236 S 17.5 0.0  3h31:49  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19120 root 20  0  2548M 44132 15236 S 16.2 0.0  3h16:11  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19244 root 20  0  2548M 44132 15236 S 11.0 0.0  1h48:56  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19123 root 20  0  2548M 44132 15236 S 11.0 0.0  3h34:42  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19243 root 20  0  2548M 44132 15236 S 10.4 0.0  1h06:13  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
14962 root 20  0  2548M 44132 15236 R 10.4 0.0  3h17:27  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19356 root 20  0  2548M 44132 15236 S 9.7  0.0  2h16:44  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19161 root 20  0  2548M 44132 15236 R 9.7  0.0  1h26:40  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19126 root 20  0  2548M 44132 15236 R 9.1  0.0  22:11.20 /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
19115 root 20  0  2548M 44132 15236 R 8.4  0.0  2h55:21  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
693   root 20  0  2548M 44132 15236 R 8.4  0.0  2h28:02  /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log

That's actually one lxd process with many threads (view from htop). Expected?
ii liblxc1    2.0.7-0ubuntu1~16.04.1 amd64 Linux Containers userspace tools (library)
ii lxc-common 2.0.7-0ubuntu1~16.04.1 amd64 Linux Containers userspace tools (common tools)
ii lxcfs      2.0.6-0ubuntu1~16.04.1 amd64 FUSE based filesystem for LXC
ii lxd        2.0.9-0ubuntu1~16.04.2 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 2.0.9-0ubuntu1~16.04.2 amd64 Container hypervisor based on LXC - client

strace of the process mainly shows:

[pid 19124] poll([{fd=28, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19120] poll([{fd=13, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19124] <... poll resumed> ) = 1 ([{fd=28, revents=POLLNVAL}])
[pid 19120] <... poll resumed> ) = 1 ([{fd=13, revents=POLLNVAL}])
[pid 19124] poll([{fd=28, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19120] poll([{fd=13, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19124] <... poll resumed> ) = 1 ([{fd=28, revents=POLLNVAL}])
[pid 19120] <... poll resumed> ) = 1 ([{fd=13, revents=POLLNVAL}])
[pid 19124] poll([{fd=28, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19120] poll([{fd=13, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19124] <... poll resumed> ) = 1 ([{fd=28, revents=POLLNVAL}])
[pid 19120] <... poll resumed> ) = 1 ([{fd=13, revents=POLLNVAL}])
[pid 19124] poll([{fd=28, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19120] poll([{fd=13, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19124] <... poll resumed> ) = 1 ([{fd=28, revents=POLLNVAL}])
[pid 19120] <... poll resumed> ) = 1 ([{fd=13, revents=POLLNVAL}])
[pid 19124] poll([{fd=28, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19120] poll([{fd=13, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19124] <... poll resumed> ) = 1 ([{fd=28, revents=POLLNVAL}])
[pid 19120] <... poll resumed> ) = 1 ([{fd=13, revents=POLLNVAL}])
[pid 19124] poll([{fd=28, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1
[pid 19120] poll([{fd=13, events=POLLIN|POLLPRI|POLLERR|POLLHUP|0x2000}], 1, -1

Tomasz Chmielewski
https://lxadm.com
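`POLLNVAL` in that strace output means the fd passed to poll() is not an open descriptor, so poll() returns immediately and the thread spins, which matches the CPU usage shown above. One way to confirm (thread id and fd number taken from the strace output; the /proc check itself is generic):

```shell
#!/bin/sh
# POLLNVAL = the polled fd is not open, so poll() returns at once and
# the loop burns CPU. /proc/<tid>/fd/<n> only exists for open fds.
TID=19120   # thread id from the strace output above
FD=13       # the fd it keeps polling
ls -l "/proc/$TID/fd/$FD" 2>/dev/null \
    || echo "fd $FD is not open in task $TID - poll() on it returns POLLNVAL immediately"
```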