[lxc-users] Passing infiniband network interface as phys to the LXC
Hello fellow LXC users! I have hit a brick wall with a problem of trying to pass an infiniband network interface inside a container.

The host is Ubuntu 14.04.1 LTS (Trusty Tahr):

$ uname -a
Linux MYHOST 3.19.0-49-generic #55~14.04.1-Ubuntu SMP Fri Jan 22 11:24:31 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

These are the infiniband network interfaces:

$ ifconfig
ib0   Link encap:UNSPEC  HWaddr 80-00-04-04-FE-80-00-00-00-00-00-00-00-00-00-00
      inet addr:192.168.0.254  Bcast:192.168.0.255  Mask:255.255.255.0
      inet6 addr: fe80::202:c902:2a:7c31/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
      RX packets:7456285 errors:0 dropped:0 overruns:0 frame:0
      TX packets:9528644 errors:0 dropped:130 overruns:0 carrier:0
      collisions:0 txqueuelen:256
      RX bytes:276184857127 (276.1 GB)  TX bytes:16792176768 (16.7 GB)

ib1   Link encap:UNSPEC  HWaddr 80-00-04-05-FE-80-00-00-00-00-00-00-00-00-00-00
      inet addr:192.168.1.254  Bcast:192.168.1.255  Mask:255.255.255.0
      inet6 addr: fe80::202:c902:2a:7c32/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
      RX packets:4212 errors:0 dropped:0 overruns:0 frame:0
      TX packets:2234 errors:0 dropped:127 overruns:0 carrier:0
      collisions:0 txqueuelen:256
      RX bytes:671192 (671.1 KB)  TX bytes:481417 (481.4 KB)

I create an Ubuntu Trusty container called "mycontainer" with an alternate container path set to /home/sum/sumLXC:

$ sudo lxc-create -t download -n mycontainer -P /home/sum/sumLXC -- -d ubuntu -r trusty -a amd64

I add ib0 / ib1 as a phys network type to mycontainer's config file /home/sum/sumLXC/mycontainer/config:

# ib0 configuration
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = ib0
lxc.network.ipv4 = 192.168.0.63/24

# ib1 configuration
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = ib1
lxc.network.ipv4 = 192.168.1.63/24

I start up the container:

$ sudo lxc-start -n mycontainer -P /home/sum/sumLXC

Now ib0 / ib1 disappear from the host.
I attach to the container:

$ sudo lxc-attach -n mycontainer -P /home/sum/sumLXC

Now inside the container I see ib0 / ib1:

root@mycontainer:/# ifconfig
ib0   Link encap:UNSPEC  HWaddr 80-00-04-04-FE-80-00-00-00-00-00-00-00-00-00-00
      inet addr:192.168.0.63  Bcast:192.168.0.255  Mask:255.255.255.0
      inet6 addr: fe80::202:c902:2a:7c31/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
      RX packets:7456338 errors:0 dropped:0 overruns:0 frame:0
      TX packets:9528739 errors:0 dropped:144 overruns:0 carrier:0
      collisions:0 txqueuelen:256
      RX bytes:276184863867 (276.1 GB)  TX bytes:16792190084 (16.7 GB)

ib1   Link encap:UNSPEC  HWaddr 80-00-04-05-FE-80-00-00-00-00-00-00-00-00-00-00
      inet addr:192.168.1.63  Bcast:192.168.1.255  Mask:255.255.255.0
      inet6 addr: fe80::202:c902:2a:7c32/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
      RX packets:4288 errors:0 dropped:0 overruns:0 frame:0
      TX packets:2279 errors:0 dropped:141 overruns:0 carrier:0
      collisions:0 txqueuelen:256
      RX bytes:682135 (682.1 KB)  TX bytes:489120 (489.1 KB)

The kernel version for the container matches that of the host:

root@mycontainer:/# uname -a
Linux mycontainer 3.19.0-49-generic #55~14.04.1-Ubuntu SMP Fri Jan 22 11:24:31 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

I install the infiniband software inside the container:

root@mycontainer:/# apt-get install ibverbs-utils opensm infiniband-diags libopensm5 libibnetdisc5 libibverbs1 libibumad3 libibmad5 libibcommon1 libmlx4-1 libipathverbs1 libmthca1

along with the kernel headers:

root@mycontainer:/# apt-get install libmthca-dev linux-headers-3.19.0-49 linux-image-extra-3.19.0-49-generic linux-headers-3.19.0-49-generic

along with infiniband testing software:

root@mycontainer:/# apt-get install perftest

Next I add the /dev/infiniband special files to the container (done on the host):

$ sudo lxc-device -n mycontainer -P /home/sum/sumLXC add /dev/infiniband/uverbs0
$ sudo lxc-device -n mycontainer -P /home/sum/sumLXC add /dev/infiniband/issm0
$ sudo lxc-device -n mycontainer -P /home/sum/sumLXC add /dev/infiniband/issm1
$ sudo lxc-device -n mycontainer -P /home/sum/sumLXC add /dev/infiniband/rdma_cm
$ sudo lxc-device -n mycontainer -P /home/sum/sumLXC add /dev/infiniband/umad0
$ sudo lxc-device -n mycontainer -P /home/sum/sumLXC add /dev/infiniband/umad1

Inside the container, give the same permissions as on the host:

root@mycontainer:/# chmod a+w /dev/infiniband/rdma_cm
root@mycontainer:/# chmod a+w /dev/infiniband/uverbs0

So now inside the container /dev/infiniband exists like this:

root@mycontainer:/# ls -lh /dev/infiniband/
total 0
crw--- 1 root root 231, 64 Mar 10 06:55 issm0
crw--- 1 root root 231, 65 Mar 10 06:56 issm1
crw-rw-rw- 1 root root 10, 56 Mar 10 06:56 rdma_cm
crw---
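One thing I have not tried yet, but which might save re-running lxc-device after every restart of the container: declaring the device nodes in the container config itself. A rough, untested sketch, using the major/minor numbers shown by ls above (231 for the umad/issm/uverbs nodes, 10:56 for rdma_cm):

# untested sketch: allow the InfiniBand character devices in the container's device cgroup
lxc.cgroup.devices.allow = c 231:* rwm
lxc.cgroup.devices.allow = c 10:56 rwm
# and bind-mount the host's /dev/infiniband into the container at start time
lxc.mount.entry = /dev/infiniband dev/infiniband none bind,create=dir 0 0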
[lxc-users] LXD 2.0.0.rc2 -- IP Tables -- Ubuntu 15.10 -- not responding
I just tried installing 15.10 on three different test servers with LXD 2.0.0.rc2. The iptables rules I had been using with 14.04 no longer work. Here is an example:

iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 8080 -j DNAT --to-destination 10.0.3.250:8080
iptables -A FORWARD -p tcp -d 10.0.3.250 --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A OUTPUT -p tcp -o lo --dport 8080 -j DNAT --to-destination 10.0.3.250:8080

In the past I have used these iptables rules to allow access to a container from outside the local LAN when needed. Is anyone else running into this issue with 15.10? If so, what did you do to allow outside public access to a container? Since I jumped from 14.04 + LXD 0.9 straight to 15.10 + 2.0.0.rc2 in my testing and it broke, I have no idea if there is a new or better way, introduced since 0.9, of giving direct public access to a container while using the default lxcbr0 bridge + 10.0.3.x DHCP setup. I ended up rolling back to 15.04 for now and it's all back to working as it was in 14.04.

Thanks for any thoughts or insights.

-Kevin
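In case it helps anyone comparing notes: if the newer LXD packages put the container behind a different bridge/subnet than the old lxcbr0 / 10.0.3.x setup, rules that DNAT to 10.0.3.250 would simply match nothing. Here are the same three rules with the container address pulled out as a placeholder (10.200.0.250 below is only an example - check the real address with "lxc list" first):

# CONTAINER_IP is a placeholder - substitute whatever "lxc list" reports for the container
CONTAINER_IP=10.200.0.250
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 8080 -j DNAT --to-destination ${CONTAINER_IP}:8080
iptables -A FORWARD -p tcp -d ${CONTAINER_IP} --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A OUTPUT -p tcp -o lo --dport 8080 -j DNAT --to-destination ${CONTAINER_IP}:8080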
Re: [lxc-users] lxc / lxd I'm lost somewhere
I'm currently switching from pure LXC to LXD, and I have a few questions :)

1- subuid/subgid
Can LXD use different uid/gid mappings per container configuration? Let's say I have one LXD daemon running. This daemon uses the subuid/subgid of the user who launches the container. Can I have different uid/gid mappings for this daemon? Example:
Container A: 10:65536
Container B: 165536:65536
If the user from container A escapes from the namespace he will still be under uid/gid A, so container B stays "safe" from container A's user.

2- IP address and MAC address
Is the only way to get the MAC address assigned to a container to go inside the container? Is there no lxc command to get the info? "lxc info container" retrieves the IP address, not the MAC address. So the only way would be to set a static MAC address in the configuration file, then show the configuration of the container and parse it to get the MAC :( To set up and configure Open vSwitch, I need the interface name, IP and MAC.

3- config vs profile
What is the best option for setting container configuration? Can I keep the config file as generated by the first launch and make my own profile configuration, or should I edit the config of the container and only apply profiles to share the same custom configuration? Let's say I want to customize a container's configuration (from a script) and add a device of type nic (eth0). Should I use "lxc config device add ...", or should I dump the initial configuration to a yml file, add the device information and reload the config from stdin, or should I keep the initial configuration file, create a new template, customize the template and finally apply the template?

4- veth / bridged
In LXC I could not have a specific name for the NIC in an unprivileged container (veth). It looks like now with LXD it's possible (bridged)?

5- Unprivileged containers
If the init process, from the host's point of view, is running with a specific uid/gid, does that mean the container is indeed running unprivileged? The lxd monitor process runs as the user who launched the lxd daemon, right?

6- Is any Open vSwitch (or other virtual switch) integration scheduled? Not full integration, just basic settings and some OpenFlow rules for security.

7- Quota with btrfs
I saw LXD supports quotas with some storage backends. How do I use this with btrfs? Is it part of the LXD container configuration or does it rely on FS configuration? There is no information about it in the doc: https://github.com/lxc/lxd/blob/master/specs/configuration.md

Thanks a lot for your time and help (again) :)

Regards,
Benoît

From: "Serge Hallyn"
To: "lxc-users"
Sent: Tuesday, 1 March 2016 20:05:35
Subject: Re: [lxc-users] lxc / lxd I'm lost somewhere

Quoting Mark Constable (ma...@renta.net):
> On 02/03/16 04:55, Serge Hallyn wrote:
> > For instance I have my local laptop and a (very) remote server.
>
> Thanks for this example usage.
>
> > I can 'lxc launch xenial h:x1; lxc file push my.tar.gz h:x1/; lxc
> > shell h:x1' and the fact that x1 is running on 'h' on a different
> > continent really doesn't matter a lick. it's the same thing I'd
> > do locally - 'lxc launch xenial x1; lxc file push my.tar.gz x1;
> > lxc shell x1'.
>
> Is the above "shell" command available in the RCs perhaps?
>
> It's not available in 2.0.0~beta4-0ubuntu7.

No, my ~/.config/lxc/config.yml has

aliases:
  shell: exec @ARGS@ -- bash
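For anyone else hunting for the "lxc shell" alias Serge mentions, this is what the relevant stanza of ~/.config/lxc/config.yml would look like (a minimal sketch based purely on the line quoted above):

# ~/.config/lxc/config.yml
aliases:
  shell: exec @ARGS@ -- bash

With that in place, "lxc shell x1" (or "lxc shell h:x1" for a remote) expands to "lxc exec x1 -- bash".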
Re: [lxc-users] lxc stop / lxc reboot stopped working
For "lxc restart", this reproduces reliably (below). It seems that there may be some race - if "sleep" is set to lower values, it seems more likely that it will fail.

# while true; do
echo restart
time lxc restart containername
sleep 3
done

restart
real    0m15.448s
user    0m0.048s
sys     0m0.000s

restart
real    0m11.373s
user    0m0.052s
sys     0m0.004s

restart
real    0m13.019s
user    0m0.048s
sys     0m0.000s

restart
real    0m6.023s
user    0m0.040s
sys     0m0.008s

restart
real    0m7.106s
user    0m0.048s
sys     0m0.000s

restart
real    0m5.520s
user    0m0.044s
sys     0m0.004s

restart
real    0m49.382s
user    0m0.052s
sys     0m0.000s

restart
real    0m33.426s
user    0m0.048s
sys     0m0.000s

restart
...hangs here...

Tomasz

On 2016-03-11 02:23, Tomasz Chmielewski wrote:

Something like this reproduces it for me reliably (hangs on the first or second "stop"):

while true; do
echo stop
time lxc stop containername --debug
sleep 5
echo start
lxc start containername
done

Tomasz

On 2016-03-11 01:35, Tomasz Chmielewski wrote:

Am I the only one affected? Also happens with:

ii  lxd         2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - daemon
ii  lxd-client  2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - client
ii  lxd-tools   2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - extra tools

"lxc restart containername" mostly just hangs.

Tomasz

On 2016-03-09 17:53, Tomasz Chmielewski wrote:

After the latest lxd update, lxc stop / lxc reboot no longer work (and hang instead).

# dpkg -l|grep lxd
ii  lxd         2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - daemon
ii  lxd-client  2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - client
ii  lxd-tools   2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1  amd64  Container hypervisor based on LXC - extra tools

# lxc stop z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26 --debug
DBUG[03-09|08:50:05] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"core.https_address":"10.190.0.1:8443","core.trust_password":true},"environment":{"addresses":["10.190.0.1:8443"],"architectures":["x86_64","i686"],"certificate":"-BEGIN CERTIFICATE- (...)
-END CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc5","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.4.4-040404-generic","server":"lxd","server_pid":22764,"server_version":"2.0.0.rc2","storage":"btrfs","storage_version":"4.4"},"public":false}} DBUG[03-09|08:50:05] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":"x86_64","config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"created_at":"2016-03-09T08:22:27Z","devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"ephemeral":false,"expanded_config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\ ":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"name":"z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26","profiles":["default"],"stateful":false,"status":"Running","status_code":103}} DBUG[03-09|08:50:05] Putting {"action":"stop","force":false,"stateful":false,"timeout":-1} to http://unix.socket/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26/state DBUG[03-09|08:50:05] Raw response: {"type":"async","status":"Operation created","status_code":100,"metadata":{"id":"818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d","class":"task","created_at":"2016-03-09T08:50:05.465171729Z","updated_at":"2016-03-09T08:50:05.465171729Z","status":"Running","status_code":103,"resources":{"containers":["/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26"]},"metadata":null,"may_cancel":false,"err":""},"operation":
Re: [lxc-users] lxc stop / lxc reboot stopped working
Something like this reproduces it for me reliably (hangs on the first or second "stop"): while true; do echo stop time lxc stop containername --debug sleep 5 echo start lxc start containername done Tomasz On 2016-03-11 01:35, Tomasz Chmielewski wrote: Am I the only one affected? Also happens with: ii lxd 2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - daemon ii lxd-client 2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - client ii lxd-tools 2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - extra tools "lxc restart containername" mostly just hangs. Tomasz On 2016-03-09 17:53, Tomasz Chmielewski wrote: After the latest lxd update, lxc stop / lxc reboot no longer work (and hang instead). # dpkg -l|grep lxd ii lxd 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - daemon ii lxd-client 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - client ii lxd-tools 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - extra tools # lxc stop z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26 --debug DBUG[03-09|08:50:05] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"core.https_address":"10.190.0.1:8443","core.trust_password":true},"environment":{"addresses":["10.190.0.1:8443"],"architectures":["x86_64","i686"],"certificate":"-BEGIN CERTIFICATE- (...) -END CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc5","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.4.4-040404-generic","server":"lxd","server_pid":22764,"server_version":"2.0.0.rc2","storage":"btrfs","storage_version":"4.4"},"public":false}} DBUG[03-09|08:50:05] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":"x86_64","config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"created_at":"2016-03-09T08:22:27Z","devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"ephemeral":false,"expanded_config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\ ":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"name":"z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26","profiles":["default"],"stateful":false,"status":"Running","status_code":103}} DBUG[03-09|08:50:05] Putting {"action":"stop","force":false,"stateful":false,"timeout":-1} to 
http://unix.socket/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26/state
DBUG[03-09|08:50:05] Raw response: {"type":"async","status":"Operation created","status_code":100,"metadata":{"id":"818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d","class":"task","created_at":"2016-03-09T08:50:05.465171729Z","updated_at":"2016-03-09T08:50:05.465171729Z","status":"Running","status_code":103,"resources":{"containers":["/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26"]},"metadata":null,"may_cancel":false,"err":""},"operation":"/1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d"}
DBUG[03-09|08:50:05] 1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d/wait

Just sits and hangs here. Is there any quick fix for that?

Other than that - do you have any system which checks basic functionality before pushing the packages to the general public? Seems we had lots of bugs making lxd unusable lately.

Tomasz Chmielewski
http://wpkg.org
Re: [lxc-users] lxc stop / lxc reboot stopped working
Am I the only one affected? Also happens with: ii lxd 2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - daemon ii lxd-client 2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - client ii lxd-tools 2.0.0~rc2-0ubuntu3~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - extra tools "lxc restart containername" mostly just hangs. Tomasz On 2016-03-09 17:53, Tomasz Chmielewski wrote: After the latest lxd update, lxc stop / lxc reboot no longer work (and hang instead). # dpkg -l|grep lxd ii lxd 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - daemon ii lxd-client 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - client ii lxd-tools 2.0.0~rc2-0ubuntu2~ubuntu14.04.1~ppa1 amd64Container hypervisor based on LXC - extra tools # lxc stop z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26 --debug DBUG[03-09|08:50:05] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"core.https_address":"10.190.0.1:8443","core.trust_password":true},"environment":{"addresses":["10.190.0.1:8443"],"architectures":["x86_64","i686"],"certificate":"-BEGIN CERTIFICATE- (...) -END CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc5","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.4.4-040404-generic","server":"lxd","server_pid":22764,"server_version":"2.0.0.rc2","storage":"btrfs","storage_version":"4.4"},"public":false}} DBUG[03-09|08:50:05] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":"x86_64","config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"created_at":"2016-03-09T08:22:27Z","devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"ephemeral":false,"expanded_config":{"raw.lxc":"lxc.aa_allow_incomplete=1","volatile.base_image":"1032903165a677e18ed93bde5057ae6287841ae756d1a6296eef8f2e5a825e4a","volatile.eth0.hwaddr":"00:16:3e:e4:36:64","volatile.eth0.name":"eth0","volatile.last_state.idmap":"[{\"Isuid\ ":true,\"Isgid\":false,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":10,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"nictype":"bridged","parent":"br-testing","type":"nic"},"root":{"path":"/","type":"disk"},"uploads":{"path":"/var/www/uploads","source":"/srv/deployment/uploads","type":"disk"}},"name":"z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26","profiles":["default"],"stateful":false,"status":"Running","status_code":103}} DBUG[03-09|08:50:05] Putting {"action":"stop","force":false,"stateful":false,"timeout":-1} to http://unix.socket/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26/state DBUG[03-09|08:50:05] Raw response: {"type":"async","status":"Operation 
created","status_code":100,"metadata":{"id":"818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d","class":"task","created_at":"2016-03-09T08:50:05.465171729Z","updated_at":"2016-03-09T08:50:05.465171729Z","status":"Running","status_code":103,"resources":{"containers":["/1.0/containers/z-testing-a19ea622182c63ddc19bb22cde982b82-2016-03-09-08-22-26"]},"metadata":null,"may_cancel":false,"err":""},"operation":"/1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d"}
DBUG[03-09|08:50:05] 1.0/operations/818e6b3c-9e2a-4fb3-a774-d00df8fb5c3d/wait

Just sits and hangs here. Is there any quick fix for that?

Other than that - do you have any system which checks basic functionality before pushing the packages to the general public? Seems we had lots of bugs making lxd unusable lately.

Tomasz Chmielewski
http://wpkg.org
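One knob that may be worth trying while this gets debugged: the PUT payload in the debug output above shows "force":false and "timeout":-1, and the client appears to expose both on "lxc stop", so forcing or time-bounding the stop might at least avoid the indefinite hang. A sketch, not verified against this particular regression:

# force an immediate (non-clean) stop instead of waiting for the guest to shut down
lxc stop containername --force
# or give the clean shutdown a bounded amount of time
lxc stop containername --timeout 30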
[lxc-users] Question about LXD using devicemapper
Does LXD (or LXC) take advantage of devicemapper copy-on-write for snapshots?

--
Ventura
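For context, the quickest way I know of to see which storage backend a given daemon is actually using (the server info should report a "storage" field, e.g. btrfs or dir - I believe "lxc info" with no container name prints it):

# describe the LXD server itself; look for the "storage:" line in the output
lxc info | grep -i storage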
Re: [lxc-users] lxd and vagrant
On Wed, Mar 9, 2016 at 7:55 PM, Sig Lange wrote:
> Is anyone aware or has anyone started development of a vagrant provider for lxd?

Not yet, but I found [1], where LXD is used instead of Vagrant (and VirtualBox, but it is the same with other plugins).

The places I've found [1] for vagrant which mention available plugins do not list lxd as a possible plugin. I do however see vagrant-lxc, which works with the original lxc commands, such as lxc-create and friends. vagrant-lxc works well (I used it before) but it only works with privileged containers by default (you need sudo thingies). Some code was tested in branches for unprivileged containers but didn't get merged to production. But well, we can see if some devs are interested in developing an lxd plugin for vagrant.

[1] https://roots.io/linux-containers-lxd-as-an-alternative-to-virtualbox-for-wordpress-development/

Yonsy Solis
[lxc-users] Libpam-cgfs errors
Hi!

I got these error messages since the last libpam-cgfs update. Everything seems to be working fine on my installations, which are very basic installations.

PAM-CGFS: Failed to chown /sys/fs/cgroup/freezer//user/root/3 to 0:0: No such file or directory: 1 Time(s)
PAM-CGFS: Failed to chown /sys/fs/cgroup/memory//user/root/3 to 0:0: No such file or directory: 2 Time(s)
PAM-CGFS: Failed to create a cgroup for user root: 8 Time(s)
PAM-CGFS: Failed to enter user cgroup /user/root/1 for user root: 1 Time(s)
PAM-CGFS: Failed to enter user cgroup /user/root/3 for user root: 4 Time(s)

Thanks!

Michel
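For what it's worth, the directories the messages point at can be checked directly - a quick diagnostic along these lines (note the doubled slash in the logged paths, e.g. freezer//user/root/3):

# do the per-user cgroup directories pam_cgfs complains about actually exist, and who owns them?
ls -ld /sys/fs/cgroup/freezer/user/root /sys/fs/cgroup/memory/user/root
# and which cgroups does the current login session actually sit in?
cat /proc/self/cgroup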
Re: [lxc-users] Clean output for lxc list
On 10/03/16 18:54, Stéphane Graber wrote:
> We've had a few folks ask for a --format option of some sort which
> would allow them to get the info in csv, tabular or json/yaml format.

One simple approach could be that if one uses the default lxc list with anything other than -c (or --columns), then the current ascii border eye-candy remains intact, but as soon as -c is used, the ascii border (and header) is removed and the vertical bars are replaced with tabs. The logic being that if someone uses -c then they know what they are looking for and do not need the extra visual support. i.e.:

~ lxc list gc3
+------+---------+--------------------+------+------------+-----------+
| NAME | STATE   | IPV4               | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+--------------------+------+------------+-----------+
| gc3  | RUNNING | 192.168.0.3 (eth0) |      | PERSISTENT | 0         |
+------+---------+--------------------+------+------------+-----------+

~ lxc list gc3 -c ns46tS
gc3    RUNNING    192.168.0.3 (eth0)    PERSISTENT    0
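To make the intent concrete, a script could then consume the -c output directly, something like this (hypothetical, since today -c keeps the borders and header):

# hypothetical: assumes the proposed behaviour where -c drops the border and header
# and separates the columns with tabs
state=$(lxc list gc3 -c s)
[ "$state" = "RUNNING" ] && echo yay || echo nay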
Re: [lxc-users] Clean output for lxc list
2016-03-10 5:54 GMT-03:00 Stéphane Graber:
> This would be a good addition but so far nobody stepped up to actually
> implement it...
>
> https://github.com/lxc/lxd/issues/882

The problem is, the current output format is all but user friendly. Even `lxc-ls -f` is better. I know 2.0 is in rc, but please consider something simpler for all the lxd commands. lxc image list is especially hard on the eyes:

zoolook@venkman:~$ lxc image list ubuntu:
+---------------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS               | FINGERPRINT  | PUBLIC | DESCRIPTION                                 | ARCH   | SIZE     | UPLOAD DATE                   |
+---------------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| p (11 more)         | 4029ee764a39 | yes    | ubuntu 12.04 LTS amd64 (release) (20160222) | x86_64 | 155.25MB | Feb 22, 2016 at 12:00am (UTC) |
+---------------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| p/20150728 (5 more) | bcbb04aa0e05 | yes    | ubuntu 12.04 LTS amd64 (release) (20150728) | x86_64 | 153.72MB | Jul 28, 2015 at 12:00am (UTC) |
+---------------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| p/20150819 (5 more) | 567599c74c31 | yes    | ubuntu 12.04 LTS amd64 (release) (20150819) | x86_64 | 152.91MB | Aug 19, 2015 at 12:00am (UTC) |
+---------------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| p/20150906 (5 more) | 3ea145e2b8b3 | yes    | ubuntu 12.04 LTS amd64 (release) (20150906) | x86_64 | 154.69MB | Sep 6, 2015 at 12:00am (UTC)  |
+---------------------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| p/20150930 (5 more) | 9bc955ede4ce | yes    | ubuntu 12.04 LTS amd64 (release) (20150930) | x86_64 | 153.86MB | Sep 30, 2015 at 12:00am (UTC) |

Thanks!
Re: [lxc-users] Clean output for lxc list
On Thu, Mar 10, 2016 at 03:01:01PM +0700, Fajar A. Nugraha wrote:
> On Thu, Mar 10, 2016 at 2:47 PM, Stéphane Graber wrote:
> > On Thu, Mar 10, 2016 at 02:37:47PM +0700, Fajar A. Nugraha wrote:
> >> On Thu, Mar 10, 2016 at 11:20 AM, Mark Constable wrote:
> >> >
> >> > I'm not sure if this is already possible but a suggestion for lxc list
> >> > would be to provide a "clean" output option without ascii borders. Using
> >> > mysql as an example it would be neat if something like this was
> >> > possible...
> >> >
> >> > [[ $(lxc list $HOST -BN -cs) = RUNNING ]] && echo yay || echo nay
> >
> >> Is there a particular reason you can't work with what's currently
> >> available?
> >>
> >> # lxc-ls -1 --running
> >> api
> >
> > Because he's using LXD, those tools don't work with LXD containers.
>
> Ooops :)
>
> "lxc help list" doesn't provide any equivalent of "lxc-ls -1" or
> "--running", so the closest equivalent right now would be to parse the
> output? Or is there a better way?

We've had a few folks ask for a --format option of some sort which would allow them to get the info in csv, tabular or json/yaml format.

This would be a good addition but so far nobody stepped up to actually implement it...

https://github.com/lxc/lxd/issues/882

>
> # lxc list -c ns | awk '$4~/RUNNING/ {print $2}'
> xenial
>
> --
> Fajar

--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com
Re: [lxc-users] Clean output for lxc list
On Thu, Mar 10, 2016 at 2:47 PM, Stéphane Graber wrote:
> On Thu, Mar 10, 2016 at 02:37:47PM +0700, Fajar A. Nugraha wrote:
>> On Thu, Mar 10, 2016 at 11:20 AM, Mark Constable wrote:
>> >
>> > I'm not sure if this is already possible but a suggestion for lxc list
>> > would be to provide a "clean" output option without ascii borders. Using
>> > mysql as an example it would be neat if something like this was possible...
>> >
>> > [[ $(lxc list $HOST -BN -cs) = RUNNING ]] && echo yay || echo nay
>>
>> Is there a particular reason you can't work with what's currently available?
>>
>> # lxc-ls -1 --running
>> api
>
> Because he's using LXD, those tools don't work with LXD containers.

Ooops :)

"lxc help list" doesn't provide any equivalent of "lxc-ls -1" or "--running", so the closest equivalent right now would be to parse the output? Or is there a better way?

# lxc list -c ns | awk '$4~/RUNNING/ {print $2}'
xenial

--
Fajar