Re: [lxc-users] Question about creating a container from an ISO
On Fri, Mar 24, 2017 at 6:44 AM, Michel RENON wrote:
> Hi,
>
> I'm beginning with lxc and containers.
>
> I downloaded an ISO that is an installer. I already used it to create
> a vm in virtualbox.
>
> That ISO is based on debian installer and it adds some telephony
> components: xivo.iso.

Does it access the hardware directly (e.g. a T1 PCI card)? If yes, containers (including lxc) might not work.

> Now I would like to use it with lxc: create an empty lxc container
> and install xivo from the iso.
>
> My problem is that all the documentation I could find starts with an
> existing lxc image, never an empty one.
>
> Is it possible to create and install a container from an ISO?

Not directly, no.

> If yes, can you point me to some documentation?

You can copy the root filesystem from the existing virtualbox installation. Replace the rootfs of an existing lxc container with it, then apply the lxc-specific configuration (e.g. see configure_debian in https://github.com/lxc/lxc/blob/master/templates/lxc-debian.in).

--
Fajar

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
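Fajar's rootfs-copy suggestion can be sketched roughly as follows. This is a minimal sketch, not from the thread: the container name (xivo), the mount point of the VirtualBox disk, and the helper name are all placeholders, and the VirtualBox disk image is assumed to be already mounted on the host.

```shell
# copy_rootfs SRC DST: copy a root filesystem tree, preserving
# ownership, permissions, symlinks and timestamps (cp -a).
copy_rootfs() {
    cp -a "$1"/. "$2"/
}

# Hypothetical usage, after mounting the VirtualBox disk on the host:
#   lxc-create -n xivo -t debian      # throwaway container, kept for its config
#   copy_rootfs /mnt/xivo-vm /var/lib/lxc/xivo/rootfs
# then apply the lxc-specific tweaks (see configure_debian in lxc-debian.in).
```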
[lxc-users] Question about creating a container from an ISO
Hi,

I'm beginning with lxc and containers.

I downloaded an ISO that is an installer. I already used it to create a vm in virtualbox. That ISO is based on debian installer and it adds some telephony components: xivo.iso.

Now I would like to use it with lxc: create an empty lxc container and install xivo from the iso.

My problem is that all the documentation I could find starts with an existing lxc image, never an empty one. Is it possible to create and install a container from an ISO? If yes, can you point me to some documentation?

Cheers,
Michel
[lxc-users] Enabling real time support in containers
We have a need to create real time threads in some of our processes and I've been unable to configure an LXC container to support this. One reference I came across was to set a container's real time bandwidth via the lxc.cgroup.cpu.rt_runtime_us parameter in its config file:

    lxc.utsname = test01
    lxc.include = /var/lib/lxc/centos.conf
    lxc.network.veth.pair = test01
    lxc.network.hwaddr = fe:d6:e8:e2:fa:db
    lxc.rootfs = /var/lib/lxc/test01/rootfs
    lxc.rootfs.backend = dir
    lxc.cgroup.cpuset.cpus = 0,1
    lxc.cgroup.memory.limit_in_bytes = 2097152000
    lxc.cgroup.memory.memsw.limit_in_bytes = 3170893824
    lxc.cgroup.cpu.rt_runtime_us = 475000

This container starts up fine if lxc.cgroup.cpu.rt_runtime_us is 0 (zero). Any other value is rejected, which means real time threads cannot be created in this container. What am I missing to get this to work? I am using lxc version 2.0.6 under CentOS 7.2. The container is created from a custom CentOS 7.2 image.

Thanks for the help.

Peter Steele
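A likely cause, sketched below as an assumption rather than a confirmed diagnosis: the kernel's RT bandwidth accounting is hierarchical, so a child cgroup cannot be granted more cpu.rt_runtime_us than its parent holds, and the parent cgroup lxc places containers under typically defaults to 0. The paths and the 950000 figure below are illustrative, assuming cgroup v1 and a parent cgroup named "lxc"; adjust for your distro.

```shell
# A child cgroup's cpu.rt_runtime_us must fit inside its parent's
# allocation; the parent defaults to 0, so any non-zero container
# value is rejected until the host grants the parent some RT runtime.

PARENT_RT=950000      # microseconds per period to grant the lxc parent cgroup
CONTAINER_RT=475000   # what lxc.cgroup.cpu.rt_runtime_us requests

# Sanity check: the container's budget must fit inside the parent's.
if [ "$CONTAINER_RT" -le "$PARENT_RT" ]; then
    echo "allocation fits"
else
    echo "allocation too large"
fi

# On the host, as root, before starting the container (hypothetical
# cgroup-v1 path):
#   echo $PARENT_RT > /sys/fs/cgroup/cpu/lxc/cpu.rt_runtime_us
```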
[lxc-users] potential bug: /etc/resolv.conf
Hey,

the rootfs of several containers on images.linuxcontainers.org contains the following /etc/resolv.conf, even when the resolvconf package is not installed:

    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
    # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
    nameserver 10.0.3.1
    search lxc

When using lxd with a static ip configuration this file can break the DNS of containers. I wasn't sure where to report this, but the file is probably created by one of the build scripts in the lxc-ci repository and not cleaned up. This affects at least the debian and centos containers.

cheers,
Tilak
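Until the images are fixed, one workaround is to overwrite the stale file with the container's real static DNS settings. A minimal sketch; the helper name and the nameserver address are examples, not from the thread:

```shell
# write_resolv_conf NAMESERVER FILE: replace a stale
# resolvconf-generated file with a single static nameserver entry.
# Run inside the container, or against its rootfs from the host.
write_resolv_conf() {
    printf 'nameserver %s\n' "$1" > "$2"
}

# Hypothetical usage inside the container:
#   write_resolv_conf 192.168.1.1 /etc/resolv.conf
```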
[lxc-users] Mount host filesystem under /sys/class within a container
Hi all, sorry for the very long post.

As I said in a previous email, I'm trying to play with LXD on top of a Raspberry Pi 3. My goal is to mount the GPIO pseudo filesystem into a container so that I can access the GPIO pins from within the container. I came to the following conclusions:

- if I create a privileged container (which I would prefer to avoid anyway) I can see the /sys/class/gpio filesystem as root but I cannot write to it. For example, trying the classical "echo 1 > /sys/class/gpio/export", I get "bash: export: Read-only file system"
- if I create an unprivileged container I cannot even enter the folder /sys/class/gpio as root. I can enter /sys/class, but there I see that the gpio folder is owned by nobody.nogroup with 770 permissions.

I solved the issue with a FUSE filesystem running on the Raspberry that mirrors /sys/class/gpio, mounted under a different path, i.e., /gpio_mnt/sys/class/gpio. This is the script I created:

    lxc launch ubuntu:16.04 test1
    MYUID=`sudo ls -l /var/lib/lxd/containers/test1/rootfs/ | grep root | awk '{print $3}'`
    lxc exec test1 -- addgroup gpio
    lxc exec test1 -- usermod -a -G gpio ubuntu
    MYGID=$(($MYUID + `lxc exec test1 -- sed -nr "s/^gpio:x:([0-9]+):.*/\1/p" /etc/group`))
    sudo mkdir -p /gpio_mnt/test1
    sudo chmod 777 -R /gpio_mnt/
    sudo mkdir -p /gpio_mnt/test1/sys/devices/platform/soc/3f20.gpio
    sudo mkdir -p /gpio_mnt/test1/sys/class/gpio
    sudo chown "$MYUID"."$MYGID" -R /gpio_mnt/test1/sys/
    lxc exec test1 -- mkdir -p /gpio_mnt/sys/class/gpio
    lxc exec test1 -- mkdir -p /gpio_mnt/sys/devices/platform/soc/3f20.gpio
    lxc config device add test1 gpio disk source=/gpio_mnt/test1/sys/class/gpio path=/gpio_mnt/sys/class/gpio
    lxc config device add test1 devices disk source=/gpio_mnt/test1/sys/devices/platform/soc/3f20.gpio path=/gpio_mnt/sys/devices/platform/soc/3f20.gpio

    # This is the mirroring through the FUSE filesystem
    cd /home/ubuntu/test_gpio_mirroring/
    sudo node node-folder-mirroring.js /sys/devices/platform/soc/3f20.gpio /gpio_mnt/test1/sys/devices/platform/soc/3f20.gpio -o uid=$MYUID -o gid=$MYGID -o allow_other &> log_devices_test1 &
    sudo node node-folder-mirroring.js /sys/class/gpio /gpio_mnt/test1/sys/class/gpio -o uid=$MYUID -o gid=$MYGID -o allow_other &> log_gpio_test1 &

I would like to mount not under /gpio_mnt/sys/class/gpio but under /sys/class/gpio itself, so that standard Raspberry libraries work inside the container without any modification, while I can still capture the syscalls with the FUSE filesystem mediating access to the GPIO pins. How can I do that? Am I missing something here?

Thanks,
Francesco

--
Dr. Francesco Longo, PhD
Assistant Professor
Dipartimento di Ingegneria
Università degli Studi di Messina
address: Contrada di Dio (S. Agata), 98166, Messina, Italy
email: flo...@unime.it
phone: +39 090 3977335
fax: +39 090 3977471
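One avenue worth trying, as an untested sketch: instead of attaching the mirror under /gpio_mnt, have lxc bind-mount it over the standard path at container start via a raw lxc mount entry, which is processed after sysfs is set up. Whether this is permitted depends on the LXD version and the container's AppArmor profile, so treat it as an experiment, not a confirmed solution.

```shell
# Untested sketch: bind-mount the FUSE mirror over /sys/class/gpio at
# container start, shadowing the real sysfs path inside the container.
# May be blocked by the LXD version or AppArmor profile in use.
lxc config set test1 raw.lxc \
    'lxc.mount.entry = /gpio_mnt/test1/sys/class/gpio sys/class/gpio none bind,create=dir 0 0'
lxc restart test1
```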
Re: [lxc-users] user 'ubuntu' does not exist within LXD container
On Thu, Mar 23, 2017 at 12:41 PM, Francesco Longo wrote:
> Hi all,
> I'm playing with Raspberry Pi virtualization, i.e., in a few words,
> creating LXD containers on top of a Raspberry Pi and attaching to them
> a couple of FUSE filesystems mirroring the GPIO /sys/class/gpio
> filesystem.
>
> I created a simple script that you can find here:
> https://github.com/flongo82/raspberry_virtualization/blob/master/launch_virtual_rasp.sh
>
> A first issue I'm dealing with is that when using lxc exec to add the
> ubuntu user to the gpio group, it says that the ubuntu user does not
> exist. But if I log into the container the user is actually there,
> though of course it is not part of the gpio group, given that the
> command failed.
>
> This is the output of the script:
>
> Creating virtual rasp test!
> Creating test
> Starting test
> Adding group `gpio' (GID 1000) ...
> Done.
> usermod: user 'ubuntu' does not exist
> Device gpio added to test
> Device devices added to test
>
> Any idea why this is happening? Is it possible that I need to wait a
> while before issuing this kind of lxc exec command after creating the
> container?

You can look into the image at /var/lib/lxd/images/ and you will see that the "ubuntu" account is not preinstalled in the image. In there you can also see that there are cloud-init templates that do things like creating users. My quick look did not show which template creates the "ubuntu" user, so have a better look in there. "cloud-init" runs after the container is created, so it makes sense that the "ubuntu" account is not available immediately after "lxc launch" returns.

Simos

> I'm using LXD version 2.12 on top of a
> ubuntu-16.04-preinstalled-server-armhf+raspi3.img.xz image.
>
> Thanks,
> Francesco
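Simos's explanation suggests the practical fix: wait until cloud-init has created the user before calling usermod. A small retry helper, as a sketch; the container name, timeout, and helper name are illustrative, not from the thread.

```shell
# wait_for TIMEOUT CMD...: run CMD once per second until it succeeds or
# TIMEOUT seconds have passed. Returns 0 on success, 1 on timeout.
wait_for() {
    timeout=$1; shift
    waited=0
    until "$@" >/dev/null 2>&1; do
        waited=$((waited + 1))
        [ "$waited" -ge "$timeout" ] && return 1
        sleep 1
    done
    return 0
}

# Hypothetical usage: block until cloud-init has created the "ubuntu"
# user, then do the group change.
#   wait_for 60 lxc exec test -- id ubuntu
#   lxc exec test -- usermod -a -G gpio ubuntu
```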
[lxc-users] user 'ubuntu' does not exist within LXD container
Hi all,

I'm playing with Raspberry Pi virtualization, i.e., in a few words, creating LXD containers on top of a Raspberry Pi and attaching to them a couple of FUSE filesystems mirroring the GPIO /sys/class/gpio filesystem.

I created a simple script that you can find here:
https://github.com/flongo82/raspberry_virtualization/blob/master/launch_virtual_rasp.sh

A first issue I'm dealing with is that when using lxc exec to add the ubuntu user to the gpio group, it says that the ubuntu user does not exist. But if I log into the container the user is actually there, though of course it is not part of the gpio group, given that the command failed.

This is the output of the script:

    Creating virtual rasp test!
    Creating test
    Starting test
    Adding group `gpio' (GID 1000) ...
    Done.
    usermod: user 'ubuntu' does not exist
    Device gpio added to test
    Device devices added to test

Any idea why this is happening? Is it possible that I need to wait a while before issuing this kind of lxc exec command after creating the container?

I'm using LXD version 2.12 on top of a ubuntu-16.04-preinstalled-server-armhf+raspi3.img.xz image.

Thanks,
Francesco