[lxc-users] lxc exec - is there a way to run command in the background?

2016-01-28 Thread Tomasz Chmielewski
Is there a way to run a command in the background when using "lxc exec"?


It doesn't seem to work for me.

# lxc exec container -- sleep 2h &
[2] 13566
#

[2]+  Stopped lxc exec container -- sleep 2h


This also doesn't work:

# lxc exec container -- "sleep 2h &"
#


Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc exec - is there a way to run command in the background?

2016-01-28 Thread Serge Hallyn
Hm, well at least

nohup lxc exec container -- sleep 2h &

works for me.  I would have expected --mode=non-interactive to
work, but it doesn't.
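
If you need the process to keep running regardless of the exec session, another
option is to background it inside the container rather than on the host. A rough
sketch, untested here:

# run detached inside the container; output is discarded
lxc exec container -- sh -c 'nohup sleep 2h >/dev/null 2>&1 &'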

Quoting Tomasz Chmielewski (man...@wpkg.org):
> Is there a way to run a command in the background when using "lxc exec"?
> 
> It doesn't seem to work for me.
> 
> # lxc exec container -- sleep 2h &
> [2] 13566
> #
> 
> [2]+  Stopped lxc exec container -- sleep 2h
> 
> 
> This also doesn't work:
> 
> # lxc exec container -- "sleep 2h &"
> #
> 
> 
> Tomasz Chmielewski
> http://wpkg.org
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CGManager and LXCFS causing lxc-start to fail for unprivileged containers

2016-01-28 Thread Serge Hallyn
Quoting Akshay Karle (akshay.a.ka...@gmail.com):
> Hello,
> 
> Recently, after upgrading lxc on Ubuntu 14.04.3 LTS, I noticed that it
> included the libpam-cgm package. I started to see some weird problems with
> cgroups and ownership when trying to start an unprivileged container in
> cases where the user running the containers is not the same as the user
> who logged in to the machine (e.g., ssh in, change user, and then starting
> the container fails). I believe this may have to do with the recent changes to
> libpam-cgm, lxcfs and cgfs, as I didn't have any trouble before. After
> changing the user, we used to unset the XDG env variables and run the cgm
> commands to set up cgroups, which stopped working recently.
> 
> *lxc-start failure trace* (full stack trace attached):
>   lxc-start 1454029959.193 ERRORlxc_utils -
> utils.c:setproctitle:1455 - Invalid argument - setting cmdline failed
>   lxc-start 1454029959.581 ERRORlxc_cgfs -
> cgfs.c:handle_cgroup_settings:2091 - Permission denied - failed to set
> memory.use_hierarchy to 1; continuing
>   lxc-start 1454029959.581 ERRORlxc_cgfs -
> cgfs.c:lxc_cgroupfs_create:849 - Could not set clone_children to 1 for
> cpuset hierarchy in parent cgroup.
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/perf_event/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/memory/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/hugetlb/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/freezer/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/devices/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/cpuset/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/cpuacct/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/cpu/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
> - cgroup_rmdir: failed to open /run/lxcfs/controllers/blkio/user/test/0
>   lxc-start 1454029959.581 ERRORlxc_start - start.c:lxc_spawn:970 -
> failed creating cgroups
>   lxc-start 1454029959.581 ERRORlxc_start -
> start.c:__lxc_start:1213 - failed to spawn 'test'
>   lxc-start 1454029965.093 ERRORlxc_start_ui - lxc_start.c:main:344
> - The container failed to start.
> 
> 
> *Steps to reproduce:*
> * Upgrade LXC: $ sudo apt-get upgrade cgmanager libcgmanager0 lxc libcap2
> libseccomp2 ruby-dev lxc-dev
> * Add the management of all controllers to the pam module. Replace the
> freezer in /etc/pam.d/common-session with all controllers:
> session optional pam_cgm.so -c
> freezer,perf_event,memory,cpu,cpuacct,cpuset,blkio,hugetlb,devices

Note: just dropping the '-c freezer' argument will also tell pam_cgm.so
to use all controllers.
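
In other words, the line in /etc/pam.d/common-session can simply be (a minimal
sketch of the same change):

session optional pam_cgm.so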

The debug info above says lxc is using cgfs and not cgmanager.  Exactly
which lxc package version are you using?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Connecting container to tagged VLAN

2016-01-28 Thread Joshua Schaeffer
On Wed, Jan 27, 2016 at 6:09 PM, Fajar A. Nugraha wrote:

>
>
>> eth2 already works. I set it up for testing outside of all containers
>> (i.e. on the host only). From the host:
>>
>>
> That doesn't match what you said earlier.
>

It actually does. Remember that this LXC host is a virtual machine running
off of VMware, which makes this whole situation more complex. I'll try to
clarify.

VLAN10, the native vlan, is 192.168.54.0/25; it's my management vlan.
VLAN500 is 10.240.78.0/24.

eth1 and eth2 are set up to connect to vlan500 because they were configured that
way through VMware. Normally you would be correct: on a physical server
eth2 would only be able to reach the native vlan, because no tagging
information is provided. However, VMware allows you to tag a NIC (it's
actually called a port group, but it is essentially VMware's way of saying
a NIC) from outside the VM guest. If you do this (as I have) then you don't
(and shouldn't) need to tag anything on the VM guest itself. So just looking
at the guest can make the setup appear incorrect or confusing.

My original problem was that I was tagging the port group (a.k.a. VMware's NIC)
and also tagging eth1 inside the VM guest (a.k.a. the LXC host). Clearly
that causes problems, and because I was tagging eth1 but not eth2, that is where
the problem resided. I was trying to mimic a setup I have in my home lab,
where I tag an Ethernet device, add it to a bridge, then use that bridge in
a container, but my home lab uses a physical LXC host. Hopefully I've
explained it in a way that clears this up.
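
To make it concrete, the untagged side on the LXC host ends up looking roughly
like this (interface and bridge names are examples only), since the VMware port
group already carries the VLAN500 tag:

# /etc/network/interfaces on the LXC host (VM guest); needs bridge-utils,
# and no vlan subinterface is defined here
auto br500
iface br500 inet manual
    bridge_ports eth1

# the container config then just uses the bridge and gets its own
# address on 10.240.78.0/24
lxc.network.type = veth
lxc.network.link = br500
lxc.network.flags = up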

Either way, I have that problem resolved. Now I'm just wondering why the
container is not adding the gateway's MAC address when it ARPs for it (as
I explained in my last email).


>
> What I meant was: check that eth1 works on the host. If eth2 is on the same
> network, it might interfere with settings. So disable eth2 first, then test
> eth1 on the host. Without bridging.
>

Okay that makes sense.

Thanks,
Joshua
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Console issues

2016-01-28 Thread Peter Steele
As I've explained on this mailing list before, I create my own custom
CentOS template that has some history: it was initially used as a
template for KVM based virtual machines, then OpenVZ based containers,
then libvirt-lxc containers, and now finally we're tackling LXC. One
issue I've noted is that when I create a container using my custom
template, lxc-console does not work. When I connect I get something like
this:


# lxc-console -n vm-00
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

and then at this point it basically hangs, just consuming whatever I
type but not attaching to the console or giving me any feedback. If I use
the same config file for the container but use the CentOS download template
instead, the console works as expected.


In comparing the two cases, I've noticed that when I'm running with the 
downloaded template, I get a set of five getty processes:


# ps aux|grep agetty
root37  0.0  0.0   6452   800 pts/0Ss+  22:45   0:00 
/sbin/agetty --noclear --keep-baud pts/0 115200 38400 9600 vt220
root38  0.0  0.0   6452   796 pts/1Ss+  22:45   0:00 
/sbin/agetty --noclear --keep-baud pts/1 115200 38400 9600 vt220
root40  0.0  0.0   6452   804 pts/2Ss+  22:45   0:00 
/sbin/agetty --noclear --keep-baud pts/2 115200 38400 9600 vt220
root41  0.0  0.0   6452   808 pts/3Ss+  22:45   0:00 
/sbin/agetty --noclear --keep-baud pts/3 115200 38400 9600 vt220
root42  0.0  0.0   6452   796 lxc/console Ss+ 22:45   0:00 
/sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt220


Using my own template, I only get a single console process:

# ps aux|grep agetty
root   279  0.0  0.0   6424   792 lxc/console Ss+ 13:57   0:00 
/sbin/agetty --noclear --keep-baud console 115200 38400 9600


If I go ahead and start the processes manually, then lxc-console works
as expected. My question is: what am I missing in my template that
causes this behavior, or more specifically, that results in the required
agetty processes failing to start when the container is started?
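
A comparison that should highlight the difference (assuming both rootfs are
systemd-based CentOS 7) is to list the getty units in each container and see
which ones are enabled under the download template but not under mine:

# systemctl list-unit-files | grep -i getty
# systemctl list-units --all | grep -i getty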


Peter

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Lxc container, add custom route

2016-01-28 Thread Florian Leparoux

OK, so I will use the configuration inside the CT.

Thank you for your reply.

Regards,

On 27/01/2016 22:46, Fajar A. Nugraha wrote:
On Wed, Jan 27, 2016 at 4:55 PM, Florian Leparoux wrote:


Thank you for your reply

I've created the file and now I'm not able to restart the network
inside the CT


That's an expected result. You're not supposed to run commands like
"ifdown" or "service network restart" (or anything that involves
bringing a network interface down) when you set up networking inside the
lxc config file.


Choose one:
- set up networking inside the lxc config file (sketched below), OR
- set up networking inside the container OS config files

Don't use both.
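
For example, the lxc-config-file option might look like this (placeholder
values), with nothing network-related left in the container's own OS config
files:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.ipv4 = 10.0.3.50/24
lxc.network.ipv4.gateway = 10.0.3.1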

--
Fajar


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD default NS mappings

2016-01-28 Thread Tycho Andersen
On Wed, Jan 27, 2016 at 05:09:52PM +0100, david.an...@bli.uzh.ch wrote:
> Hi
> 
> I have noticed that LXD uses some UIDs/GIDs by default that I haven't set up and
> which aren't represented in the /etc/sub[ug]id files.
> Interestingly, they are different from instance to instance: on one, root is
> mapped to 1'000'000 (not 100'000); on another it's 265'536.
> Now when I copy the rootfs of a container offline between different LXD or 
> LXC instances according to 
> http://stackoverflow.com/questions/33377916/migrating-lxc-to-lxd doesn't the 
> UID/GID mapping need to be the same?

No, the maps can be different.

> If not, why not?

Because LXD uidshifts the filesystems for you when you do `lxc copy`.
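
So, for example, copying between two LXD hosts can be as simple as the following
(remote name, address and container name are placeholders, and the target must
already be reachable and trusted); LXD shifts the rootfs to the target's map
during the copy:

lxc remote add otherhost 192.0.2.10
lxc copy mycontainer otherhost:mycontainer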

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Adding /dev/ppp to container under lxd

2016-01-28 Thread Matt Willsher
Hi,

I'm trying to add /dev/ppp to a container so I can initiate a PPPoE connection 
from inside the container.

lxd is 0.27

I have the following configuration on the container, derived from 
https://github.com/lxc/lxd/blob/master/specs/configuration.md#type-unix-char

config:
  linux.kernel_modules: pppoe
  ppp:
    major: "108"
    minor: "0"
    mode: "0600"
    path: /dev/ppp
    type: unix-char
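
For reference, I believe the CLI equivalent of that device entry would be
something along these lines (untested):

lxc config device add <container> ppp unix-char path=/dev/ppp major=108 minor=0 mode=0600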

The device appears in the container:

crw--- 1 root root 108, 0 Jan 28 10:06 /dev/ppp

Access to /dev/ppp gets denied:

# cat /dev/ppp 
cat: /dev/ppp: Operation not permitted

# ifup pppoe0
Plugin rp-pppoe.so loaded.
Couldn't open the /dev/ppp device: Operation not permitted
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open 
moddep file '/lib/modules/4.2.0-25-generic/modules.dep.bin'
Linux kernel does not support PPPoE -- are you running 2.4.x?
Failed to bring up pppoe0.

Is there some other configuration that needs to be set on the container to
allow access to /dev/ppp?

Thanks,
Matt
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] CGManager and LXCFS causing lxc-start to fail for unprivileged containers

2016-01-28 Thread Akshay Karle
Hello,

Recently, after upgrading lxc on Ubuntu 14.04.3 LTS, I noticed that it
included the libpam-cgm package. I started to see some weird problems with
cgroups and ownership when trying to start an unprivileged container in
cases where the user running the containers is not the same as the user
who logged in to the machine (e.g., ssh in, change user, and then starting
the container fails). I believe this may have to do with the recent changes to
libpam-cgm, lxcfs and cgfs, as I didn't have any trouble before. After
changing the user, we used to unset the XDG env variables and run the cgm
commands to set up cgroups, which stopped working recently.

*lxc-start failure trace* (full stack trace attached):
  lxc-start 1454029959.193 ERRORlxc_utils -
utils.c:setproctitle:1455 - Invalid argument - setting cmdline failed
  lxc-start 1454029959.581 ERRORlxc_cgfs -
cgfs.c:handle_cgroup_settings:2091 - Permission denied - failed to set
memory.use_hierarchy to 1; continuing
  lxc-start 1454029959.581 ERRORlxc_cgfs -
cgfs.c:lxc_cgroupfs_create:849 - Could not set clone_children to 1 for
cpuset hierarchy in parent cgroup.
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/perf_event/user/test/0
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/memory/user/test/0
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/hugetlb/user/test/0
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/freezer/user/test/0
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/devices/user/test/0
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/cpuset/user/test/0
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/cpuacct/user/test/0
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/cpu/user/test/0
  lxc-start 1454029959.581 ERRORlxc_cgfs - cgfs.c:cgroup_rmdir:166
- cgroup_rmdir: failed to open /run/lxcfs/controllers/blkio/user/test/0
  lxc-start 1454029959.581 ERRORlxc_start - start.c:lxc_spawn:970 -
failed creating cgroups
  lxc-start 1454029959.581 ERRORlxc_start -
start.c:__lxc_start:1213 - failed to spawn 'test'
  lxc-start 1454029965.093 ERRORlxc_start_ui - lxc_start.c:main:344
- The container failed to start.


*Steps to reproduce:*
* Upgrade LXC: $ sudo apt-get upgrade cgmanager libcgmanager0 lxc libcap2
libseccomp2 ruby-dev lxc-dev
* Add the management of all controllers to the pam module. Replace the
freezer in /etc/pam.d/common-session with all controllers:
session optional pam_cgm.so -c
freezer,perf_event,memory,cpu,cpuacct,cpuset,blkio,hugetlb,devices
* Add a test user : $ sudo useradd test -m
* Set up the lxc configuration file for the test user:
$ sudo su - test
$ mkdir -p ~/.config/lxc
$ cat > ~/.config/lxc/default.conf
lxc.include = /etc/lxc/default.conf
# you may have to change these to your own subuids/subgids (example entries below)
lxc.id_map = u 0 231072 65536
lxc.id_map = g 0 231072 65536
* Create container: $ lxc-create -n test -t download -- -d ubuntu -r trusty
-a amd64
* Run the container: $ lxc-start -n test -d -l debug -o container.log
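
The id_map lines above assume the host delegates matching ranges to the test
user; the corresponding entries (example values) look like this in both
/etc/subuid and /etc/subgid:

test:231072:65536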

*System info:*
$ uname -r
3.13.0-76-generic

$ dpkg -l | grep lxc
ii  liblxc1  1.1.5-0ubuntu5~ubuntu14.04.1~ppa1
amd64Linux Containers userspace tools (library)
ii  lxc  1.1.5-0ubuntu5~ubuntu14.04.1~ppa1
amd64Linux Containers userspace tools
ii  lxc-dev  1.1.5-0ubuntu5~ubuntu14.04.1~ppa1
amd64Linux Containers userspace tools (development)
ii  lxc-templates1.1.5-0ubuntu5~ubuntu14.04.1~ppa1
amd64Linux Containers userspace tools (templates)
ii  lxcfs0.17-0ubuntu2~ubuntu14.04.1~ppa1
 amd64FUSE based filesystem for LXC
ii  python3-lxc  1.1.5-0ubuntu5~ubuntu14.04.1~ppa1
amd64Linux Containers userspace tools (Python 3.x bindings)

$ dpkg -l | grep cgm
ii  cgmanager0.39-2ubuntu5~ubuntu14.04.1~ppa1
 amd64Central cgroup manager daemon
ii  libcgmanager0:amd64  0.39-2ubuntu5~ubuntu14.04.1~ppa1
 amd64Central cgroup manager daemon (client library)
ii  libpam-cgm   0.39-2ubuntu5~ubuntu14.04.1~ppa1
 amd64Central cgroup manager daemon (PAM module)

I would appreciate some help on this as I have been trying to figure out
the problem for the last few days now.


Attachment: cli.log (binary data)

Re: [lxc-users] LXC not responsive after update

2016-01-28 Thread Viktor Trojanovic


On 28.01.2016 14:33, Andrey Repin wrote:

Greetings, Viktor Trojanovic!


Hi Bostjan,
I sent a reply with an attachment a week ago but it still was not
approved.

The reply came through to the list just fine.
https://lists.linuxcontainers.org/pipermail/lxc-users/2016-January/010862.html



Hi Andrey,

OK, my mistake, I didn't see it.

I still hope someone can make sense of my issue. The container is still
running without problems and I don't have any immediate need to
administer it, but I'm worried about what would happen in case of an
(unintentional) reboot. Even if it would boot fine, I'd still like to
understand where this issue is coming from and how it can be solved
without rebooting.


Viktor
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users