Re: [lxc-users] 4.0.6 regression: /proc/sys/net/ipv4/ip_forward: Read-only file system

2021-02-04 Thread Harald Dunkel

On 2/4/21 3:32 PM, Harald Dunkel wrote:


How come it worked before? Hopefully I am not too blind to see,
but the git log doesn't indicate that this behavior was changed.



PS: I found

af9dd246df7c99740f153682e0eb427f1426693d
unmounted proc/sys/net if dropping CAP_NET_ADMIN

apparently introducing the problem for 4.0.6, and

952ab618268b4af2773ed9d8fade817363c28a5c
conf: fix CAP_NET_ADMIN-based mount handling

563ec46266b8967f0ee60e0032bbe66b3b37207c
conf: fix containers retaining CAP_NET_ADMIN

providing the fix (hopefully). Did I miss other related fixes?

Since breaking /proc is a very serious problem, I wonder if it would
be reasonable to do an early lxc 4.0.7 release including these fixes?


Regards
Harri
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] 4.0.6 regression: /proc/sys/net/ipv4/ip_forward: Read-only file system

2021-02-04 Thread Harald Dunkel

Hi folks,

since I moved from lxc 4.0.4 to 4.0.6 I get

# echo 0 >/proc/sys/net/ipv4/ip_forward
bash: /proc/sys/net/ipv4/ip_forward: Read-only file system

in the container. The man page says

   lxc.mount.auto
          Specify which standard kernel file systems should be
          automatically mounted. This may dramatically simplify
          the configuration. The file systems are:

          o proc:mixed (or proc): mount /proc as read-write, but
            remount /proc/sys and /proc/sysrq-trigger read-only
            for security / container isolation purposes.

          o proc:rw: mount /proc as read-write
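Given the man-page text above, one hedged workaround sketch (an assumption, not a recommendation: proc:rw removes the read-only protection of /proc/sys that proc:mixed provides, so it only makes sense for trusted, privileged containers) would be to set in the affected container's config:

```
# Assumption: the container otherwise uses the distribution default of
# proc:mixed; switching to proc:rw keeps /proc/sys writable.
lxc.mount.auto = proc:rw
```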

How come it worked before? Hopefully I am not too blind to see,
but the git log doesn't indicate that this behavior was changed.


Every indication of wisdom and knowledge shown here is highly
appreciated

Harri


Re: [lxc-users] LXC Memory LImits

2020-11-09 Thread Harald Dunkel


On 11/4/20 11:30 AM, Atif Ghaffar wrote:


I find this document useful for resource limits.

https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/ 




Very helpful indeed, but since we are at lxd 4.7 now, I wonder
whether this blog series could be updated?


Regards
Harri


Re: [lxc-users] lxc stop ignored

2020-09-04 Thread Harald Dunkel

Hi Aleksandar,

I've found some old README for configuring RedHat LXC containers:

# create an upstart handler for SIGPWR
cat 

Re: [lxc-users] ghost services on LXC containers

2020-08-13 Thread Harald Dunkel

On 8/13/20 12:32 PM, Fajar A. Nugraha wrote:

Try (two times, once inside the container, once inside the host):
- cat /proc/self/cgroup
- ls -la /proc/self/ns


On the host:

root@il08:~# cat /proc/self/cgroup
13:name=systemd:/
12:rdma:/
11:pids:/
10:perf_event:/
9:net_prio:/
8:net_cls:/
7:memory:/
6:freezer:/
5:devices:/
4:cpuset:/
3:cpuacct:/
2:cpu:/
1:blkio:/
0::/
root@il08:~# ls -la /proc/self/ns
total 0
dr-x--x--x 2 root root 0 Aug 13 12:40 .
dr-xr-xr-x 9 root root 0 Aug 13 12:40 ..
lrwxrwxrwx 1 root root 0 Aug 13 12:40 cgroup -> 'cgroup:[4026531835]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 ipc -> 'ipc:[4026531839]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 mnt -> 'mnt:[4026531840]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 net -> 'net:[4026531992]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 pid -> 'pid:[4026531836]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 pid_for_children -> 'pid:[4026531836]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 time -> 'time:[4026531834]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 time_for_children -> 'time:[4026531834]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 user -> 'user:[4026531837]'
lrwxrwxrwx 1 root root 0 Aug 13 12:40 uts -> 'uts:[4026531838]'


Entering the container:

root@il08:~# lxc-attach -n il02
root@il02:~# cat /proc/self/cgroup
13:name=systemd:/
12:rdma:/
11:pids:/
10:perf_event:/
9:net_prio:/
8:net_cls:/
7:memory:/
6:freezer:/
5:devices:/
4:cpuset:/
3:cpuacct:/
2:cpu:/
1:blkio:/
0::/
root@il02:~# ls -la /proc/self/ns
total 0
dr-x--x--x 2 root root 0 Aug 13 12:42 .
dr-xr-xr-x 9 root root 0 Aug 13 12:42 ..
lrwxrwxrwx 1 root root 0 Aug 13 12:42 cgroup -> 'cgroup:[4026532376]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 ipc -> 'ipc:[4026532313]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 mnt -> 'mnt:[4026532311]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 net -> 'net:[4026532316]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 pid -> 'pid:[4026532314]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 pid_for_children -> 'pid:[4026532314]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 time -> 'time:[4026531834]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 time_for_children -> 'time:[4026531834]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 user -> 'user:[4026531837]'
lrwxrwxrwx 1 root root 0 Aug 13 12:42 uts -> 'uts:[4026532312]'


I am not sure what this is trying to tell me, though. Is this the same
hierarchy? And would you agree that this is really a bad thing to do?
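For what it's worth, the namespace IDs pasted above can be compared directly; a minimal sketch using only the values from this thread:

```shell
# Namespace IDs copied from the two transcripts above.
host='cgroup:[4026531835]'   # host /proc/self/ns/cgroup
ctr='cgroup:[4026532376]'    # container /proc/self/ns/cgroup
if [ "$host" = "$ctr" ]; then
    echo "same cgroup namespace"
else
    echo "different cgroup namespaces"
fi
```

So the container does have its own cgroup namespace here; whether both sides still write into the same v1 controller hierarchy is a separate question.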

Harri



Re: [lxc-users] ghost services on LXC containers

2020-08-13 Thread Harald Dunkel

On 8/13/20 9:02 AM, Harald Dunkel wrote:


# cat /sys/fs/cgroup/unified/system.slice/zabbix-agent.service/cgroup.procs
0
0
0
0
0
0


PID 0 is not valid here, AFAICT. And zabbix-agent isn't even installed
in my container. It's installed on the host only.



PS:
Lennart Poettering wrote about this:

Is it possible the container and the host run in the very same cgroup
hierarchy?

If that's the case (and it looks like it): this is not
supported. Please file a bug against LXC, it's very clearly broken.

(https://lists.freedesktop.org/archives/systemd-devel/2020-August/045022.html)


I would be highly interested in your thoughts about this.

Harri


[lxc-users] ghost services on LXC containers

2020-08-13 Thread Harald Dunkel

Hi folks,

using Debian 10 and lxc 4.0.2 (or 4.0.4) I found ghost services in my
containers. Sample:

# cat /sys/fs/cgroup/unified/system.slice/cron.service/cgroup.procs
50
0

# cat /sys/fs/cgroup/unified/system.slice/dbus.service/cgroup.procs
48
0

# cat /sys/fs/cgroup/unified/system.slice/zabbix-agent.service/cgroup.procs
0
0
0
0
0
0


PID 0 is not valid here, AFAICT. And zabbix-agent isn't even installed
in my container. It's installed on the host only.
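As a reproduction aid, here is a minimal sketch (the helper name and the hybrid /sys/fs/cgroup/unified layout are assumptions taken from this thread) that lists service cgroups containing the invalid PID 0:

```shell
# List service cgroups whose cgroup.procs contains a literal 0,
# as in the zabbix-agent example above.
ghost_cgroups() {   # usage: ghost_cgroups /sys/fs/cgroup/unified
    for f in "$1"/system.slice/*/cgroup.procs; do
        [ -e "$f" ] || continue
        grep -qx 0 "$f" && echo "${f%/cgroup.procs}"
    done
}
# e.g.: ghost_cgroups /sys/fs/cgroup/unified
```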

Can anybody reproduce this? See also

https://lists.freedesktop.org/archives/systemd-devel/2020-August/044999.html
https://bugs.debian.org/968049


Every insightful comment is highly appreciated
Harri


[lxc-users] I/O error for logrotate in LXC

2020-06-17 Thread Harald Dunkel

Hi folks,

on some LXC containers I get I/O errors for the logrotate
service (systemd):

root@il04:~# systemctl status logrotate
* logrotate.service - Rotate log files
   Loaded: loaded (/lib/systemd/system/logrotate.service; static; vendor 
preset: enabled)
   Active: failed (Result: exit-code) since Wed 2020-06-17 00:00:11 CEST; 15h 
ago
 Docs: man:logrotate(8)
   man:logrotate.conf(5)
  Process: 359762 ExecStart=/usr/sbin/logrotate /etc/logrotate.conf 
(code=exited, status=1/FAILURE)
 Main PID: 359762 (code=exited, status=1/FAILURE)

Jun 17 00:00:00 il04.ac.aixigo.de systemd[1]: Starting Rotate log files...
Jun 17 00:00:11 il04.ac.aixigo.de logrotate[359762]: Failed to kill unit 
rsyslog.service: Input/output error
Jun 17 00:00:11 il04.ac.aixigo.de logrotate[359762]: error: error running 
non-shared postrotate script for /var/log/syslog of '/var/log/syslog
Jun 17 00:00:11 il04.ac.aixigo.de logrotate[359762]: '
Jun 17 00:00:11 il04.ac.aixigo.de systemd[1]: logrotate.service: Main process 
exited, code=exited, status=1/FAILURE
Jun 17 00:00:11 il04.ac.aixigo.de systemd[1]: logrotate.service: Failed with 
result 'exit-code'.
Jun 17 00:00:11 il04.ac.aixigo.de systemd[1]: Failed to start Rotate log files.

root@il04:~# ps -ef | grep rsyslog
root  146014   1  0 Jun08 ?00:00:06 /usr/sbin/rsyslogd -n -iNONE
root  377934  377898  0 15:09 pts/700:00:00 grep rsyslog
root@il04:~# kill -HUP 146014
root@il04:~# echo $?
0
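Since the direct kill -HUP above succeeds while systemd's "Failed to kill unit" path does not, one hedged workaround sketch is to bypass systemd in the postrotate script. The pid-file path below is an assumption; check where your rsyslogd actually writes its PID:

```
/var/log/syslog {
    postrotate
        # assumption: rsyslogd writes its PID to /run/rsyslogd.pid on Debian 10
        kill -HUP "$(cat /run/rsyslogd.pid)" 2>/dev/null || true
    endscript
}
```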


Host is Debian 10, LXC is version 4.0.2.
The client runs Debian 10 as well.

This doesn't happen on all LXC clients and hosts, but I haven't
detected a pattern yet. On my "real" hosts there are no I/O errors,
AFAICT.


Regards
Harri


[lxc-users] lxc.kmsg needed for official version 3.0

2020-03-27 Thread Harald Dunkel

Hi folks,

apparently there is some text about lxc.kmsg commented out in
doc/ko/lxc.container.conf.sgml.in. What happened to this feature?

AFAICT lxc.kmsg = 1 is needed to run kubelet in lxc. Currently
I am using

# special settings for rke
lxc.cgroup.devices.allow = a
lxc.mount.auto = proc:rw sys:rw
lxc.cap.drop =

in my config files, but this is not sufficient. kubelet dies with

kubelet.go:1413] failed to start OOM watcher open /dev/kmsg: no such file or directory
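For reference, a hedged sketch of one commonly used substitute for the removed lxc.kmsg option: bind-mounting the host's /dev/kmsg into the container so the OOM watcher can open it. This exposes the host kernel log and is only reasonable for trusted, privileged containers:

```
# Assumption: modern LXC without lxc.kmsg support; create=file makes the
# target node if it does not exist in the container's /dev.
lxc.mount.entry = /dev/kmsg dev/kmsg none bind,create=file 0 0
```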


Every insightful comment is highly appreciated
Harri


Re: [lxc-users] java11 vs memory.limit_in_bytes

2020-02-29 Thread Harald Dunkel

Some new findings: strace shows that java reads

/sys/fs/cgroup/memory/user.slice/user-0.slice/session-c254.scope/memory.limit_in_bytes

instead of

/sys/fs/cgroup/memory/memory.limit_in_bytes


root@debian10:~# cat 
/sys/fs/cgroup/memory/user.slice/user-0.slice/session-c254.scope/memory.limit_in_bytes
9223372036854771712
root@debian10:~# cat /sys/fs/cgroup/memory/memory.limit_in_bytes
4294967296


user.slice and system.slice are part of systemd, AFAIK. Is systemd
to blame here, at least for spreading confusion?
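A quick sanity check of the two numbers above (values copied from this thread): the session-scope file holds the cgroup v1 "no limit" sentinel, while the top-level file holds the real container limit.

```shell
nolimit=9223372036854771712   # session scope: 2^63 rounded down to a 4 KiB page
limit=4294967296              # top level: the actual container limit
echo "$((limit / 1024 / 1024 / 1024)) GiB"   # prints "4 GiB"
```

So Java is effectively reading an "unlimited" value from the session scope; the puzzle is which cgroup file the JVM resolves for its own process.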


Harri


[lxc-users] java11 vs memory.limit_in_bytes

2020-02-28 Thread Harald Dunkel

Hi folks,

according to some notes on the net (e.g. [1]) openjdk11 is aware of
the container limits for calculating default heap size and some
other internal parameters. Apparently this works very well for
Docker. Sample:

# docker run -ti --rm --cpus 2 -m 4G debian
root@c526096eb86e:/# apt update; apt install -y default-jdk
:
:
done.
done.
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.38.1+dfsg-1) ...
root@c526096eb86e:/# cat /sys/fs/cgroup/memory/memory.limit_in_bytes
4294967296
root@c526096eb86e:/# cat /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes
8589934592
root@c526096eb86e:/# java -XX:MaxRAMPercentage=20.0 -XX:MinRAMPercentage=10.0 
-XX:+PrintFlagsFinal -version | egrep MaxHeap\|RAMPercentage\|Container
   double InitialRAMPercentage = 1.562500   
   {product} {default}
uintx MaxHeapFreeRatio = 70 
{manageable} {default}
   size_t MaxHeapSize  = 859832320  
   {product} {ergonomic}
   double MaxRAMPercentage = 20.00  
   {product} {command line}
   double MinRAMPercentage = 10.00  
   {product} {command line}
 bool PreferContainerQuotaForCPUCount  = true   
   {product} {default}
 bool UseContainerSupport  = true   
   {product} {default}
openjdk version "11.0.6" 2020-01-14
OpenJDK Runtime Environment (build 11.0.6+10-post-Debian-1deb10u1)
OpenJDK 64-Bit Server VM (build 11.0.6+10-post-Debian-1deb10u1, mixed mode, 
sharing)


Check the MaxHeapSize. 20% of 4 GByte, as requested.

For an LXC container running Debian and the same OpenJDK I get:


root@debian10:~# cat /sys/fs/cgroup/memory/memory.limit_in_bytes
4294967296
root@debian10:~# cat /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes
4294967296
root@debian10:~# java -XX:MaxRAMPercentage=20.0 -XX:MinRAMPercentage=10.0 
-XX:+PrintFlagsFinal -version | egrep MaxHeap\|RAMPercentage\|Container
   double InitialRAMPercentage = 1.562500   
   {product} {default}
uintx MaxHeapFreeRatio = 70 
{manageable} {default}
   size_t MaxHeapSize  = 13511950336
   {product} {ergonomic}
   double MaxRAMPercentage = 20.00  
   {product} {command line}
   double MinRAMPercentage = 10.00  
   {product} {command line}
 bool PreferContainerQuotaForCPUCount  = true   
   {product} {default}
 bool UseContainerSupport  = true   
   {product} {default}
openjdk version "11.0.6" 2020-01-14
OpenJDK Runtime Environment (build 11.0.6+10-post-Debian-1deb10u1)
OpenJDK 64-Bit Server VM (build 11.0.6+10-post-Debian-1deb10u1, mixed mode, 
sharing)


memory.limit_in_bytes is 4 GByte, but Java gives me a MaxHeapSize of
0.20 * 64GByte.
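Back-calculating from the MaxHeapSize above supports that reading; only the value from this thread is assumed:

```shell
max_heap=13511950336               # MaxHeapSize reported by the JVM above
host_ram=$((max_heap * 5))         # heap was 20%, so RAM = heap / 0.20
echo "$((host_ram / 1024 / 1024 / 1024)) GiB"   # prints "62 GiB"
```

Roughly 63 GiB, i.e. the JVM sized its heap from the host's physical memory rather than from the 4 GiB cgroup limit.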

Of course I don't want to blame you for bugs in OpenJDK, but I wonder
how this comes about. Are there some mysterious parameters set in a
Docker container, but missing in LXC?


Every insightful comment is highly appreciated
Harri


1: 
https://stackoverflow.com/questions/54292282/clarification-of-meaning-new-jvm-memory-parameters-initialrampercentage-and-minr/54297753#54297753


Re: [lxc-users] Help needed: lxc unpriv. containers and debian buster sysvinit

2020-02-26 Thread Harald Dunkel

On 2020-02-24 14:34, mlftp@pep.foundation wrote:


It might suffice to just mount the cgroups all together under
/sys/fs/cgroup/all instead of /sys/fs/cgroup.


Yes, and with debian it works with the cgroupfs-mount package.



Do you need cgmanager as well? Is the functionality of
cgroupfs-mount included in cgmanager? Are they compatible
to systemd?

I think there is a lot of confusion about cgmanager, cgroupfs-
mount, cgroup-tools, libpam-cgroup and systemd's cgroup support/
requirements, not to mention cgroup vs cgroup2.

Every insightful comment is highly appreciated
Harri


[lxc-users] new tag 3.2.2?

2019-12-23 Thread Harald Dunkel

Hi folks,

I see big progress on the lxc master branch every day, but I wonder if
there is a schedule for a tagged version 3.2.2? Something that could
be used in production?


Regards and best season's greetings

Harri


[lxc-users] 10min lxd shutdown penalty

2019-10-31 Thread Harald Dunkel

Hi folks,

apparently lxd doesn't properly terminate at shutdown/reboot time,
even though there are no containers installed. The shutdown
procedure is delayed for 10 minutes. Last words:

A stop job is running for Service for snap application lxd.daemon

This is *highly* painful.

Platform is Debian 10. Is there a similar delay on Ubuntu 19.10
or others? Any reasonable way to avoid the 10 minute penalty for
rebooting? Reducing the timeout would be considered cheating.
;-)

Of course I found https://github.com/lxc/lxd/issues/4277, but the
issue was not resolved. Today snap is the only supported way to
install lxd binaries, AFAIU, so I would highly appreciate any helpful
comment.

Harri


[lxc-users] suspicious output for "lxc profile device add --help"

2019-10-24 Thread Harald Dunkel

Hi folks,

this looks weird:


# lxc profile device add --help
Description:
  Add devices to containers or profiles

Usage:
  lxc profile device add [<remote>:]<profile> <device> <type> [key=value...] [flags]

Examples:
  lxc config device add [<remote>:]container1 <device-name> disk source=/share/c1 path=opt
  Will mount the host's /share/c1 onto /opt in the container.
:
:


Note the "lxc config" instead of "lxc profile". Probably copied
from there and then forgotten.

lxd is version 3.18


Regards
Harri


[lxc-users] minimum permissions to run Docker in LXC?

2019-09-16 Thread Harald Dunkel

Hi folks,

I found https://github.com/lxc/lxd/issues/4902, but the configuration
suggested there seems overly permissive for running Docker in LXC (IMHO).

Has anybody tried less open permissions than

lxc.cgroup.devices.allow = a
lxc.mount.auto = proc:rw sys:rw
lxc.cap.drop =

What are your suggestions?


Every helpful comment is highly appreciated.
Harri



Re: [lxc-users] future of lxc/lxd? snap?

2019-02-26 Thread Harald Dunkel

On 2/25/19 11:20 AM, Stéphane Graber wrote:
> snapd + LXD work fine on CentOS 7, it's even in our CI environment, so
> presumably the same steps should work on RHEL 7.
>
Apparently it doesn't work that well:

[root@centos7 ~]# yum install snapd
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: ftp.halifax.rwth-aachen.de
 * extras: ftp.halifax.rwth-aachen.de
 * updates: mirror.infonline.de
No package snapd available.
Error: Nothing to do

Of course I found some howtos on the net (e.g.
https://computingforgeeks.com/install-snapd-snap-applications-centos-7/),
but that's not the point. The point is to integrate LXD without 3rd-party
tools that are difficult to find and install on their own.

Surely I don't blame you for the not-invented-here approach of others, but
LXD appears to be difficult to build or integrate, even on native Debian.


Regards
Harri


Re: [lxc-users] future of lxc/lxd? snap?

2019-02-25 Thread Harald Dunkel

On 2/25/19 4:52 AM, Fajar A. Nugraha wrote:


snapcraft.io  is also owned by Canonical.

By using the lxd snap, they can easily have lxd running on any distro that
already supports snaps, without having to maintain separate packages.



The problem is that there is no standard for all "major" distros,
as this discussion shows:

https://www.reddit.com/r/redhat/comments/9lbm0c/snapd_for_rhel/

Debian already has an excellent packaging scheme. The RPM world
doesn't follow snapd, as it seems. And if you prefer your favorite
tool inside a container you can find docker images everywhere.

A few years ago compatibility was achieved on source code level.
Sorry to say, but you lost that for lxd. And snaps are not a
replacement.


Just my own $0.02. Regards
Harri


[lxc-users] lxc-checkconfig improvement?

2018-12-14 Thread Harald Dunkel

Hi folks,

lxc-checkconfig tells me

:
--- Misc ---
Veth pair device: enabled, loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: enabled, loaded
CONFIG_NF_NAT_IPV6: enabled, loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, loaded
:

It would help a lot if lxc-checkconfig could show the kernel
module names as well. That would make it easier to edit /etc/modules.
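The suggestion could be sketched like this: map a kernel config option to the module name one would put into /etc/modules. The mappings below are the usual Debian ones (an assumption for illustration; lxc-checkconfig itself prints none of them):

```shell
# Resolve a kernel CONFIG_ option to its loadable-module name.
module_for() {
    case "$1" in
        CONFIG_IP_NF_TARGET_MASQUERADE)      echo ipt_MASQUERADE ;;
        CONFIG_NETFILTER_XT_TARGET_CHECKSUM) echo xt_CHECKSUM ;;
        CONFIG_NETFILTER_XT_MATCH_COMMENT)   echo xt_comment ;;
        *)                                   echo unknown ;;
    esac
}
module_for CONFIG_NETFILTER_XT_MATCH_COMMENT   # prints "xt_comment"
```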


Just a suggestion, of course. Keep on your good work.


Regards
Harri


Re: [lxc-users] LXC 3.0: Removal of cgmanager And cgfs cgroup Drivers

2018-02-20 Thread Harald Dunkel

Does this mean that lxc 3.0 is systemd-only?

Regards
Harri

Re: [lxc-users] lxcfs removed by accident, how to recover?

2018-02-06 Thread Harald Dunkel

On 02/02/18 11:53, Stéphane Graber wrote:


lxcfs is used for both privileged and unprivileged containers, without
it you'd see the host uptime, host set of CPUs, host memory, ...



Wouldn't you agree that this is cgroup stuff and should be provided
by the kernel, similar to /proc/mounts and others? Using FUSE here is
just asking for trouble (IMHO).

I won't install it again, but AFAICR lxcfs gave me wrong (future)
start times (STIME) in the output of "ps -ef". The containers without
lxcfs were fine. Is this a known issue?


Regards
Harri

Re: [lxc-users] lxcfs removed by accident, how to recover?

2018-02-02 Thread Harald Dunkel

Hi Stéphane,

On 01/30/18 17:17, Stéphane Graber wrote:


Yeah, there's effectively no way to re-inject those mounts inside a
running container.

So you're going to need to restart those containers.
Until then, you can "umount" the various lxcfs files from within the
container so that rather than a complete failure to access those files,
you just get the non-namespaced version of the file.



AFAICS lxcfs is useful only for unprivileged containers. All my affected
containers were privileged. I didn't ask for lxcfs, but it was used
automatically, so I wonder how I can forbid lxcfs to be used for these
containers? Do I have to deinstall lxcfs completely?


Regards
Harri

Re: [lxc-users] lxcfs removed by accident, how to recover?

2018-01-30 Thread Harald Dunkel
On 01/30/18 18:24, Harald Dunkel wrote:
> On 01/30/18 17:17, Stéphane Graber wrote:
>>
>> So you're going to need to restart those containers.
>> Until then, you can "umount" the various lxcfs files from within the
>> container so that rather than a complete failure to access those files,
>> you just get the non-namespaced version of the file.
>>
> 
> That's more than I hoped for. It allows me to do a clean shutdown.
> Very helpful response.
> 

PS: I could umount most lxcfs items, but it seems systemd keeps /proc/swaps
busy. lxc-stop complained for these containers:

lxc-stop 20180130193610.338 ERRORlxc_commands_utils - 
commands_utils.c:lxc_cmd_sock_rcv_state:71 - failed to receive message: 
Resource temporarily unavailable
lxc-stop 20180130193610.544 ERRORlxc_commands - 
commands.c:lxc_cmd_rsp_recv:157 - Command stop response data 1785884787 too 
long.

The containers using sysvinit were fine (no flamewar, please).


Regards
Harri

Re: [lxc-users] lxcfs removed by accident, how to recover?

2018-01-30 Thread Harald Dunkel
On 01/30/18 17:17, Stéphane Graber wrote:
> 
> So you're going to need to restart those containers.
> Until then, you can "umount" the various lxcfs files from within the
> container so that rather than a complete failure to access those files,
> you just get the non-namespaced version of the file.
> 

That's more than I hoped for. It allows me to do a clean shutdown.
Very helpful response.
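For anyone in the same situation, the umount advice above can be sketched like this (run as root inside the affected container; lazy unmount is used because some entries such as /proc/swaps can stay busy):

```shell
# List the lxcfs-backed mount points from a mounts table so they can be
# unmounted one by one inside the container.
lxcfs_mounts() {   # usage: lxcfs_mounts /proc/mounts
    awk '$1 == "lxcfs" {print $2}' "$1"
}
# e.g.: for m in $(lxcfs_mounts /proc/mounts); do umount -l "$m"; done
```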


Thanx very much
Harri




[lxc-users] lxcfs removed by accident, how to recover?

2018-01-30 Thread Harald Dunkel

Hi folks,

I have removed the lxcfs package by accident while the containers
were still running. Now ps in the containers gives me

# ps -ef
Error: /proc must be mounted
  To mount /proc at boot you need an /etc/fstab line like:
  proc   /proc   procdefaults
  In the meantime, run "mount proc /proc -t proc"

ls -l /proc gives me some lines with ???.

Is there some way to recover without restarting the containers?
I tried to mount and remount /proc on a test system, with
and without lxcfs reinstalled.


This is lxc 2.0.9 and lxcfs 2.0.7 on Stretch.


Every helpful comment is highly appreciated
Harri

Re: [lxc-users] Hint for CentOS 7 guests in Debian stretch with KAISER/KPTI kernel

2018-01-20 Thread Harald Dunkel
On 01/11/18 17:19, Christoph Lechleitner wrote:
> Hi everybody!
> 
> After this cost me an afternoon I thought I should share the solution
> here ;-)
> 
> We are running multiple LXC hosts with Debian jessie resp. stretch,
> using sysv-init over systemd in the host system.
> 
> 99% of the guest systems are Debian too, but we also have guests with
> CentOS 6 and 7 (one each) for development.
> 
> After upgrading the host system from Debian Jessie (with kernel 4.0.x
> from jessie-backports) to Debian stretch with kernel 4.9.65-3+deb9u2
> (includes KAISER patches AKA KPTI against meltdown), our CentOS 7 guest
> were half broken.
> 

I have a similar setup. My suggestion:

If systemd is not installed on the host, then you should consider
installing the cgmanager package, together with a backport of lxc 2.0.9.
I cannot recommend adding cgroup to your /etc/fstab.


Hope this helps
Harri

[lxc-users] lxc 2.0: howto inherit ulimits from the host?

2018-01-18 Thread Harald Dunkel

Hi folks,

I am running lxc 2.0.9 on Stretch. The (privileged) container
runs Oracle Linux 7.4. Problem: I get some very restricted
ulimits in the container (e.g. nofile hard 8192), even though
the limits for root and "*" on the host are set to much higher
values. On the host the limits are fine.

If I set the expected limits in lxc1:/etc/security/limits.d/\
local.conf, then ssh to this container fails. ssh just says
"Connection closed", exit value is 254. So apparently setting
the limits in the container is not an option.

Is there some way to get around this mess? I saw that lxc 2.1
provides new lxc.prlimit config options, but AFAIU *privileged*
containers should inherit the limits and should be fine with a
local limits.conf.
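For completeness, a hedged sketch of the lxc 2.1 option mentioned above (the value is an illustrative placeholder, and the option is not available in 2.0.9):

```
# soft:hard limit for open files, set from the container config
lxc.prlimit.nofile = 65536:65536
```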

???


Every helpful comment is highly appreciated
Harri

Re: [lxc-users] debian template: cannot install openjdk-8-jre due to missing /proc

2018-01-08 Thread Harald Dunkel

On 12/18/17 15:53, Benjamin Asbach wrote:

For me it looks like a bug. I guess a github issue would be the right place to 
do the evaluation on that problem.



Found it, it seems to be https://github.com/lxc/lxc/issues/384

Thanx for your pointer
Harri

[lxc-users] debian template: cannot install openjdk-8-jre due to missing /proc

2017-12-31 Thread Harald Dunkel

Hi folks,

if I try to add openjdk-8-jre on the lxc-create command line,
then it complains about a missing /proc file system:

# lxc-create -t debian -n sample01 -- -r stretch --packages=openjdk-8-jre
:
:
Setting up ca-certificates-java (20170531+nmu1) ...
the keytool command requires a mounted proc fs (/proc).
dpkg: error processing package ca-certificates-java (--configure):
  subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of openjdk-8-jre-headless:amd64:
  openjdk-8-jre-headless:amd64 depends on ca-certificates-java; however:
   Package ca-certificates-java is not configured yet.
:
:
the keytool command requires a mounted proc fs (/proc).
E: /etc/ca-certificates/update.d/jks-keystore exited with code 1.
done.
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.36.5-2+deb9u1) ...
Errors were encountered while processing:
  ca-certificates-java
  openjdk-8-jre-headless:amd64
  openjdk-8-jre:amd64
W: --force-yes is deprecated, use one of the options starting with --allow 
instead.
E: Sub-process /usr/bin/dpkg returned an error code (1)


Full log is attached.

I saw this mentioned before, but obviously it's still unresolved.
Not being able to install Java is *highly* painful. Shouldn't there
be some policy-rc.d script installed to tell Debian's postinst
scripts to avoid this problem?

lxc is version 2.0.9, platform is Stretch.


Every helpful comment is highly appreciated
Harri

debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-stretch-amd64 ... 
Downloading debian minimal ...
I: Retrieving InRelease 
I: Retrieving Release 
I: Retrieving Release.gpg 
I: Checking Release signature
I: Valid Release signature (key id 067E3C456BAE240ACEE88F6FEF0F382A1A7B6500)
I: Retrieving Packages 
I: Validating Packages 
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Found additional required dependencies: libaudit-common libaudit1 libbz2-1.0 
libcap-ng0 libdb5.3 libdebconfclient0 libgcrypt20 libgpg-error0 liblz4-1 
libncursesw5 libsemanage-common libsemanage1 libsystemd0 libudev1 libustr-1.0-1 
I: Found additional base dependencies: adduser debian-archive-keyring dmsetup 
gpgv iproute2 libapparmor1 libapt-pkg5.0 libbsd0 libc-l10n libcap2 
libcryptsetup4 libdevmapper1.02.1 libdns-export162 libedit2 libelf1 
libgssapi-krb5-2 libidn11 libip4tc0 libisc-export160 libk5crypto3 libkeyutils1 
libkmod2 libkrb5-3 libkrb5support0 libmnl0 libncurses5 libprocps6 libseccomp2 
libssl1.0.2 libstdc++6 libwrap0 openssh-client openssh-sftp-server procps 
systemd systemd-sysv ucf 
I: Checking component main on http://deb.debian.org/debian...
I: Retrieving libacl1 2.2.52-3+b1
I: Validating libacl1 2.2.52-3+b1
I: Retrieving adduser 3.115
I: Validating adduser 3.115
I: Retrieving libapparmor1 2.11.0-3
I: Validating libapparmor1 2.11.0-3
I: Retrieving apt 1.4.8
I: Validating apt 1.4.8
I: Retrieving libapt-pkg5.0 1.4.8
I: Validating libapt-pkg5.0 1.4.8
I: Retrieving libattr1 1:2.4.47-2+b2
I: Validating libattr1 1:2.4.47-2+b2
I: Retrieving libaudit-common 1:2.6.7-2
I: Validating libaudit-common 1:2.6.7-2
I: Retrieving libaudit1 1:2.6.7-2
I: Validating libaudit1 1:2.6.7-2
I: Retrieving base-files 9.9+deb9u3
I: Validating base-files 9.9+deb9u3
I: Retrieving base-passwd 3.5.43
I: Validating base-passwd 3.5.43
I: Retrieving bash 4.4-5
I: Validating bash 4.4-5
I: Retrieving libdns-export162 1:9.10.3.dfsg.P4-12.3+deb9u3
I: Validating libdns-export162 1:9.10.3.dfsg.P4-12.3+deb9u3
I: Retrieving libisc-export160 1:9.10.3.dfsg.P4-12.3+deb9u3
I: Validating libisc-export160 1:9.10.3.dfsg.P4-12.3+deb9u3
I: Retrieving libbz2-1.0 1.0.6-8.1
I: Validating libbz2-1.0 1.0.6-8.1
I: Retrieving libdebconfclient0 0.227
I: Validating libdebconfclient0 0.227
I: Retrieving coreutils 8.26-3
I: Validating coreutils 8.26-3
I: Retrieving libcryptsetup4 2:1.7.3-4
I: Validating libcryptsetup4 2:1.7.3-4
I: Retrieving dash 0.5.8-2.4
I: Validating dash 0.5.8-2.4
I: Retrieving libdb5.3 5.3.28-12+deb9u1
I: Validating libdb5.3 5.3.28-12+deb9u1
I: Retrieving debconf 1.5.61
I: Validating debconf 1.5.61
I: Retrieving debian-archive-keyring 2017.5
I: Validating debian-archive-keyring 2017.5
I: Retrieving debianutils 4.8.1.1
I: Validating debianutils 4.8.1.1
I: Retrieving dialog 1.3-20160828-2
I: Validating dialog 1.3-20160828-2
I: Retrieving diffutils 1:3.5-3
I: Validating diffutils 1:3.5-3
I: Retrieving dpkg 1.18.24
I: Validating dpkg 1.18.24
I: Retrieving e2fslibs 1.43.4-2
I: Validating e2fslibs 1.43.4-2
I: Retrieving e2fsprogs 1.43.4-2
I: Validating e2fsprogs 1.43.4-2
I: Retrieving libcomerr2 1.43.4-2
I: Validating libcomerr2 1.43.4-2
I: Retrieving libss2 1.43.4-2
I: Validating libss2 1.43.4-2
I: Retrieving libelf1 0.168-1
I: Validating libelf1 0.168-1
I: Retrieving findutils 4.6.0+git+20161106-2
I: Validating findutils 4.6.0+git+20161106-2
I: Retrieving gcc-6-base 6.3.0-18
I: Validating gcc-6-base 6.3.0-18
I: Retrieving 

[lxc-users] debian template: cannot install openjdk-8-jre due to missing /proc

2017-12-18 Thread Harald Dunkel

Hi folks,

if I try to add openjdk-8-jre on the lxc-create command line,
then it complains about a missing /proc file system:

# lxc-create -t debian -n sample01 -- -r stretch --packages=openjdk-8-jre
:
:
Setting up ca-certificates-java (20170531+nmu1) ...
the keytool command requires a mounted proc fs (/proc).
dpkg: error processing package ca-certificates-java (--configure):
   subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of openjdk-8-jre-headless:amd64:
   openjdk-8-jre-headless:amd64 depends on ca-certificates-java; however:
Package ca-certificates-java is not configured yet.
:
:
the keytool command requires a mounted proc fs (/proc).
E: /etc/ca-certificates/update.d/jks-keystore exited with code 1.
done.
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.36.5-2+deb9u1) ...
Errors were encountered while processing:
   ca-certificates-java
   openjdk-8-jre-headless:amd64
   openjdk-8-jre:amd64
W: --force-yes is deprecated, use one of the options starting with --allow 
instead.
E: Sub-process /usr/bin/dpkg returned an error code (1)


Full log is attached.

I saw this problem mentioned before, but obviously it's still
unresolved. Not being able to install Java is *highly* painful.
Would it be possible to add a policy-rc.d script that tells Debian's
postinst scripts to back off, to avoid this problem?
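For reference, a policy-rc.d can be dropped into the container's rootfs so that invoke-rc.d refuses to start services while packages are installed. A minimal sketch, with /tmp/rootfs standing in for the container rootfs (note that this gates init-script actions, so whether it also avoids the keytool//proc failure is uncertain, since that comes from a postinst helper rather than a service start):

```sh
# Install a policy-rc.d that forbids all service starts.
# /tmp/rootfs is a placeholder for the container's rootfs.
ROOTFS=/tmp/rootfs
mkdir -p "$ROOTFS/usr/sbin"
cat > "$ROOTFS/usr/sbin/policy-rc.d" <<'EOF'
#!/bin/sh
# Exit code 101 means "action forbidden by policy" to invoke-rc.d,
# so maintainer scripts skip starting/stopping services.
exit 101
EOF
chmod +x "$ROOTFS/usr/sbin/policy-rc.d"
```

Removing the file again after the installation restores normal service handling.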

lxc is version 2.0.9, platform is Stretch.


Every helpful comment is highly appreciated
Harri


sample01.txt.gz
Description: application/gzip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Race condition in IPv6 network configuration

2017-11-09 Thread Harald Dunkel
I would suggest configuring the network in /config 
(including the default route) before the container is even 
started, for both IPv4 and IPv6.

Your /etc/network/interfaces should be empty. /etc/resolv.conf 
has to be set up accordingly.
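For lxc 2.x, that static setup can be sketched in the container config along these lines (bridge name and addresses are placeholders; in lxc 3.0+ these keys were renamed to lxc.net.0.*):

```
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.0.2.10/24
lxc.network.ipv4.gateway = 192.0.2.1
lxc.network.ipv6 = 2001:db8::10/64
lxc.network.ipv6.gateway = 2001:db8::1
```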


Regards
Harri

[lxc-users] is memory.limit_in_bytes inherited by nested cgroups?

2017-07-21 Thread Harald Dunkel
Hi folks,

I have to restrict lxc.cgroup.memory.limit_in_bytes to 16 GByte
for the containers. Problem: New systems based on Stretch show

% for i in $(find /sys/fs/cgroup/memory/lxc/lxc1 -name memory.limit_in_bytes); do \
    echo $i $(cat $i); \
  done | column -t
/sys/fs/cgroup/memory/lxc/lxc1/memory.limit_in_bytes                17179869184
/sys/fs/cgroup/memory/lxc/lxc1/user.slice/memory.limit_in_bytes     9223372036854771712
/sys/fs/cgroup/memory/lxc/lxc1/init.scope/memory.limit_in_bytes     9223372036854771712
/sys/fs/cgroup/memory/lxc/lxc1/system.slice/memory.limit_in_bytes   9223372036854771712


Does this mean that the nested memory cgroups are
unrestricted? I had hoped some inheritance was in place,
making it impossible for the containers to override the
memory restriction. Is it?
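For what it's worth, the v1 memory controller enforces limits hierarchically: memory charged in a child cgroup is also charged to every ancestor, so the effective limit of a nested cgroup is the smallest memory.limit_in_bytes on the path to the root, regardless of the huge defaults shown in the children. A sketch that computes this (paths are examples):

```sh
# Effective memory limit of a cgroup = minimum memory.limit_in_bytes
# from the cgroup directory up to the hierarchy root.
effective_limit() {
    dir=$1 root=$2 min=
    while :; do
        l=$(cat "$dir/memory.limit_in_bytes")
        if [ -z "$min" ] || [ "$l" -lt "$min" ]; then
            min=$l
        fi
        [ "$dir" = "$root" ] && break   # reached the controller mount point
        [ "$dir" = "/" ] && break       # safety stop
        dir=$(dirname "$dir")
    done
    echo "$min"
}

# Example on a real host:
#   effective_limit /sys/fs/cgroup/memory/lxc/lxc1/system.slice /sys/fs/cgroup/memory
```

Run against the tree above, the system.slice child would still report the 16 GByte container limit as effective, not its own 9223372036854771712 default.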


Every helpful comment is highly appreciated
Harri

Re: [lxc-users] lxc 2.0.7: sysvinit on the host breaks systemd based containers

2017-05-18 Thread Harald Dunkel
On 05/16/17 13:09, Harald Dunkel wrote:
> 
> I did several tests with LXC 2.0.8 on the host and systemd on the
> client: Both systemd 215-17+deb8u7 (Debian 8) and systemd 230-7~bpo8+2
> (Debian 8 backport) show this problem. cgroupfs-mount 1.3 is installed.
> 
> If I ditch LXC 2.x and cgroupfs-mount and use LXC 1.1.5 and /cgroup
> instead, then the same systemd-based container boots without problems.
> 
> If I install cgroupfs-mount again, then the systemd-based container
> still works.
> 
> If I remove /cgroup from /etc/fstab (still using cgroupfs-mount), then
> the systemd-based container still works.
> 
> Of course the host was rebooted between each of these steps. Obviously
> the old lxc 1.1.5 is more stable here.
> 

PS:

cgmanager and systemd-shim (whatever that is) were not installed,
either. Installing cgmanager seems to resolve the problem. It's not
in lxc's dependency list, but I wonder if this tool is a must-have?


Regards
Harri

Re: [lxc-users] lxc 2.0.7: sysvinit on the host breaks systemd based containers

2017-05-16 Thread Harald Dunkel
On 05/04/17 21:00, Serge E. Hallyn wrote:
> 
> Sounds like just systemd refusing to boot because all cgroups are comounted?
> Are you sure that reverting to 1.1.5 fixes it, and it's not a newer systemd
> breaking it?
> 

I did several tests with LXC 2.0.8 on the host and systemd on the
client: Both systemd 215-17+deb8u7 (Debian 8) and systemd 230-7~bpo8+2
(Debian 8 backport) show this problem. cgroupfs-mount 1.3 is installed.

If I ditch LXC 2.x and cgroupfs-mount and use LXC 1.1.5 and /cgroup
instead, then the same systemd-based container boots without problems.

If I install cgroupfs-mount again, then the systemd-based container
still works.

If I remove /cgroup from /etc/fstab (still using cgroupfs-mount), then
the systemd-based container still works.

Of course the host was rebooted between each of these steps. Obviously
the old lxc 1.1.5 is more stable here.


Regards
Harri

Re: [lxc-users] lxc 2.0.7: sysvinit on the host breaks systemd based containers

2017-05-12 Thread Harald Dunkel
On 05/04/17 21:00, Serge E. Hallyn wrote:
> 
> It would help to ask for more debugging information from systemd, 
> 
> lxc.init_cmd = /sbin/init log_target=console log_level=debug
> 
> as well as looking at /sys/fs/cgroup in the container while systemd is
> hung.

There is just a single line:

Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted


Here is /sys/fs/cgroup:

# ls -al /sys/fs/cgroup/
total 0
drwxr-xr-x 13 root root 260 May 12 20:28 .
drwxr-xr-x  8 root root   0 May 12 20:27 ..
dr-xr-xr-x  3 root root   0 May 12 20:28 blkio
dr-xr-xr-x  3 root root   0 May 12 20:28 cpu
dr-xr-xr-x  3 root root   0 May 12 20:28 cpuacct
dr-xr-xr-x  3 root root   0 May 12 20:28 cpuset
dr-xr-xr-x  3 root root   0 May 12 20:28 devices
dr-xr-xr-x  3 root root   0 May 12 20:28 freezer
dr-xr-xr-x  3 root root   0 May 12 20:28 memory
dr-xr-xr-x  3 root root   0 May 12 20:28 net_cls
dr-xr-xr-x  3 root root   0 May 12 20:28 net_prio
dr-xr-xr-x  3 root root   0 May 12 20:28 perf_event
dr-xr-xr-x  3 root root   0 May 12 20:28 pids


Logfile is attached. Hope this helps.


Regards
Harri

  lxc-start 20170512190429.286 INFO lxc_start_ui - 
tools/lxc_start.c:main:275 - using rcfile /data1/lxc/jessie1/config
  lxc-start 20170512190429.287 WARN lxc_confile - 
confile.c:config_pivotdir:1910 - lxc.pivotdir is ignored.  It will soon become 
an error.
  lxc-start 20170512190429.287 DEBUGlxc_monitor - 
monitor.c:lxc_monitord_spawn:309 - Going to wait for pid 4297.
  lxc-start 20170512190429.288 DEBUGlxc_monitor - 
monitor.c:lxc_monitord_spawn:328 - Trying to sync with child process.
  lxc-start 20170512190429.288 INFO lxc_start - 
start.c:lxc_check_inherited:235 - Closed inherited fd: 3.
  lxc-start 20170512190429.288 INFO lxc_start - 
start.c:lxc_check_inherited:235 - Closed inherited fd: 5.
  lxc-start 20170512190429.288 DEBUGlxc_monitor - 
monitor.c:lxc_monitord_spawn:366 - Using pipe file descriptor 6 for monitord.
  lxc-start 20170512190429.293 DEBUGlxc_monitor - 
monitor.c:lxc_monitord_spawn:343 - Sucessfully synced with child process.
  lxc-start 20170512190429.293 DEBUGlxc_monitor - 
monitor.c:lxc_monitord_spawn:312 - Finished waiting on pid 4297.
  lxc-start 20170512190429.294 INFO lxc_container - 
lxccontainer.c:do_lxcapi_start:804 - Attempting to set proc title to [lxc 
monitor] /data1/lxc jessie1
  lxc-start 20170512190429.295 INFO lxc_start - 
start.c:lxc_check_inherited:235 - Closed inherited fd: 3.
  lxc-start 20170512190429.295 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - 
LSM security driver nop
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:402 - processing: .reject_force_umount  # comment 
this to allow umount -f;  not recommended.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:567 - Adding native rule for reject_force_umount 
action 0.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:251 - Setting Seccomp rule to reject force 
umounts.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:570 - Adding compat rule for reject_force_umount 
action 0.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:251 - Setting Seccomp rule to reject force 
umounts.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:402 - processing: .[all].
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:402 - processing: .kexec_load errno 1.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:567 - Adding native rule for kexec_load action 327681.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:570 - Adding compat rule for kexec_load action 327681.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:402 - processing: .open_by_handle_at errno 1.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:567 - Adding native rule for open_by_handle_at action 
327681.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:570 - Adding compat rule for open_by_handle_at action 
327681.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:402 - processing: .init_module errno 1.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:567 - Adding native rule for init_module action 
327681.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:570 - Adding compat rule for init_module action 
327681.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:402 - processing: .finit_module errno 1.
  lxc-start 20170512190429.295 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:567 - Adding native rule for 

Re: [lxc-users] lxc-start: cgroups/cgfs.c: do_setup_cgroup_limits: 2037 No such file or directory - Error setting devices.deny to a for jessie1

2017-05-12 Thread Harald Dunkel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 05/12/17 20:50, Harald Dunkel wrote:
> Hi Serge,
> 
> The host is running sysvinit. It failed with lxc 2.0.7.
> 
> I would guess the problem was related to mounting cgroup. Both /cgroup and 
> /sys/fs/cgroup were mounted via /etc/fstab, and libvirt0 and cgroupfs-mount 
> were installed, too. I have removed all except cgroupfs-\ mount and installed 
> lxc 2.0.7 from scratch. Problem gone, as it seems.
> 

PS: Except that the systemd-based containers get stuck in systemd,
of course. Different thread.


Regards
Harri

-BEGIN PGP SIGNATURE-

iQEzBAEBCAAdFiEEH2V614LbR/u1O+a1Cp4qnmbTgcsFAlkWBRQACgkQCp4qnmbT
gctOrgf/aaeFye4rGbIqTC+bRBUqf02K3HYcYAsuWIP0A+T9p4y9Tgy0TuIp3Fgv
NfExCHjJiMtE0XSSe5tVn/1X8lzZRzZOqi3ZiyiXwdbzj0l/R02x9ge8QKzTzxPL
OOiuDt6KfU7As17vbthEUIQGl8YIYQLvSGtvZf/UDGbMLutPpeDt6an/h+mriuNb
/H/5NF7vkOhWg3R5Wn0FgKjScuj0KiUiszOvuK60ysSEwxiKcyT/jGk2fPkuu4dK
aXza+xuhLQEOHwGTEkzEkqPN+1K9gbCkGm6zIwyjWdeSsXStq69lx2GL0OKSr+mj
zVAmQ7B+x3nicXqH0FnOzaJmzMtj5A==
=fNFE
-END PGP SIGNATURE-

Re: [lxc-users] lxc-start: cgroups/cgfs.c: do_setup_cgroup_limits: 2037 No such file or directory - Error setting devices.deny to a for jessie1

2017-05-12 Thread Harald Dunkel
Hi Serge,

On 05/12/17 15:59, Serge E. Hallyn wrote:
> Quoting Harald Dunkel (harald.dun...@aixigo.de):
>> Hi folks,
>>
>> my LXCs don't start anymore:
> 
> Odd, do_setup_cgroup_limits() seems to be called twice.
> 
> First time is sucessful,
> 
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.deny' set to 'a'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c *:* m'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'b *:* m'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 1:3 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 1:5 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 1:7 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 5:0 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 5:1 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 5:2 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 1:8 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 1:9 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 136:* rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 10:229 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 254:0 rm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 10:200 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 10:228 rwm'
>>   lxc-start 20170511140840.901 DEBUGlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2042 - cgroup 'devices.allow' set to 
>> 'c 10:232 rwm'
>>   lxc-start 20170511140840.901 INFO lxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2046 - cgroup has been setup
>>   lxc-start 20170511140840.930 DEBUGlxc_conf - 
>> conf.c:lxc_assign_network:3185 - move 'veth59JIW0'/'(null)' to '4419': .
>>   lxc-start 20170511140840.966 DEBUGlxc_conf - 
>> conf.c:lxc_assign_network:3185 - move 'vethTIN4QF'/'(null)' to '4419': .
> ...
> 
>>   lxc-start 20170511140841.710 ERRORlxc_cgfs - 
>> cgroups/cgfs.c:do_setup_cgroup_limits:2037 - No such file or directory - 
>> Error setting devices.deny to a for jessie1
>>   lxc-start 20170511140841.710 ERRORlxc_start - 
>> start.c:lxc_spawn:1236 - Failed to setup the devices cgroup for container 
>> "jessie1".
> 
> Second time fails.
> 
> Can you tell us the old (working) and new (nonworking) lxc versions?

The host is running sysvinit. It failed with lxc 2.0.7.

I would guess the problem was related to mounting cgroup. Both /cgroup
and /sys/fs/cgroup were mounted via /etc/fstab, and libvirt0 and
cgroupfs-mount were installed, too. I have removed all except cgroupfs-\
mount and installed lxc 2.0.7 from scratch. Problem gone, as it seems.

Thanx anyway for looking into this.


Regards
Harri


[lxc-users] lxc-start: cgroups/cgfs.c: do_setup_cgroup_limits: 2037 No such file or directory - Error setting devices.deny to a for jessie1

2017-05-11 Thread Harald Dunkel
Hi folks,

my LXCs don't start anymore:

# lxc-start -P /data1/lxc -n jessie1 -F
lxc-start: cgroups/cgfs.c: do_setup_cgroup_limits: 2037 No such file or directory - Error setting devices.deny to a for jessie1
lxc-start: start.c: lxc_spawn: 1236 Failed to setup the devices cgroup for container "jessie1".
lxc-start: start.c: __lxc_start: 1346 Failed to spawn container "jessie1".
lxc-start: tools/lxc_start.c: main: 366 The container failed to start.
lxc-start: tools/lxc_start.c: main: 370 Additional information can be obtained by setting the --logfile and --logpriority options.


All lights are on green:
# lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.9.0-0.bpo.2-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
FUSE (for use with lxcfs): enabled

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig


lxc is version 1:2.0.7-2~bpo8+1 backported to Debian 8.


Detailed log is attached, of course. Every helpful comment is highly
appreciated.

Harri
  lxc-start 20170511140840.893 INFO lxc_start_ui - tools/lxc_start.c:main:275 - using rcfile /data1/lxc/jessie1/config
  lxc-start 20170511140840.893 WARN lxc_confile - confile.c:config_pivotdir:1910 - lxc.pivotdir is ignored.  It will soon become an error.
  lxc-start 20170511140840.893 WARN lxc_start - start.c:lxc_check_inherited:238 - Inherited fd: 3.
  lxc-start 20170511140840.893 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - LSM security driver nop
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:402 - processing: .reject_force_umount  # comment this to allow umount -f;  not recommended.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:567 - Adding native rule for reject_force_umount action 0.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:do_resolve_add_rule:251 - Setting Seccomp rule to reject force umounts.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:570 - Adding compat rule for reject_force_umount action 0.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:do_resolve_add_rule:251 - Setting Seccomp rule to reject force umounts.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:402 - processing: .[all].
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:402 - processing: .kexec_load errno 1.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:567 - Adding native rule for kexec_load action 327681.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:570 - Adding compat rule for kexec_load action 327681.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:402 - processing: .open_by_handle_at errno 1.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:567 - Adding native rule for open_by_handle_at action 327681.
  lxc-start 20170511140840.893 INFO lxc_seccomp - seccomp.c:parse_config_v2:570 - Adding compat rule for open_by_handle_at action 327681.
  lxc-start 20170511140840.894 INFO lxc_seccomp - seccomp.c:parse_config_v2:402 - processing: .init_module errno 1.
  lxc-start 20170511140840.894 INFO lxc_seccomp - seccomp.c:parse_config_v2:567 - Adding native rule for init_module action 327681.
  lxc-start 20170511140840.894 INFO lxc_seccomp - seccomp.c:parse_config_v2:570 - Adding compat rule for init_module action 327681.
  lxc-start 20170511140840.894 INFO lxc_seccomp - seccomp.c:parse_config_v2:402 - processing: .finit_module errno 1.
  lxc-start 20170511140840.894 INFO lxc_seccomp - seccomp.c:parse_config_v2:567 - Adding native rule for finit_module action 327681.
  lxc-start 20170511140840.894 WARN lxc_seccomp - seccomp.c:do_resolve_add_rule:270 - Seccomp: got negative for syscall: -10085: finit_module.
  lxc-start 

Re: [lxc-users] lxc 2.0.7: sysvinit on the host breaks systemd based containers

2017-05-04 Thread Harald Dunkel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 04/03/17 07:03, Harald Dunkel wrote:
> Hi folks,
> 
> Using sysvinit-core on the host, the systemd-based containers get stuck in
> /sbin/init. lxc-attach shows:
> 
> root@lxcclient:~# ps -ef
> UID        PID  PPID  C STIME TTY          TIME CMD
> root         1     0  0 11:49 ?        00:00:00 /sbin/init
> root        24     0  0 12:05 pts/0    00:00:00 /bin/bash
> root        26    24  0 12:05 pts/0    00:00:00 ps -ef
> 
> This host uses lxc 2.0.7, as it can be found in Jessie backports. See below 
> for a list of installed packages. The container was created on the same host.
> 
> I am bound to sysvinit-core on my HA systems. For lxc 1.1.5 there was no such 
> problem, so I wonder if there is a workaround on the host?
> 

PS: If I drop /cgroup from /etc/fstab on the host and reboot, then
the problem seems to be gone.

Does this ring a bell?


Regards
Harri

-BEGIN PGP SIGNATURE-

iQEzBAEBCAAdFiEEH2V614LbR/u1O+a1Cp4qnmbTgcsFAlkLW/0ACgkQCp4qnmbT
gcslBggAkco67qrn7kizxShPGrD7mghqhFJjqBBC9GlnyOo7xp4D3caPPUzH/MMj
+7czPs9CF/cG64FeWKuRRdRE9ZKxSG/TGpdwcb0ZdcipAkl6E7lcSdsjB9YOgaTl
BsvDYu55HFLhFAD/ETRb67SGaU5U+gxq/QF0XhLZJQHwzddvhZIOacJT5B9tYDEo
CJKK3cvXA1qcMwRvVdE2AyEKt0IJkKL22gY2rl0WwNYialpN+dsDwuSANBBBRITd
hW/DwFbT0Yb82kY0pV9Vc/ItJ3kxCU24OVa9cSZ/75bAUfQQfj/W9in6c/6x5PAi
1wAANfBLweGg9FAXx6wNpR1NUrf31A==
=Ltde
-END PGP SIGNATURE-

[lxc-users] lxc 2.0.7: sysvinit on the host breaks systemd based containers

2017-04-02 Thread Harald Dunkel
Hi folks,

Using sysvinit-core on the host, the systemd-based containers get
stuck in /sbin/init. lxc-attach shows:

root@lxcclient:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 11:49 ?        00:00:00 /sbin/init
root        24     0  0 12:05 pts/0    00:00:00 /bin/bash
root        26    24  0 12:05 pts/0    00:00:00 ps -ef

This host uses lxc 2.0.7, as it can be found in Jessie backports. See
below for a list of installed packages. The container was created
on the same host.

I am bound to sysvinit-core on my HA systems. For lxc 1.1.5 there was
no such problem, so I wonder if there is a workaround on the host?


Every helpful comment is highly appreciated.
Regards
Harri



LXC server:
root@lxc01:~# dpkg -l \*lxc\* \*systemd\* \*cgroup\* \*sysvinit\*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture   
 Description
+++--===-===-==
un  cgroup-bin  
 (no description available)
ii  cgroup-tools 0.41-6  amd64  
 control and monitor control groups (tools)
ii  cgroupfs-mount   1.3 all
 Light-weight package to set up cgroupfs mounts
ii  libcgroup1:amd64 0.41-6  amd64  
 control and monitor control groups (library)
ii  liblxc1  1:2.0.7-2~bpo8+1amd64  
 Linux Containers userspace tools (library)
un  libpam-systemd  
 (no description available)
ii  libsystemd0:amd64215-17+deb8u6   amd64  
 systemd utility library
ii  lxc  1:2.0.7-2~bpo8+1amd64  
 Linux Containers userspace tools
ii  lxcfs2.0.6-1~bpo8+1  amd64  
 FUSE based filesystem for LXC
ii  python3-lxc  1:2.0.7-2~bpo8+1amd64  
 Linux Containers userspace tools (Python 3.x bindings)
un  systemd 
 (no description available)
un  systemd-sysv
 (no description available)
un  sysvinit
 (no description available)
ii  sysvinit-core2.88dsf-59  amd64  
 System-V-like init utilities
ii  sysvinit-utils   2.88dsf-59  amd64  
 System-V-like utilities


Re: [lxc-users] policy for contributing patches?

2017-03-16 Thread Harald Dunkel
On 03/16/17 08:10, Stéphane Graber wrote:
> 
> This change has since been merged.
> 

Thanx very much

>> The process for these "pull requests" is described at
>> https://help.github.com/articles/creating-a-pull-request/
>> With a pull request, the maintainers can add your change with just a
>> click (if all are fine with the change!).
> 
> That's the preferred way to contribute though for LXC, contribution
> through the mailing-list is still possible. Albeit much slower and
> requires much more work on our side.
> 

Sorry for the additional effort. I would have resubmitted the
patch using github, if you had told me.

> The main issue with this patch submission was the format, it wasn't
> clearely identified with the [PATCH] prefix nor formatted using
> git-send-email so it wasn't quite as visible as it should have been, and
> needed a bit of work to then be sent for review.
> 
>> You would also need to complete the Contributor Agreement, found at
>> https://www.ubuntu.com/legal/contributors
> 
> That's not needed, none of the LXC projects require the Canonical CLA.
> 

Good to know.


Thanx again
Harri




signature.asc
Description: OpenPGP digital signature

[lxc-users] policy for contributing patches?

2017-03-14 Thread Harald Dunkel
Hi folks,

about 4 weeks ago I sent a (simple) patch for a bug in
config/init/common/lxc-containers.in to the lxc-devel
mailing list. Problem: There was no response at all :-(.
Pretty disappointing. This was not my first patch. In the
past I found both the lxc-users and lxc-devel lists quite
responsive.

Did I miss to follow some policy here? Do I have to use
github to post patches?


Every helpful comment is highly appreciated
Harri

Re: [lxc-users] lxc 2.0: command get_cgroup failed for 'dom1': Permission denied

2016-10-19 Thread Harald Dunkel
On 10/18/2016 08:59 AM, Harald Dunkel wrote:
> Hi folks,
> 
> since lxc 2.0 my monitoring scripts return error messages about
> running system containers, e.g.:
> 
> % lxc-ls -P /data1/lxc --fancy jerry1
> lxc-ls: commands.c: lxc_cmd_get_cgroup_path: 468 command get_cgroup failed 
> for 'jerry1': Permission denied
> lxc-ls: commands.c: lxc_cmd_get_cgroup_path: 468 command get_cgroup failed 
> for 'jerry1': Permission denied
> lxc-ls: commands.c: lxc_cmd_get_cgroup_path: 468 command get_cgroup failed 
> for 'jerry1': Permission denied
> lxc-ls: commands.c: lxc_cmd_get_cgroup_path: 468 command get_cgroup failed 
> for 'jerry1': Permission denied
> NAME   STATE AUTOSTART GROUPS IPV4 IPV6
> jerry1 - 0 auto   --
> 
> Using strace the "permission denied" is not shown, but the
> output of lxc-ls is still broken.
> 
> This is pretty painful. I wouldn't like to do monitoring
> with root, if it can be avoided.
> 
> 
> Platform is Jessie, lxc 2.0.4. No systemd.
> 

PS: systemd and the most recent lxc 2.0.5 didn't help,
unfortunately.

Using docker I can add the monitoring user to the "docker"
group. Very convenient. Maybe there is a similar construct
for lxc that I missed in the documentation?
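As far as I know there is no group-based delegation in lxc comparable to Docker's "docker" group. One workaround (an assumption, not a documented mechanism) is to whitelist the read-only commands for the monitoring user via sudo:

```
# /etc/sudoers.d/lxc-monitor -- hypothetical; "monitor" is an example user
monitor ALL=(root) NOPASSWD: /usr/bin/lxc-ls, /usr/bin/lxc-info
```

The monitoring scripts would then call `sudo lxc-ls ...` instead of `lxc-ls`.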


Every helpful comment is highly appreciated
Harri


[lxc-users] lxc 2.0: command get_cgroup failed for 'dom1': Permission denied

2016-10-18 Thread Harald Dunkel
Hi folks,

since lxc 2.0 my monitoring scripts return error messages about
running system containers, e.g.:

% lxc-ls -P /data1/lxc --fancy jerry1
lxc-ls: commands.c: lxc_cmd_get_cgroup_path: 468 command get_cgroup failed for 'jerry1': Permission denied
lxc-ls: commands.c: lxc_cmd_get_cgroup_path: 468 command get_cgroup failed for 'jerry1': Permission denied
lxc-ls: commands.c: lxc_cmd_get_cgroup_path: 468 command get_cgroup failed for 'jerry1': Permission denied
lxc-ls: commands.c: lxc_cmd_get_cgroup_path: 468 command get_cgroup failed for 'jerry1': Permission denied
NAME   STATE AUTOSTART GROUPS IPV4 IPV6
jerry1 -     0         auto   -    -

Under strace the "permission denied" is not shown, but the
output of lxc-ls is still broken.

This is pretty painful. I would rather not run the monitoring
as root, if it can be avoided.


Platform is Jessie, lxc 2.0.4. No systemd.

Every helpful comment is highly appreciated
Harri

Re: [lxc-users] which container is swapping?

2016-06-23 Thread Harald Dunkel
Hi Guido,

I would be highly interested in your script.


Thanx in advance
Harri

On 06/21/16 15:54, Jäkel, Guido wrote:
> Dear Harald,
> 
> years ago I scripted my own lxc-free to be used as something lxc-aware inside 
> the container. It's based on the memory controllers values, too. Please take 
> a look at memory.stats, too. Here, I get other values to calculate the values 
> for RSS+Cache, active, free and used RAM, too. 
> 
> It may act from outside (for all) or inside (for one) because I let 
> bind-mount the containers cgroup to the container. I'm still using the old 
> (legacy?) setup with one cgroup directory per container with all controllers 
> attached to it (/cgroup/lxc//foo), therefore I just have to 
> reach-in one directory via a calculated bind mount.
> 
> I'll may send you the script if you like. To be accurate, my current version 
> just reports the "other" values for free/used memory. But this is just 
> because I don't use swap space at all and you may easily add it  -- it's a 
> diskless bladecenter and all filesystems are NFS. And to my knowledge, the 
> only technical way to get swap on NFS is to loop-mount an image file - but 
> this is an abstruse one.
> 
> Greetings
> 
> Guido
> 


[lxc-users] which container is swapping?

2016-06-21 Thread Harald Dunkel
Hi folks,

Is there some way to monitor local memory swapping *inside*
the container?

Long story:
I had to restrict memory usage for a set of containers, using
lines like

lxc.cgroup.memory.limit_in_bytes = 12G

in the config file. The memory limits are not the same for
all hosts.

Of course some containers don't play nice and use more memory
than allowed. The server starts swapping.

AFAICS I can examine /cgroup/lxc/.../memory.memsw.usage_in_bytes
and memory.usage_in_bytes on the server, but I would prefer to
monitor swapping inside the container. What would you suggest?
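As a sketch of that approach: with the v1 memory controller (and swap accounting enabled, i.e. swapaccount=1 on the kernel command line), the swap charged to a container is the difference of the two counters mentioned above. With lxcfs the same files should be visible under /sys/fs/cgroup/memory inside the container as well, so this can run in either place. Paths are examples:

```sh
# Swap charged to a cgroup, in bytes:
#   memory.memsw.usage_in_bytes counts RAM + swap,
#   memory.usage_in_bytes counts RAM only.
container_swap_bytes() {
    cg=$1
    mem=$(cat "$cg/memory.usage_in_bytes")
    memsw=$(cat "$cg/memory.memsw.usage_in_bytes")
    echo $(( memsw - mem ))
}

# Example on a real host:
#   container_swap_bytes /sys/fs/cgroup/memory/lxc/lxc1
```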

BTW, I wonder why lxc-info and lxc-top don't tell how much swap
space each container uses?

LXC is 1.1.5 (still).


Every helpful hint is highly appreciated
Harri

Re: [lxc-users] sysvinit with cgroup namespace

2016-04-21 Thread Harald Dunkel
On 04/21/16 08:05, Fajar A. Nugraha wrote:
> On Wed, Apr 20, 2016 at 6:50 PM, Harald Dunkel <harald.dun...@aixigo.de> 
> wrote:
>> Hi folks,
>>
>> AFAIR the idea of the containers was to provide isolation
>> between the host and the user-space instances.
>>
>> Are we losing this with systemd support?
> 
> What makes you think that?
> 
> The host only needs systemd cgroup mount, it doesn't need to run systemd.
> 

AFAIU you cannot run systemd in an LXC container dom1, unless
these cgroup mount points are set up in dom0 for some
initialization. I am not sure if this still counts as "isolated".
Shouldn't systemd in dom1 just work, no matter what?


Regards
Harri


Re: [lxc-users] sysvinit with cgroup namespace

2016-04-20 Thread Harald Dunkel
Hi folks,

AFAIR the idea of the containers was to provide isolation
between the host and the user-space instances.

Are we losing this with systemd support?


Regards
Harri


Re: [lxc-users] sysvinit with cgroup namespace

2016-04-20 Thread Harald Dunkel
Hi Serge,

On 04/06/16 17:18, Serge Hallyn wrote:
> Quoting KATOH Yasufumi (ka...@jazz.email.ne.jp):
>>
>> Will we be able to start a container on sysvinit with cgroup namespace
>> in the future release?
> 
> mkdir /sys/fs/cgroup
> mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup
> 
> You could argue that this shouldn't be needed, but if you
> don't do it you won't be able to start any systemd-based
> containers, and I'd rather bail out early with a clear error
> and simple fix, rather than get cryptic reports about certain
> containers not working.

I tried, but it did not work:

# grep /sys/fs/cgroup /etc/fstab
systemd /sys/fs/cgroup cgroup none,name=systemd

# mount /sys/fs/cgroup

# lxc-start -P /data2/lxc -n lxc10 -F
lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller 
mountpoint found
lxc-start: start.c: lxc_spawn: 1079 failed initializing cgroup support
lxc-start: start.c: __lxc_start: 1329 failed to spawn 'lxc10'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
setting the --logfile and --logpriority options.

???


Every helpful suggestion is highly appreciated
Harri


Re: [lxc-users] sysvinit with cgroup namespace

2016-04-12 Thread Harald Dunkel
On 04/06/2016 05:18 PM, Serge Hallyn wrote:
> Quoting KATOH Yasufumi (ka...@jazz.email.ne.jp):
>>
>> Will we be able to start a container on sysvinit with cgroup namespace
>> in the future release?
> 
> mkdir /sys/fs/cgroup
> mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup
> 

Or was it

sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd

?

> You could argue that this shouldn't be needed, but if you
> don't do it you won't be able to start any systemd-based
> containers, and I'd rather bail out early with a clear error
> and simple fix, rather than get cryptic reports about certain
> containers not working.

Unfortunately the error message is hidden, unless you run

lxc-start -n mylxc -F

Without -F it just fails. There will still be reports about broken
systemd containers for LXC. Maybe lxc-checkconfig
should complain?


I feel *very* bad about LXC becoming systemd-only.

Regards
Harri


Re: [lxc-users] Systemd as LXC 2.0 dependency ?

2016-04-12 Thread Harald Dunkel
Hi folks,

On 04/04/2016 05:50 PM, Serge Hallyn wrote:
> Quoting Milan Beneš (mi...@benesovi.eu):
>> Hello,
>> does anybody know if systemd is a requirement for LXC 2.0?
> 
> Systemd is not required.  A name=systemd cgroup mount is.  You
> can create that trivially
> 
> sudo mkdir /sys/fs/cgroup/systemd
> sudo mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd
> 

Should be "mkdir -p", AFAICT.

This is pretty painful. Obviously a simple entry in /etc/fstab
is not sufficient.

Would it be possible to add some "if test -d /sys/fs/cgroup/systemd"
to the code, bypassing the systemd support for native sysv init
on HA systems?
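
For what it's worth, here is an untested sketch of what the fstab side
might have to look like. It assumes /sys/fs/cgroup is itself a tmpfs and
that the systemd subdirectory exists before the second mount runs; neither
is guaranteed on a sysvinit host. The mkdir has to happen between the two
mounts, which plain fstab cannot express, and which may be exactly why a
simple entry is not sufficient:

```
tmpfs    /sys/fs/cgroup          tmpfs   nosuid,nodev,noexec,mode=755  0  0
systemd  /sys/fs/cgroup/systemd  cgroup  none,name=systemd             0  0
```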


Thanx in advance
Harri


Re: [lxc-users] oracle linux 7 in LXC: ulimit problem for root

2015-11-25 Thread Harald Dunkel
On 11/25/2015 02:27 PM, Tamas Papp wrote:
> 
> 
> On 11/25/2015 02:07 PM, Harald Dunkel wrote:
>> On 11/25/2015 12:33 PM, Tamas Papp wrote:
>>> Check out /etc/security/limits.d/ too.
>>>
>> Very helpful hint, but there is just a file
>> 20-nproc.conf. It's all commented out:
>>
>> #*  softnproc 4096
>> #root   softnproc unlimited
>>
>>
> 
> Why are you sure, that it's something about the limits?
> What do you see actually?
> 

Very easy: Using

#* hard nofile 65536

in limits.conf I can login as root via ssh. With

* hard nofile 65536

ssh logins as root don't work. Sample session:

# ssh lxc1
Last login: Wed Nov 25 11:00:21 2015 from linux.example.com
Connection to lxc1 closed.
#

The system log shows

Nov 25 11:08:58 lxc1.example.com sshd[186]: pam_limits(sshd:session): 
Could not set limit for 'nofile': Operation not permitted
Nov 25 11:08:58 lxc1.example.com sshd[186]: pam_unix(sshd:session): 
session opened for user root by (uid=0)
Nov 25 11:08:58 lxc1.example.com sshd[186]: error: PAM: 
pam_open_session(): Permission denied

The documentation for limits.conf states clearly that a wildcard
construct like

* hard nofile 65536

does *not* apply to root. IMHO it shouldn't fail.


:-{
Harri


[lxc-users] oracle linux 7 in LXC: ulimit problem for root

2015-11-25 Thread Harald Dunkel
Hi folks,

hopefully this is not too much ot for this list:

I am running Oracle Linux 7 in LXC. Problem: If I try to login
as root via ssh, then I am kicked out after authentication,
apparently due to an ulimit problem:

Nov 25 11:08:58 lxc1.example.com sshd[186]: pam_limits(sshd:session): Could not 
set limit for 'nofile': Operation not permitted
Nov 25 11:08:58 lxc1.example.com sshd[186]: pam_unix(sshd:session): session 
opened for user root by (uid=0)
Nov 25 11:08:58 lxc1.example.com sshd[186]: error: PAM: pam_open_session(): 
Permission denied

/etc/security/limits.conf says:

* hard nofile 65536
#root hard nofile 65536

Please note that the "root" line is commented out. According to
the documentation there is no reason for pam to touch root's
ulimit, but I have to comment out the other line as well to enable
ssh support for root.


How come?

Every helpful comment is highly appreciated
Harri

Re: [lxc-users] oracle linux 7 in LXC: ulimit problem for root

2015-11-25 Thread Harald Dunkel
On 11/25/2015 12:33 PM, Tamas Papp wrote:
> 
> Check out /etc/security/limits.d/ too.
> 

Very helpful hint, but there is just a file
20-nproc.conf. It's all commented out:

#*  softnproc 4096
#root   softnproc unlimited


Regards
Harri


Re: [lxc-users] monitoring containers using lxc-info (without being root)

2015-05-11 Thread Harald Dunkel
On 05/11/15 20:35, Stéphane Graber wrote:
> 
> lxc-info -c doesn't read the container configuration, instead it
> connects to the container's command socket and asks the container what's
> the running configuration.
> 
> That means that you need to run lxc-info as the same user which started
> the container for it to be able to contact the command socket.
> 

Understood, but I cannot run all my containers as the zabbix or nagios
user just for monitoring. sudo would be an option, but I wonder if the
socket could be created with lxc group permissions, so the monitoring
account could be added to that group? Maybe a second, read-only socket
for obtaining information from the container (without the control part)
could do the trick?

Just a suggestion, of course. Keep up the good work.

Regards
Harri


Re: [lxc-users] Advice for running LXC on a Debian host

2015-03-16 Thread Harald Dunkel
On Fri, 13 Mar 2015 13:34:22 +
Rory Campbell-Lange <r...@campbell-lange.net> wrote:
> 
> Presently the Debian LXC wiki page at https://wiki.debian.org/LXC states
> "LXC may not provide sufficient isolation at this time."
> 

This is about Wheezy, AFAIK. You should give Jessie a chance. 

Jessie's LXC provides apparmor support and other new 
features. It is based upon LXC 1.0.6 (plus some fixes, e.g.
systemd support introduced for 1.0.7). 

Debian's configure flags for LXC:

--disable-rpath \
--enable-doc \
--enable-api-docs \
--enable-apparmor \
--enable-selinux \
--disable-cgmanager \
--enable-capabilities \
--enable-examples \
--enable-python \
--disable-mutex-debugging \
--enable-lua \
--enable-bash \
--enable-tests \
--enable-configpath-log \
--with-distro=debian \
--with-init-script=sysvinit,systemd

Once Jessie is released, Debian will most likely move forward 
to LXC version 1.1.x. There is a good chance that this version 
will be backported to Jessie later.

But I have always wondered why there are different LXC packages for
Debian and Ubuntu. Debian's LXC includes several interesting
changes that might be useful for the Ubuntu version and
other host platforms as well, e.g. using the right debootstrap
mirror, fixing LSB headers, etc.


Regards
Harri

Re: [lxc-users] How to cancel lxc-autostart

2014-08-08 Thread Harald Dunkel
I am not familiar with Ubuntu's setup, but assuming it supports
sysv-init I would suggest omitting lxc in a dedicated run level.

If your default run level is 2 (specified in /etc/inittab), then
you could use update-rc.d to omit lxc in run level 3, e.g.

# update-rc.d lxc start 20 2 4 5 . stop 20 0 1 3 6 .

This means lxc is started in run levels 2, 4 and 5, and
stopped in 0, 1, 3 and 6.

If you need to boot without starting the containers, then you
can choose run level 3 on the kernel command line at boot time,
e.g.
linux /boot/vmlinuz root= ... quiet 3

grub2 allows you to modify the kernel command line before booting.
Using telinit you can change the run level at run time, e.g.
'telinit 2' to switch to run level 2 (to start your containers).


Hope this helps
Harri


[lxc-users] how to lxc-stop the rest?

2014-08-07 Thread Harald Dunkel
Hi folks,

is there some smart way to stop the rest of the containers
not stopped by "lxc-autostart -s -a"?

I would like to avoid the "device busy" errors at shutdown time, and
manually looping through all running containers reported by
lxc-ls appears a little bit clumsy.

Should I make lxc.start.auto=1 mandatory for all containers
and rely upon group names only?
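
For comparison, the manual loop is short even if clumsy. A sketch, under
the assumption that lxc-ls can report the active containers; list_running
below is a stand-in so the example is self-contained, and the echo marks
where the real lxc-stop call would go:

```shell
# Stop every container still running after "lxc-autostart -s -a".
# list_running is a stand-in; on a real host its body would be:
#   lxc-ls --active
list_running() { echo "web1 db1 cache1"; }

stop_remaining() {
    for c in $(list_running); do
        # on a real host: lxc-stop -n "$c" -t 30
        echo "stopping $c"
    done
}

stop_remaining
```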

Every helpful comment is highly appreciated
Harri

Re: [lxc-users] LXC leaking ptys?

2014-07-16 Thread Harald Dunkel
On 07/15/14 16:24, Serge Hallyn wrote:
> Quoting Harald Dunkel (harald.dun...@aixigo.de):
>>
>> Nothing unusual. The log files show a lot of ssh sessions run by
>> our monitoring software (active just for a few milliseconds), plus
>> my own ssh sessions for maintenance. That's all.
>
> Ah, interesting.  That could be fun to try to reproduce.  But I
> would expect that to be transient, so that the monitoring software
> could just retry, by which point the other very-short connections
> should be closed and new connections should succeed...
>

To give you more information about these high-frequency ssh sessions:
Every 5 minutes I check the status of the LXC clients, using something
like

ssh $server test -d /cgroup/lxc/$client && echo ${server}:${client} OK

There are 4 ssh jobs per client, i.e. I get about 120 ssh sessions
within a few seconds. Then it's silent for the next 5 minutes. The
important point is that these ssh sessions don't need a pty. They
succeed even when an interactive ssh session would report "stdin: not
a tty". Surely they do not overlap each other.
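
For the record, each probe boils down to something like this (a sketch;
the -n and BatchMode options are additions so a check can never grab a
tty or hang on a password prompt, and the /cgroup/lxc path is as on the
hosts described above):

```shell
# Probe one container's cgroup directory on its server via ssh;
# prints "server:client OK" only if the directory exists remotely.
check_client() {
    server="$1"; client="$2"
    ssh -n -o BatchMode=yes "$server" "test -d /cgroup/lxc/$client" \
        && echo "${server}:${client} OK"
}
```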

I have reduced this to run a test every 30 minutes now, but actually I
do not see a problem with this. I monitor all my LXC and KVM servers
using this script. Only interactive sessions on the server are affected.


Regards
Harri


Re: [lxc-users] LXC leaking ptys?

2014-07-15 Thread Harald Dunkel
On 07/15/14 06:04, Serge Hallyn wrote:
>
> The lxc.pts is actually only used in that setting it to 0 will
> not mount a /dev/pts.  Setting it to 1024 is the same as setting
> it to 1.
>

I see. The important point is that its a private pool.

The CentOS template sets lxc.autodev = 0. I haven't seen this
in the generated config files for Debian. Is autodev = 0 still the
default, as lxc.container.conf(5) seems to indicate?
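
Spelled out, the two settings under discussion would look like this in a
container config (illustrative values; per the explanation above, any
non-zero lxc.pts behaves like 1):

```
lxc.pts = 1024     # mount a private devpts instance; 0 skips the mount
lxc.autodev = 0    # do not auto-create a tmpfs-backed /dev
```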

>> The server is running 31 containers at the moment, so I wouldn't
>> be surprised to run into some limitation. But I wonder wth?
>
> Me too.  Nothing in /var/log/auth.log or syslog?
>

Nothing unusual. The log files show a lot of ssh sessions run by
our monitoring software (active just for a few milliseconds), plus
my own ssh sessions for maintenance. That's all.

After the 2 incidents I moved the centos containers to another server.
I couldn't reproduce the problem on this host by now, but of course
it has a different load.


Regards
Harri


[lxc-users] LXC leaking ptys?

2014-07-10 Thread Harald Dunkel
Hi folks,

is it possible that LXC is leaking ptys? I have seen it twice
by now that a "lxc-start -n centos65_host" got stuck. When I
tried to open another ssh session to the LXC server, ssh
reported

stdin: not a tty

I could log in, though.

max_pty on the server is 4096; the containers are configured with
lxc.pts = 1024.

The server is running 31 containers at the moment, so I wouldn't
be surprised to run into some limitation. But I wonder wth?
Shouldn't LXC print out some error message instead of going Guru?

Every helpful comment would be highly appreciated
Harri

[lxc-users] lxc-1.0.3: lxc-start gets stuck

2014-05-13 Thread Harald Dunkel
Hi folks,

Using the HEAD of the stable-1.0 branch:

Sometimes lxc-start gets stuck. I haven't found a reliable
way to reproduce this (yet), but it seems to be related to
starting and stopping a lot of almost identical LXCs in
parallel without sleep in between (e.g. 8 containers with
Centos 6.x).

Does this ring a bell somewhere? Every helpful comment is
highly appreciated

Harri

[lxc-users] lxc-stop doesn't stop centos, waits for the timeout

2014-02-21 Thread Harald Dunkel
Hi folks,

Seems that a CentOS 6.5 container doesn't stop on lxc-stop
within the timeout. "lxc-stop -k" works, but that's very
rude. For my Debian containers there is no such problem.

The config file was generated by the template script. LXC
is version 1.0 (BTW, congratulations).

Every helpful suggestion would be highly appreciated.
Harri