Re: [lxc-users] Ubuntu Lxc disable Cgroup memory controller

2015-05-01 Thread CDR
It does not work
cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.17.1-031701-generic
root=UUID=3475cb20-ce8d-4f1a-bb5a-1f2dd0aeb515 ro elevator=noop
net.ifnames=1 biosdevname=0 selinux=0 ipv6.disable=1 apparmor=0
cgroup_disable=memory
but lxc-checkconfig still reports the memory controller as enabled:
Cgroup memory controller: enabled
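
A quick way to check whether the boot flag actually took effect is
/proc/cgroups. Note that lxc-checkconfig reads the kernel build
configuration (as far as I can tell), so it can keep reporting enabled
even when the controller is disabled at boot. A sketch:

grep memory /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
memory  0       1       0

A 0 in the last column means cgroup_disable=memory reached the kernel;
a 1 means it did not.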



On Fri, May 1, 2015 at 4:56 AM, Dirk Geschke d...@lug-erding.de wrote:

 Hi,

  I need to come up with a way to disable the cgroup memory controller in
 the
  kernel command line, for Ubuntu 14.04 and Centos 7.
  Is there a way to do this? I found a kernel command line to enable the memory
  controller, but not to disable it.

 yes, it's the same command but with disable:

 cgroup_disable=memory

 Best regards

 Dirk

 --
 +--+
 | Dr. Dirk Geschke   / Plankensteinweg 61/ 85435 Erding|
 | Telefon: 08122-559448  / Mobil: 0176-96906350 / Fax: 08122-9818106   |
 | d...@geschke-online.de / d...@lug-erding.de  / kont...@lug-erding.de |
 +--+
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Ubuntu Lxc disable Cgroup memory controller

2015-04-30 Thread CDR
I need to come up with a way to disable the cgroup memory controller in the
kernel command line, for Ubuntu 14.04 and Centos 7.
Is there a way to do this? I found a kernel command line to enable the memory
controller, but not to disable it.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Container cannot write to /var/run

2015-04-06 Thread CDR
A symlink to /run in the host or in the same container?
Philip

On Monday, April 6, 2015, Fajar A. Nugraha l...@fajar.net wrote:

 On Sun, Apr 5, 2015 at 6:29 AM, Bostjan Skufca bost...@a2o.si wrote:
  Is systemd now supported as LXC guest's init system?

 Short answer: not yet

 It's work in progress. Among others, systemd in container needs lxcfs,
 and one of the issues you'd find is
 https://github.com/lxc/lxcfs/issues/17 , which is just closed today,
 so chances are most people don't have that fix yet.

  On 4 April 2015 at 23:31, CDR vene...@gmail.com wrote:
 
  My Fedora 20 container, on a Ubuntu 14.04 server, cannot write to
  /var/run. Is there a secret recipe that I can use to fix it?
  Other containers with non-systemd OSs can write just fine to /var/run.


 My fix for centos7 is pretty simple: remove /var/run directory on
 the container rootfs, and add a symlink to /run
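
 A minimal sketch of that fix, assuming the rootfs lives at
 /var/lib/lxc/c7/rootfs (the name c7 is just a placeholder):

 rm -rf /var/lib/lxc/c7/rootfs/var/run
 ln -s ../run /var/lib/lxc/c7/rootfs/var/run

 The relative target (../run) keeps the link valid both when inspected
 from the host and when resolved inside the container.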

 --
 Fajar
 ___
 lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Container cannot write to /var/run

2015-04-04 Thread CDR
My Fedora 20 container, on a Ubuntu 14.04 server, cannot write to /var/run.
Is there a secret recipe that I can use to fix it?
Other containers with non-systemd OSs can write just fine to /var/run.

Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC 1.1.1 has been released! (Was: LXC 1.0.7 has been released!)

2015-03-16 Thread CDR
Is it available automatically in Ubuntu lxc-daily?

On Mon, Mar 16, 2015 at 6:28 PM, Stéphane Graber stgra...@ubuntu.com
wrote:

 Gah, yes, I re-use my old e-mails and yes, I forgot to change the
 subject... That's LXC 1.1.1, not LXC 1.0.7!

 On Mon, Mar 16, 2015 at 06:26:55PM -0400, Stéphane Graber wrote:
  Hello everyone,
 
  The first LXC 1.1 bugfix release is now out!
 
  This includes all bugfixes committed to master since the release of LXC
 1.1.
 
  As usual, the full announcement and changelog may be found at:
  https://linuxcontainers.org/lxc/news/
 
  And our tarballs can be downloaded from:
  https://linuxcontainers.org/lxc/downloads/
 
 
  LXC 1.1 is the latest stable release of LXC. Note that this isn't a long
  term support release and it will only be supported for a year.
 
  For production environments, we still recommend using LXC 1.0 which we
  will be supporting until April 2019.
 
 
  Stéphane Graber
  On behalf of the LXC development team

 --
 Stéphane Graber
 Ubuntu developer
 http://www.ubuntu.com

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] installation of package fails in container

2015-03-11 Thread CDR
This is a privileged container, so it should have all the rights.
What exactly comes after the equal sign?
lxc.cap.drop = 

On Wed, Mar 11, 2015 at 6:13 AM, Fajar A. Nugraha l...@fajar.net wrote:

 It says:  cpio: cap_set_file

 So you might want to try lxc.cap.drop =  (man lxc.container.conf).

 Or simply chroot to the container fs from the host (NOT lxc-attach), and
 repeat your yum install command.
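
  A sketch of the chroot route, assuming the stock rootfs path (the
  container name c7 is just a placeholder):

  chroot /var/lib/lxc/c7/rootfs /bin/bash
  yum -y install mtr
  exit

  Run from the host, rpm's cap_set_file() call is not restricted by the
  container's capability bounding set, which is what trips up cpio
  inside the container.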

 --
 Fajar

 On Sat, Mar 7, 2015 at 2:09 PM, CDR vene...@gmail.com wrote:

 is there any workaround?

 On Fri, Mar 6, 2015 at 8:44 PM, Király, István lak...@d250.hu wrote:

 This happens with rpm's. ...

 It usually works if you add it to the initial package list, in the
 template.

 On Sat, Mar 7, 2015 at 12:26 AM, Bostjan Skufca bost...@a2o.si wrote:

 What is your host running?

 b.


 On 6 March 2015 at 22:25, CDR vene...@gmail.com wrote:

 Downloading packages:
 mtr-0.85-7.el7.x86_64.rpm
 |  71 kB  00:00:00
 Running transaction check
 Running transaction test
 Transaction test succeeded
 Running transaction
   Installing :
 2:mtr-0.85-7.el7.x86_64
 1/1
 Error unpacking rpm package 2:mtr-0.85-7.el7.x86_64
 error: unpacking of archive failed on file /usr/sbin/mtr: cpio:
 cap_set_file
   Verifying  :
 2:mtr-0.85-7.el7.x86_64
 1/1

 Failed:
   mtr.x86_64 2:0.85-7.el7

 This is a privileged Centos 7 container.
 What am I missing here?



 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] installation of package fails in container

2015-03-06 Thread CDR
Downloading packages:
mtr-0.85-7.el7.x86_64.rpm
|  71 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing :
2:mtr-0.85-7.el7.x86_64
1/1
Error unpacking rpm package 2:mtr-0.85-7.el7.x86_64
error: unpacking of archive failed on file /usr/sbin/mtr: cpio: cap_set_file
  Verifying  :
2:mtr-0.85-7.el7.x86_64
1/1

Failed:
  mtr.x86_64 2:0.85-7.el7

This is a privileged Centos 7 container.
What am I missing here?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] installation of package fails in container

2015-03-06 Thread CDR
is there any workaround?

On Fri, Mar 6, 2015 at 8:44 PM, Király, István lak...@d250.hu wrote:

 This happens with rpm's. ...

 It usually works if you add it to the initial package list, in the
 template.

 On Sat, Mar 7, 2015 at 12:26 AM, Bostjan Skufca bost...@a2o.si wrote:

 What is your host running?

 b.


 On 6 March 2015 at 22:25, CDR vene...@gmail.com wrote:

 Downloading packages:
 mtr-0.85-7.el7.x86_64.rpm
 |  71 kB  00:00:00
 Running transaction check
 Running transaction test
 Transaction test succeeded
 Running transaction
   Installing :
 2:mtr-0.85-7.el7.x86_64
 1/1
 Error unpacking rpm package 2:mtr-0.85-7.el7.x86_64
 error: unpacking of archive failed on file /usr/sbin/mtr: cpio:
 cap_set_file
   Verifying  :
 2:mtr-0.85-7.el7.x86_64
 1/1

 Failed:
   mtr.x86_64 2:0.85-7.el7

 This is a privileged Centos 7 container.
 What am I missing here?

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users



 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users




 --
  Király István
 +36 209 753 758
 lak...@d250.hu
 http://d250.hu/

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Problem with memory.memsw.limit_in_bytes on Ubuntu 14.04.

2015-02-26 Thread CDR
It should work with 2G. The rest is a bad excuse; accepting unit suffixes has
become a standard in the software industry.

On Thu, Feb 26, 2015 at 8:49 AM, Fajar A. Nugraha l...@fajar.net wrote:

 On Thu, Feb 26, 2015 at 6:51 PM, PONCET Anthony ff...@msn.com wrote:

 Hello,
  I'm trying to use memory.memsw.limit_in_bytes, and I get this error
  when I try to set it: lxc-cgroup -n c_name
  memory.memsw.limit_in_bytes 2G


 The name does say limit_in_bytes, not limit_in_human-friendly_format.
 Did you try putting 2147483648 instead of 2G?
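
  A sketch, reusing c_name from the question. As far as I know the
  kernel also wants memory.limit_in_bytes set first, at a value no
  larger than the memsw limit:

  BYTES=$((2 * 1024 * 1024 * 1024))   # 2 GiB = 2147483648
  lxc-cgroup -n c_name memory.limit_in_bytes $BYTES
  lxc-cgroup -n c_name memory.memsw.limit_in_bytes $BYTES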

 --
 Fajar

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-console not working on centos 7 container

2015-02-12 Thread CDR
mount
/dev/sda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs
(rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
systemd on /sys/fs/cgroup/systemd type cgroup
(rw,noexec,nosuid,nodev,none,name=systemd)

I followed the steps

This is all I get
lxc-start -n c7v -F
mount: sysfs already mounted or /usr/lib/x86_64-linux-gnu/lxc/sys busy
mount: according to mtab, sysfs is mounted on /sys
systemd 208 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA
+SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ)
Detected virtualization 'lxc'.

Welcome to CentOS Linux 7 (Core)!

Failed to install release agent, ignoring: No such file or directory
Cannot add dependency job for unit display-manager.service, ignoring: Unit
display-manager.service failed to load: No such file or directory.
[  OK  ] Reached target Remote File Systems.
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Listening on Delayed Shutdown Socket.
Failed to open /dev/autofs: No such file or directory
Failed to initialize automounter: No such file or directory
[FAILED] Failed to set up automount Arbitrary Executable File Formats File
System Automount Point.
See 'systemctl status proc-sys-fs-binfmt_misc.automount' for details.
Unit proc-sys-fs-binfmt_misc.automount entered failed state.
[  OK  ] Listening on Journal Socket.
 Mounting Huge Pages File System...
 Starting Create static device nodes in /dev...
 Starting Apply Kernel Variables...
 Starting Journal Service...
[  OK  ] Started Journal Service.
[  OK  ] Reached target Encrypted Volumes.
 Mounting Debug File System...
 Mounting POSIX Message Queue File System...
 Mounting FUSE Control File System...



On Thu, Feb 12, 2015 at 3:08 AM, Fajar A. Nugraha l...@fajar.net wrote:

 Do you have cgroupfs-mount installed?
 Did you follow the steps I pasted?
 Did you run lxc-start -F and look at the output?

 On Thu, Feb 12, 2015 at 3:05 PM, CDR vene...@gmail.com wrote:
  What changes do I need to do at the host level so my privileged systemd
  containers may work?
  I am using Ubuntu 14.04, and there is systemd
 
  On Thu, Feb 12, 2015 at 3:00 AM, Fajar A. Nugraha l...@fajar.net
 wrote:
 
  You DID read that I asked for lxc-start -F?
 
   It's entirely possible that your container's systemd freezes, thus
  nothing is listening on its tty1. And if you don't have systemd cgroup
  mounted on the host (which is what cgroupfs-mount is for), it would
  certainly be the case.
 
  --
  Fajar
 
  On Thu, Feb 12, 2015 at 2:50 PM, CDR vene...@gmail.com wrote:
   I cannot get past this
   root@ubuserver:/var/lib/lxc/c7v# lxc-console -n c7v
  
   Connected to tty 1
   Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a
   itself
  
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-console not working on centos 7 container

2015-02-12 Thread CDR
What changes do I need to do at the host level so my privileged systemd
containers may work?
I am using Ubuntu 14.04, and there is systemd

On Thu, Feb 12, 2015 at 3:00 AM, Fajar A. Nugraha l...@fajar.net wrote:

 You DID read that I asked for lxc-start -F?

 It's entirely possible that your container's systemd freezes, thus
 nothing is listening on its tty1. And if you don't have systemd cgroup
 mounted on the host (which is what cgroupfs-mount is for), it would
 certainly be the case.
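
  For reference, roughly what cgroupfs-mount sets up for that hierarchy
  (a manual sketch from memory, not copied from the package):

  mkdir -p /sys/fs/cgroup/systemd
  mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd

  This matches the systemd on /sys/fs/cgroup/systemd line in the mount
  output posted earlier in this thread.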

 --
 Fajar

 On Thu, Feb 12, 2015 at 2:50 PM, CDR vene...@gmail.com wrote:
  I cannot get past this
  root@ubuserver:/var/lib/lxc/c7v# lxc-console -n c7v
 
  Connected to tty 1
  Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a
 itself
 
 
  On Thu, Feb 12, 2015 at 2:41 AM, CDR vene...@gmail.com wrote:
 
  I cannot make this solution work.
  There are a lot of errors.
 
 
  On Thu, Feb 12, 2015 at 1:19 AM, CDR vene...@gmail.com wrote:
 
  Thanks. I think Serge may want to permanently change the config and
  other bits in the on-line template so Centos 7 works right away.
 
 
  On Thu, Feb 12, 2015 at 1:08 AM, Fajar A. Nugraha l...@fajar.net
 wrote:
 
  So after some experiments, this is what I have: http://goo.gl/7p3nUI
  - create c7 container, e.g.
  lxc-create -n c7v -t download -B zfs --zfsroot rpool/lxc -- -d centos
  -r 7 -a amd64
 
  - edit config file. See config on that gdrive link,  look for
  Manual additions
 
  - place script/systemd_create_cgroup in the correct path (whatever you
  use the config file), chmod 700
 
  - start the container.
 
  This is similar to what I did for fedora20, on
 
 
 https://lists.linuxcontainers.org/pipermail/lxc-users/2014-May/007069.html
 
  What works now that previously didn't:
  - lxc-console
  - default apparmor container profile (so, for example, you can't mess
  up host's cgroup allocation)
  - default lxc.cap.drop (although you might want to remove sys_nice if
  you have apps that depend on it)
  - rsyslogd now always starts correctly (previously there could be stale
  PIDs on /var/run)
 
  What still does NOT work: unprivileged containers
  I tried backporting F22's systemd-218 plus ubuntu vivid's changes
  (RPMS and SPECS folder), but it wasn't enough to run an unprivileged
  container.
 
  It should be reasonably safer than the allow-the-container-to-do-anything
  approach previously needed for c7.
 
  --
  Fajar
 
  On Fri, Feb 6, 2015 at 9:35 PM, CDR vene...@gmail.com wrote:
   Thanks.
   I love Ubuntu as a host for LXC. I just got addicted to systemctl
 and
   writing *.service files. It is much more sophisticated than the
 older
   way of
   starting and stopping applications.
  
   On Fri, Feb 6, 2015 at 8:40 AM, Fajar A. Nugraha l...@fajar.net
   wrote:
  
   On Fri, Feb 6, 2015 at 8:15 PM, CDR vene...@gmail.com wrote:
Thanks for the response.
    I disable selinux and apparmor routinely. My containers are just
    a way to separate applications, there are no users accessing them,
    nothing bad can happen.
    So basically you are saying that there is no way to run Centos 7
    under an Ubuntu host.
  
   No. What I'm saying is when you use a c7 container (and possibly most
   newer systemd-based distros) under an ubuntu host:
   - you can't use lxc-console
   - root on your container can mess up the host
  
   It shouldn't really matter for your use case, since lxc-attach
   works
   just fine (you DO know about lxc-attach?), and you don't really
 care
   about user access anyway.
  
   This should improve in the future as debian/ubuntu is also moving
   towards systemd (lxcfs is supposed to help), however currently the
   required level of support/integration is just not there yet.
  
   Since your main use case is separate applications, docker might
 be
   a
   better candidate. And when you use c7-based docker container under
 c7
   host, you might even get better protection since they integrate
   selinux.
  
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
 
 
 
 
 
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-console not working on centos 7 container

2015-02-11 Thread CDR
I cannot make this solution work.
There are a lot of errors.


On Thu, Feb 12, 2015 at 1:19 AM, CDR vene...@gmail.com wrote:

 Thanks. I think Serge may want to permanently change the config and other
 bits in the on-line template so Centos 7 works right away.


 On Thu, Feb 12, 2015 at 1:08 AM, Fajar A. Nugraha l...@fajar.net wrote:

 So after some experiments, this is what I have: http://goo.gl/7p3nUI
 - create c7 container, e.g.
 lxc-create -n c7v -t download -B zfs --zfsroot rpool/lxc -- -d centos
 -r 7 -a amd64

 - edit config file. See config on that gdrive link,  look for
 Manual additions

 - place script/systemd_create_cgroup in the correct path (whatever you
 use the config file), chmod 700

 - start the container.

 This is similar to what I did for fedora20, on
 https://lists.linuxcontainers.org/pipermail/lxc-users/2014-May/007069.html

 What works now that previously didn't:
 - lxc-console
 - default apparmor container profile (so, for example, you can't mess
 up host's cgroup allocation)
 - default lxc.cap.drop (although you might want to remove sys_nice if
 you have apps that depend on it)
 - rsyslogd now always starts correctly (previously there could be stale
 PIDs on /var/run)

 What still does NOT work: unprivileged containers
 I tried backporting F22's systemd-218 plus ubuntu vivid's changes
 (RPMS and SPECS folder), but it wasn't enough to run an unprivileged
 container.

 It should be reasonably safer than the allow-the-container-to-do-anything
 approach previously needed for c7.

 --
 Fajar

 On Fri, Feb 6, 2015 at 9:35 PM, CDR vene...@gmail.com wrote:
  Thanks.
  I love Ubuntu as a host for LXC. I just got addicted to systemctl and
  writing *.service files. It is much more sophisticated than the older
 way of
  starting and stopping applications.
 
  On Fri, Feb 6, 2015 at 8:40 AM, Fajar A. Nugraha l...@fajar.net
 wrote:
 
  On Fri, Feb 6, 2015 at 8:15 PM, CDR vene...@gmail.com wrote:
   Thanks for the response.
   I disable selinux and apparmor routinely. My containers are just a
   way to separate applications, there are no users accessing them,
   nothing bad can happen.
   So basically you are saying that there is no way to run Centos 7
   under an Ubuntu host.
 
  No. What I'm saying is when you use a c7 container (and possibly most
  newer systemd-based distros) under an ubuntu host:
  - you can't use lxc-console
  - root on your container can mess up the host
 
  It shouldn't really matter for your use case, since lxc-attach works
  just fine (you DO know about lxc-attach?), and you don't really care
  about user access anyway.
 
  This should improve in the future as debian/ubuntu is also moving
  towards systemd (lxcfs is supposed to help), however currently the
  required level of support/integration is just not there yet.
 
  Since your main use case is separate applications, docker might be a
  better candidate. And when you use c7-based docker container under c7
  host, you might even get better protection since they integrate
  selinux.
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-console not working on centos 7 container

2015-02-11 Thread CDR
I cannot get past this
root@ubuserver:/var/lib/lxc/c7v# lxc-console -n c7v

Connected to tty 1
Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself


On Thu, Feb 12, 2015 at 2:41 AM, CDR vene...@gmail.com wrote:

 I cannot make this solution work.
 There are a lot of errors.


 On Thu, Feb 12, 2015 at 1:19 AM, CDR vene...@gmail.com wrote:

  Thanks. I think Serge may want to permanently change the config and
  other bits in the on-line template so Centos 7 works right away.


 On Thu, Feb 12, 2015 at 1:08 AM, Fajar A. Nugraha l...@fajar.net wrote:

  So after some experiments, this is what I have: http://goo.gl/7p3nUI
 - create c7 container, e.g.
 lxc-create -n c7v -t download -B zfs --zfsroot rpool/lxc -- -d centos
 -r 7 -a amd64

 - edit config file. See config on that gdrive link,  look for
 Manual additions

 - place script/systemd_create_cgroup in the correct path (whatever you
 use the config file), chmod 700

 - start the container.

  This is similar to what I did for fedora20, on

 https://lists.linuxcontainers.org/pipermail/lxc-users/2014-May/007069.html

  What works now that previously didn't:
 - lxc-console
 - default apparmor container profile (so, for example, you can't mess
 up host's cgroup allocation)
 - default lxc.cap.drop (although you might want to remove sys_nice if
 you have apps that depend on it)
  - rsyslogd now always starts correctly (previously there could be stale
 PIDs on /var/run)

  What still does NOT work: unprivileged containers
  I tried backporting F22's systemd-218 plus ubuntu vivid's changes
  (RPMS and SPECS folder), but it wasn't enough to run an unprivileged
  container.

  It should be reasonably safer than the allow-the-container-to-do-anything
  approach previously needed for c7.

 --
 Fajar

 On Fri, Feb 6, 2015 at 9:35 PM, CDR vene...@gmail.com wrote:
  Thanks.
  I love Ubuntu as a host for LXC. I just got addicted to systemctl and
  writing *.service files. It is much more sophisticated than the older
 way of
  starting and stopping applications.
 
  On Fri, Feb 6, 2015 at 8:40 AM, Fajar A. Nugraha l...@fajar.net
 wrote:
 
  On Fri, Feb 6, 2015 at 8:15 PM, CDR vene...@gmail.com wrote:
   Thanks for the response.
    I disable selinux and apparmor routinely. My containers are just a
    way to separate applications, there are no users accessing them,
    nothing bad can happen.
    So basically you are saying that there is no way to run Centos 7
    under an Ubuntu host.
 
   No. What I'm saying is when you use a c7 container (and possibly most
   newer systemd-based distros) under an ubuntu host:
  - you can't use lxc-console
  - root on your container can mess up the host
 
  It shouldn't really matter for your use case, since lxc-attach works
  just fine (you DO know about lxc-attach?), and you don't really care
  about user access anyway.
 
  This should improve in the future as debian/ubuntu is also moving
  towards systemd (lxcfs is supposed to help), however currently the
  required level of support/integration is just not there yet.
 
  Since your main use case is separate applications, docker might be a
  better candidate. And when you use c7-based docker container under c7
  host, you might even get better protection since they integrate
  selinux.
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-console not working on centos 7 container

2015-02-11 Thread CDR
Thanks. I think Serge may want to permanently change the config and other
bits in the on-line template so Centos 7 works right away.


On Thu, Feb 12, 2015 at 1:08 AM, Fajar A. Nugraha l...@fajar.net wrote:

 So after some experiments, this is what I have: http://goo.gl/7p3nUI
 - create c7 container, e.g.
 lxc-create -n c7v -t download -B zfs --zfsroot rpool/lxc -- -d centos
 -r 7 -a amd64

 - edit config file. See config on that gdrive link,  look for
 Manual additions

 - place script/systemd_create_cgroup in the correct path (whatever you
 use the config file), chmod 700

 - start the container.

 This is similar to what I did for fedora20, on
 https://lists.linuxcontainers.org/pipermail/lxc-users/2014-May/007069.html

 What works now that previously didn't:
 - lxc-console
 - default apparmor container profile (so, for example, you can't mess
 up host's cgroup allocation)
 - default lxc.cap.drop (although you might want to remove sys_nice if
 you have apps that depend on it)
 - rsyslogd now always starts correctly (previously there could be stale
 PIDs on /var/run)

 What still does NOT work: unprivileged containers
 I tried backporting F22's systemd-218 plus ubuntu vivid's changes
 (RPMS and SPECS folder), but it wasn't enough to run an unprivileged
 container.

 It should be reasonably safer than the allow-the-container-to-do-anything
 approach previously needed for c7.

 --
 Fajar

 On Fri, Feb 6, 2015 at 9:35 PM, CDR vene...@gmail.com wrote:
  Thanks.
  I love Ubuntu as a host for LXC. I just got addicted to systemctl and
  writing *.service files. It is much more sophisticated than the older
 way of
  starting and stopping applications.
 
  On Fri, Feb 6, 2015 at 8:40 AM, Fajar A. Nugraha l...@fajar.net wrote:
 
  On Fri, Feb 6, 2015 at 8:15 PM, CDR vene...@gmail.com wrote:
   Thanks for the response.
    I disable selinux and apparmor routinely. My containers are just a
    way to separate applications, there are no users accessing them,
    nothing bad can happen.
    So basically you are saying that there is no way to run Centos 7
    under an Ubuntu host.
 
  No. What I'm saying is when you use a c7 container (and possibly most
  newer systemd-based distros) under an ubuntu host:
  - you can't use lxc-console
  - root on your container can mess up the host
 
  It shouldn't really matter for your use case, since lxc-attach works
  just fine (you DO know about lxc-attach?), and you don't really care
  about user access anyway.
 
  This should improve in the future as debian/ubuntu is also moving
  towards systemd (lxcfs is supposed to help), however currently the
  required level of support/integration is just not there yet.
 
  Since your main use case is separate applications, docker might be a
  better candidate. And when you use c7-based docker container under c7
  host, you might even get better protection since they integrate
  selinux.
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-top

2015-02-10 Thread CDR
That is why I asked the question: the q does not exit.


On Tue, Feb 10, 2015 at 6:13 AM, Dwight Engen dwight.en...@oracle.com
wrote:

 On Mon, 9 Feb 2015 22:33:37 -0800
 CDR vene...@gmail.com wrote:

  Just out of curiosity, how do I exit lxc-top in Ubuntu 14.04?
  The letter q means quiet. I do a Ctrl-C to exit, but I am sure
  there must be a cleaner way. The lxc-top man page says nothing about this.

 As the man page says, pressing q should make it quit. Ctrl-C is
 fine too: since it is just reading state,  interrupting it in the
 middle of that won't harm anything.
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc-top

2015-02-09 Thread CDR
Just out of curiosity, how do I exit lxc-top in Ubuntu 14.04?
The letter q means quiet. I do a Ctrl-C to exit, but I am sure there
must be a cleaner way. The lxc-top man page says nothing about this.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY Question

2015-02-06 Thread CDR
a) Sorry about the fonts
b) All my containers are unconfined
c) My app does crash but produces no dumps. It is Asterisk, and it is compiled with
debug information, etc.
my ulimit -a
core file size  (blocks, -c) unlimited
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 1048576
max locked memory   (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files  (-n) 1048576
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) unlimited
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

What would prevent Asterisk from dumping a core file when crashing?


On Thu, Feb 5, 2015 at 5:11 PM, Fajar A. Nugraha l...@fajar.net wrote:

 On Fri, Feb 6, 2015 at 1:19 AM, CDR vene...@gmail.com wrote:
 
  I need to use TYY=9 in a container, how do I achieve that?

 You could probably start by NOT using big fonts in html mail when
 posting to the list.
 That being said, what do you mean tyy=9? did you mean tty? If yes,
 try man lxc.container.conf (look for lxc.tty) as well as man
 lxc-console
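
  A sketch of both, with <name> as a placeholder for the container:

  # in /var/lib/lxc/<name>/config: allocate 9 ttys instead of the default
  lxc.tty = 9

  # then attach to a specific tty, here the 9th
  lxc-console -n <name> -t 9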

  Also I feel that my apps in the container crash more than when installed
 in a virtual machine, ceteris paribus.

 feel without data is not really helpful, is it? The usual methods of
 debugging a crash should apply (e.g. reading app logs, using gdb), and
 should apply whether it's in the host, container, or VM. Once you know
 what the problem is, its easier to determine whether the problem has
 something to do with container or not.

  When troubleshooting container-related problems on Ubuntu, a starting
 point is to run it as normal (i.e. not unprivileged) container, and
 use lxc.aa_profile = unconfined (you probably have that already if
  using a centos container). Then again, if you know that the app WILL
  crash even if it's in a VM, then most likely it's the app's fault, not
  related to whether it's in a container or not.

  How do I write in the configuration that the container has the same
 importance as the host?

  It already is by default. That is, if you don't specify any
 cgroups limit.

 --
 Fajar
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] TTY Question

2015-02-06 Thread CDR
Thanks.
I finally found how to change the font.
Besides that, I will keep researching why it does not store a core dump.


On Fri, Feb 6, 2015 at 2:36 AM, Fajar A. Nugraha l...@fajar.net wrote:

 On Fri, Feb 6, 2015 at 6:05 AM, CDR vene...@gmail.com wrote:
  a) Sorry about the fonts

 You're still replying using the same fonts. I find this really
 annoying, so this will be my last response to you. Hopefully others
 are willing to help.

  b) All my containers are unconfined
  c) My app does crash but no dumps. It is Asterisk, and it is compiled
 with
  debug information, etc.
  my ulimit -a
  core file size  (blocks, -c) unlimited

  Try man core, look for not produced. I suspect this is a permission
  or suid issue.
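
  Two things worth checking along those lines, assuming nothing has
  customized the defaults:

  cat /proc/sys/kernel/core_pattern   # where (and whether) cores go
  sysctl fs.suid_dumpable             # 0 suppresses dumps from processes
                                      # that changed uid, as asterisk does
                                      # when started with -U/-G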

 --
 Fajar
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-console not working on centos 7 container

2015-02-06 Thread CDR
Thanks.
I love Ubuntu as a host for LXC. I just got addicted to systemctl and
writing *.service files. It is much more sophisticated than the older way
of starting and stopping applications.

On Fri, Feb 6, 2015 at 8:40 AM, Fajar A. Nugraha l...@fajar.net wrote:

 On Fri, Feb 6, 2015 at 8:15 PM, CDR vene...@gmail.com wrote:
  Thanks for the response.
  I disable selinux and apparmor routinely. My containers are just a way
 to
  separate applications, there are no users accessing them, nothing bad can
  happen.
  So basically you are saying that there is no way to run Centos 7 under an
  Ubuntu host.

 No. What I'm saying is when you use a c7 container (and possibly most
 newer systemd-based distros) under an ubuntu host:
 - you can't use lxc-console
 - root on your container can mess up the host

 It shouldn't really matter for your use case, since lxc-attach works
 just fine (you DO know about lxc-attach?), and you don't really care
 about user access anyway.

 This should improve in the future as debian/ubuntu is also moving
 towards systemd (lxcfs is supposed to help), however currently the
 required level of support/integration is just not there yet.

 Since your main use case is separate applications, docker might be a
 better candidate. And when you use c7-based docker container under c7
 host, you might even get better protection since they integrate
 selinux.

 --
 Fajar
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-console not working on centos 7 container

2015-02-06 Thread CDR
Thanks for the response.
I disable selinux and apparmor routinely. My containers are just a way to
separate applications, there are no users accessing them, nothing bad can
happen.
So basically you are saying that there is no way to run Centos 7 under an
Ubuntu host.
Pretty amazing, if I may say.
I think somebody dropped the ball.


On Fri, Feb 6, 2015 at 4:30 AM, Fajar A. Nugraha l...@fajar.net wrote:

 On Fri, Feb 6, 2015 at 3:25 AM, CDR vene...@gmail.com wrote:
  In Ubuntu 14.04 fully updated and lxc latest.1.1, a container with
 Centos 7
  never allows connection via lxc-console. It stays as below.
  If you start the container with -F, you can see how it boots and indeed
 you
  can log in via the console.
 
  lxc-console -n centos7
 
  Connected to tty 1
  Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a
 itself
 
  Is there possible workaround?

 Probably not.

 Thanks to systemd, the only way you could start a c7 container under
 ubuntu should be if you use

 lxc.aa_profile = unconfined
 lxc.mount.auto =
 lxc.cap.drop =

 (or don't specify the last two lines while using your own config file,
 not using centos.common.conf). That would pretty much mean the
 container could access everything on the host, and my simple test of
 running agetty tty1 inside the container pretty much screwed the
 host.

 If you exclusively need c7, it would probably easier to just use a c7
 host as well, and use their supported method (i.e. docker). That way
 you'd at least get selinux protection on the container as well, which
 should prevent it from doing bad stuff to the host. Plus you don't
 have to deal with the mess that is systemd (since they remove it and
 replace with fakesystemd). You won't be able to get a login prompt
 either, but at least it's a safer and supported way to run c7 inside
 a container.

 --
 Fajar
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] TTY Question

2015-02-05 Thread CDR
I need to use TYY=9 in a container, how do I achieve that?
Also I feel that my apps in the container crash more than when installed in
a virtual machine, ceteris paribus.
How do I write in the configuration that the container has the same
importance as the host?
I actually don't run anything on the host, it is just a container for my
containers. Memory and CPU is more than 10 times what my apps need, so
there is no OOM error.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc-console not working on centos 7 container

2015-02-05 Thread CDR
In Ubuntu 14.04 fully updated and lxc latest.1.1, a container with Centos 7
never allows connection via lxc-console. It stays as below.
If you start the container with -F, you can see how it boots and indeed you
can log in via the console.

lxc-console -n centos7

Connected to tty 1
Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself

Is there possible workaround?

I can give access to a developer to the box if needed.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Unix Sockets communications between containers

2014-11-12 Thread CDR
I tested Tokudb long ago but strangely enough you cannot have huge-pages
enabled in the server. My servers have a minimum of 500 G of RAM, so the
idea is a no-go. All virtualization technologies benefit from huge-pages. I
don't know if the Tokudb developers live in a parallel universe.

On Tue, Nov 11, 2014 at 10:29 PM, Fajar A. Nugraha l...@fajar.net wrote:

 On Wed, Nov 12, 2014 at 9:27 AM, CDR vene...@gmail.com wrote:
 
  That is how we do business now, over TCP. By the way, I downloaded a new
 derivative of Mysql, http://paralleluniverse-inc.com/, and it seems, in
 my tests, several times faster than any other version, at least for this
 query
  select count(*) from table; where table has 550 million records. I have
 the exact same table on Mariadb and regular mysql, and it takes around 10
 times longer to get the result, on the same vmware datastore. Do I have to
  think that these results are not real and I am perhaps doing something
 wrong? If anybody can test this free technology, I would be grateful. They
 claim to work in parallel.
 


 Your question would be more appropriate on mysql list.

 However here's some comments from me:
 - select count(*) is NOT a suitable query for performance measurement.
  Short version: some storage engines cheat; they don't actually do the
  count, they only return an estimate.
 - if you read
 http://www.paralleluniverse-inc.com/parallel_universe_5.5-3.1_usage_guide.txt
 , you'll see that only certain kinds of queries will benefit from
 their tech, and even then you must explicitly set some variables
 - for generic use on a single server, I currently use tokudb (on some
 server) and mariadb with tokudb engine (on some others). I'm happy
 with the results so far

 --
 Fajar
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Unix Sockets communications between containers

2014-11-11 Thread CDR
Dear friends
I have a container with mysql and wish to have all other containers, and
the host, be able to use a socket to post queries to my database. I
thought of sharing a common host-directory, such as /temp. Once all
containers can access the same directory, will they actually be able to
talk to mysql? Mysql uses sockets to communicate with applications in the
same box. It is much faster and uses far fewer resources than tcp. Does this
make any sense? What would it take to make this scenario work?
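
A sketch of the shared-directory part; /srv/mysqld-sock and the socket
name below are placeholders of mine, not anything standard:

# in each container's config: bind-mount the host directory
lxc.mount.entry = /srv/mysqld-sock var/run/mysqld none bind,create=dir 0 0

# in the mysql container's my.cnf:
# socket = /var/run/mysqld/mysqld.sock

Clients in the other containers would then point their socket path at
/var/run/mysqld/mysqld.sock.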
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Unix Sockets communications between containers

2014-11-11 Thread CDR
This is fascinating. I will try and report if it does work.
Now, suppose the container's directory is a mount that is at the same time
exported as an NFS share. Will the computers that remotely mount that share
be able to use the socket for querying mysql? That opens a realm of
possibilities for my current business. Believe it or not, my client sells
access to mysql databases, in real time.


On Tue, Nov 11, 2014 at 2:52 PM, Serge Hallyn serge.hal...@ubuntu.com
wrote:

 Quoting Michael H. Warfield (m...@wittsend.com):
  On Tue, 2014-11-11 at 20:20 +0100, Hans Feldt wrote:
   With a dir potentially you get a bunch of other sockets available in
 the container, how can such
   security issue be handled?
 
  Use tailored application specific directories for the sockets?  That's
  no different than using application specific subdirectories for temp
  files.  Even if it's just one socket in one directory, creating that
  additional directory provides the isolation from other sockets you
  desire while supporting socket recreation as Serge points out.

 Right, I was thinking like how cgmanager does it.

 -serge
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Unix Sockets communications between containers

2014-11-11 Thread CDR
That is how we do business now, over TCP. By the way, I downloaded a new
derivative of Mysql, http://paralleluniverse-inc.com/, and it seems, in my
tests, several times faster than any other version, at least for this query
select count(*) from table; where table has 550 million records. I have the
exact same table on Mariadb and regular mysql, and it takes around 10 times
longer to get the result, on the same vmware datastore. Do I have to think
that this results are not real and I am perhaps doing something wrong? If
anybody can test this free technology, I would be grateful. They claim to
work in parallel.


On Tue, Nov 11, 2014 at 3:33 PM, Michael H. Warfield m...@wittsend.com
wrote:

 On Tue, 2014-11-11 at 15:03 -0500, CDR wrote:
  This is fascinating. I will try and report if it does work.

  Now, suppose the container's directory is a mount that is at the same
  time exported as an NFS share. Will the computers that remotely mount
  that share be able to use the socket for querying mysql? That opens a
  realm of possibilities for my current business. Believe it or not, my
  client sells access to mysql databases, in real time.

 Not going to work.  Think about it.  You're exporting an AF_UNIX (local
 pipe) socket defined in a directory over a UDP RPC interface.  The other
 side will (if it understands it) see an AF_UNIX socket local to them.
 They won't be connected.  That will be true of just about any remote
 file system.

 You also said in your OP...

  It is much faster and uses far fewer resources than tcp. Does this make
  any sense?

 So, you'd be replacing TCP with a UNIX socket over RPC over UDP and
 expect that to be more efficient?  You've got a higher respect for NFS
 than most of us.  Over NFS, that scheme would consume more resources,
 even if it could work.

 It won't work and, even if it could work, it would be dog slow and
 probably insecure.

 Use TCP for your remote connections and a locally bound UNIX socket for
 your host local connections.  If you have to have the connections be
 homogeneous, then go with TCP.  From your description of that business
 model, unless you are transferring truly massive amounts of data per SQL
 query, the performance of the TCP connection for your local
 container-to-container connections is not going to be your bottleneck
 and NFS would only make things worse.

  On Tue, Nov 11, 2014 at 2:52 PM, Serge Hallyn
  serge.hal...@ubuntu.com wrote:
  Quoting Michael H. Warfield (m...@wittsend.com):
   On Tue, 2014-11-11 at 20:20 +0100, Hans Feldt wrote:
    With a dir potentially you get a bunch of other sockets available
    in the container; how can such a security issue be handled?
  
   Use tailored application specific directories for the sockets?
   That's no different than using application specific subdirectories
   for temp files.  Even if it's just one socket in one directory,
   creating that additional directory provides the isolation from
   other sockets you desire while supporting socket recreation as
   Serge points out.
 
  Right, I was thinking like how cgmanager does it.
 
  -serge
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
 
 
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users

 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |
 http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of
 all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!


 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Bug bug bug

2014-11-08 Thread CDR
This is my ulimits
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 1048576
max locked memory   (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files  (-n) 1048576
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) unlimited
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

Also, I added swap space,

free -g
  totalusedfree  shared  buff/cache
available
Mem:177  59 116   0   0
116
Swap:   269   0 269

and it makes no difference.

If this is not a swap issue, nor a ulimit issue, where can the problem be?
Federico


On Sat, Nov 8, 2014 at 5:23 AM, Guido Jäkel g.jae...@dnb.de wrote:

 Hi,

 Googling for pthread_join leads to
 http://www.ibm.com/developerworks/library/l-memory-leaks/  , an article
 about memory consumption of POSIX threads (and potential leaks if rejoin
 fails).

 From this, you can see that every thread needs at least memory for the
 stack. It is said that the default may be 10MB. And if you want to start 50
 instances of Asterisk, this will lead to 50*n*10MB = n*0.5GB of stack size,
 where n is the number of threads this quoted 'taskprocessor' will try to
 start.

 Maybe you need more virtual memory to satisfy these requirements
 (e.g.
 http://stackoverflow.com/questions/344203/maximum-number-of-threads-per-process-in-linux).
 It seems that you have 19G swap for your ~180GB RAM machine. Maybe you need
 more, even if it will be unused. For a quick test, you may consider using
 some file (instead of a partition) as additional swapspace and you may
 assign a lower priority to it. Or maybe you just have to adjust some 'ulimits'.
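
 A quick way to test the stack-size theory, assuming Asterisk is
 launched from a shell:

 ulimit -s            # per-thread stack default, in kB (8192 here)
 ulimit -s 2048       # shrink it to 2 MB before launching asterisk
 asterisk

 With 50 instances each running n threads, stack alone reserves about
 50 * n * 8 MB of virtual memory at the default size shown above.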

 Guido


 On 08.11.2014 03:36, CDR wrote:
  There is something very wrong with LXC in general, it does not matter the
  OS or even the kernel version. My OS is Ubuntu 14.04.
  I have a Centos 6.6 container with mysql and 50 instances of Asterisk
 12.0,
  plus opensips.
  The memory is limited to 100G, but it does not matter if I limit it or
 not.
  It crashes when I start the 50 Asterisk processes.
  The error message is below.
 
  MySql starts fine and uses large-pages, memlocked. It uses 60G, so there
 is
  plenty of available memory left.
 
   free -g
   total   used   free sharedbuffers cached
  Mem:   177163 13  0  0 97
  -/+ buffers/cache: 66110
  Swap:   19  0 19
 
 
 
  I tried the same container in Fedora 21 and the outcome is identical,
  and
  it matters not if the technology is plain lxc or libvirt-lxc.
 
  This is my containers config:
   lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
  lxc.mount.entry = sysfs sys sysfs defaults  0 0
 
 
  lxc.tty = 4
  lxc.pts = 1024
  lxc.cgroup.devices.deny = a
  lxc.cgroup.devices.allow = c 1:3 rwm
  lxc.cgroup.devices.allow = c 1:5 rwm
  lxc.cgroup.devices.allow = c 5:1 rwm
  lxc.cgroup.devices.allow = c 5:0 rwm
  lxc.cgroup.devices.allow = c 4:0 rwm
  lxc.cgroup.devices.allow = c 4:1 rwm
  lxc.cgroup.devices.allow = c 1:9 rwm
  lxc.cgroup.devices.allow = c 1:8 rwm
  lxc.cgroup.devices.allow = c 136:* rwm
  lxc.cgroup.devices.allow = c 5:2 rwm
  lxc.cgroup.devices.allow = c 254:0 rwm
  lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
  lxc.cgroup.devices.allow = b 7:* rwm# loop*
  lxc.cgroup.memory.limit_in_bytes =  107374182400
  lxc.mount.auto = cgroup
 
  lxc.utsname = parallelu
  lxc.autodev = 1
  lxc.aa_profile = unconfined
 
  lxc.network.type=macvlan
  lxc.network.macvlan.mode=bridge
  lxc.network.link=eth1
  lxc.network.name = eth0
  lxc.network.flags = up
  lxc.network.hwaddr = 00:c8:a0:7d:84:cf
  lxc.network.ipv4 = 0.0.0.0/25
 
 
 
  Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
  default_listener_shutdown: pthread_join(): Cannot allocate memory
  [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:614
  __allocate_taskprocessor: Unable to start taskprocessor listener for
  taskprocessor 2ad8515c-c1eb-46ab-b53a-d63c84a56192
  [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
  default_listener_shutdown: pthread_join(): Cannot allocate memory
  [Nov  7 21:12:05] ERROR[1205]: taskprocessor.c:614
  __allocate_taskprocessor: Unable to start taskprocessor listener for
  taskprocessor 1fe67cd3-b65f-491a-aa59-a089dcba26a5
  [Nov  7 21:12:05] ERROR[1205]: taskprocessor.c:245
  default_listener_shutdown: pthread_join(): Cannot allocate memory
  [Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:614
  __allocate_taskprocessor

[lxc-users] Bug bug bug

2014-11-07 Thread CDR
There is something very wrong with LXC in general, it does not matter the
OS or even the kernel version. My OS is Ubuntu 14.04.
I have a Centos 6.6 container with mysql and 50 instances of Asterisk 12.0,
plus opensips.
The memory is limited to 100G, but it does not matter if I limit it or not.
It crashes when I start the 50 Asterisk processes.
The error message is below.

MySql starts fine and uses large-pages, memlocked. It uses 60G, so there is
plenty of available memory left.

 free -g
 total   used   free sharedbuffers cached
Mem:   177163 13  0  0 97
-/+ buffers/cache: 66110
Swap:   19  0 19



I tried the same container in Fedora 21 and the outcome is identical, and
it matters not if the technology is plain lxc or libvirt-lxc.

This is my containers config:
 lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults  0 0


lxc.tty = 4
lxc.pts = 1024
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rwm
lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
lxc.cgroup.devices.allow = b 7:* rwm# loop*
lxc.cgroup.memory.limit_in_bytes =  107374182400
lxc.mount.auto = cgroup

lxc.utsname = parallelu
lxc.autodev = 1
lxc.aa_profile = unconfined

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth1
lxc.network.name = eth0
lxc.network.flags = up
lxc.network.hwaddr = 00:c8:a0:7d:84:cf
lxc.network.ipv4 = 0.0.0.0/25



Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
default_listener_shutdown: pthread_join(): Cannot allocate memory
[Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:614
__allocate_taskprocessor: Unable to start taskprocessor listener for
taskprocessor 2ad8515c-c1eb-46ab-b53a-d63c84a56192
[Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
default_listener_shutdown: pthread_join(): Cannot allocate memory
[Nov  7 21:12:05] ERROR[1205]: taskprocessor.c:614
__allocate_taskprocessor: Unable to start taskprocessor listener for
taskprocessor 1fe67cd3-b65f-491a-aa59-a089dcba26a5
[Nov  7 21:12:05] ERROR[1205]: taskprocessor.c:245
default_listener_shutdown: pthread_join(): Cannot allocate memory
[Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:614
__allocate_taskprocessor: Unable to start taskprocessor listener for
taskprocessor 34d41f19-2936-4e0a-a626-ceb386ff3a1f
[Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:245
default_listener_shutdown: pthread_join(): Cannot allocate memory
[Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:614
__allocate_taskprocessor: Unable to start taskprocessor listener for
taskprocessor 204873a6-b595-4e82-ae02-0b2a3ee37fdc
[Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:245
default_listener_shutdown: pthread_join(): Cannot allocate memory
[Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:614
__allocate_taskprocessor: Unable to start taskprocessor listener for
taskprocessor 7711ffdc-57c6-48e2-8f43-3fe4b396c405
[Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
default_listener_shutdown: pthread_join(): Cannot allocate memory
[Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:614
__allocate_taskprocessor: Unable to start taskprocessor listener for
taskprocessor 3eed34af-a070-4c8b-96ee-c9e1f92756c8
[Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
default_listener_shutdown: pthread_join(): Cannot allocate memory
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Compilation fails in Fedora 21

2014-11-06 Thread CDR
I am running the server version, and I did install python3.
Before I issue make rpm, I can see that it detects python3 when
configure finishes.


On Wed, Nov 5, 2014 at 11:46 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Wed, 2014-11-05 at 23:09 -0500, CDR wrote:
 Requires: /bin/bash /bin/sh /usr/bin/python3 libc.so.6()(64bit)
 libc.so.6(GLIBC_2.15)(64bit) libc.so.6(GLIBC_2.2.5)(64bit)
 libc.so.6(GLIBC_2.3)(64bit) libc.so.6(GLIBC_2.3.4)(64bit)
 libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.8)(64bit)
 libcap.so.2()(64bit) liblxc.so.1()(64bit) libpthread.so.0()(64bit)
 libpthread.so.0(GLIBC_2.2.5)(64bit) libselinux.so.1()(64bit)
 libutil.so.1()(64bit) rtld(GNU_HASH)
 Processing files: lxc-libs-1.1.0-0.1.alpha2.fc21.x86_64
 error: File not found by glob:
 /root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha2.fc21.x86_64/usr/lib64/python3.3/site-packages/_lxc*
 error: File not found by glob:
 /root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha2.fc21.x86_64/usr/lib64/python3.3/site-packages/lxc/*

 RPM build errors:
 File listed twice: /etc/sysconfig/lxc
 File not found by glob:
 /root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha2.fc21.x86_64/usr/lib64/python3.3/site-packages/_lxc*
 File not found by glob:
 /root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha2.fc21.x86_64/usr/lib64/python3.3/site-packages/lxc/*
 Makefile:906: recipe for target 'rpm' failed
 make: *** [rpm] Error 1

 1) Fedora 21 is in BETA.  Expect breakage.  Thank you for reporting.
 Come back and play again.

 2) Beta - Use the packaged bundles or beta testing is pretty much
 useless.  I don't even have F21B downloaded yet.  Expect at least a week
 before I can test as a host platform.  I would be pleasantly surprised
 if someone beats me to it...

 3) It looks like you don't have Python3 installed.  Verify that the
 Python v3 packages are installed or all bets are off.  In Fedora 20,
 they came from rpmfusion (as does lxc*) and I have heard some discussion
 about whether Python3 would be included in F21 or not.  I haven't tried F21 yet.
 There may be problems with the detection or the make rpm defaults.
 I'm pretty sure we assume that Python3 is present on certain levels of
 Fedora, though I may be mistaken.
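
 A quick sanity check before make rpm, as a sketch; F21 shipped python
 3.4 by default, so a spec file hard-wired to python3.3 paths would
 fail with exactly this glob error, though that is just my guess:

 yum install -y python3 python3-devel
 python3 --version
 python3 -c 'import site; print(site.getsitepackages())'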

 4) Fedora 21 is a paradigm shift.  There are now branches for desktop
 and server.  What are you running and what have you installed on it?

 Regards,
 Mike
 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!



[lxc-users] Compilation fails in Fedora 21

2014-11-05 Thread CDR
Requires: /bin/bash /bin/sh /usr/bin/python3 libc.so.6()(64bit)
libc.so.6(GLIBC_2.15)(64bit) libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.3)(64bit) libc.so.6(GLIBC_2.3.4)(64bit)
libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.8)(64bit)
libcap.so.2()(64bit) liblxc.so.1()(64bit) libpthread.so.0()(64bit)
libpthread.so.0(GLIBC_2.2.5)(64bit) libselinux.so.1()(64bit)
libutil.so.1()(64bit) rtld(GNU_HASH)
Processing files: lxc-libs-1.1.0-0.1.alpha2.fc21.x86_64
error: File not found by glob:
/root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha2.fc21.x86_64/usr/lib64/python3.3/site-packages/_lxc*
error: File not found by glob:
/root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha2.fc21.x86_64/usr/lib64/python3.3/site-packages/lxc/*


RPM build errors:
File listed twice: /etc/sysconfig/lxc
File not found by glob:
/root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha2.fc21.x86_64/usr/lib64/python3.3/site-packages/_lxc*
File not found by glob:
/root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha2.fc21.x86_64/usr/lib64/python3.3/site-packages/lxc/*
Makefile:906: recipe for target 'rpm' failed
make: *** [rpm] Error 1

Re: [lxc-users] Using Docker Hub images with LXC

2014-10-21 Thread CDR
fantastic

On Tue, Oct 21, 2014 at 3:46 PM, Ranjib Dey dey.ran...@gmail.com wrote:
 awesome :-)

 On Tue, Oct 21, 2014 at 12:45 PM, Robin Monjo robinmo...@gmail.com wrote:

 Hi all,

 I built a tool to download raw root file systems from the Docker Hub so
 they can be used with LXC. Hope this may be useful for some of you.

 https://github.com/robinmonjo/dlrootfs

 Cheers,
 R.


[lxc-users] Networking macvlan has a low limit

2014-10-20 Thread CDR
I have reached a very low limit on a macvlan network. One physical
interface, eth0, returns an error:

lxc_conf - failed to set 'eth1' up : Device or resource busy
  lxc-start 1413870325.642 ERRORlxc_conf - failed to setup netdev
  lxc-start 1413870325.642 ERRORlxc_conf - failed to setup the
network for 'dialer-9'
  lxc-start 1413870325.642 ERRORlxc_start - failed to setup
the container
  lxc-start 1413870325.642 ERRORlxc_sync - invalid sequence
number 1. expected 2

lxc.network.type = macvlan
lxc.network.macvlan.mode=bridge
lxc.network.flags = up
lxc.network.link = eth0
lxc.network.name= eth1
lxc.network.hwaddr = 00:16:3e:08:92:c4
lxc.network.ipv4 = 0.0.0.0/24


This happens after I start only 6 containers sharing the same interface eth0.
How do I start more containers using the same interface and macvlan?
Note: this is Ubuntu 14.04, fully updated, with LXC daily.

Re: [lxc-users] Am I missing something?

2014-09-22 Thread CDR
You cannot have a macvlan bridge on a bridge interface, only on a real
Ethernet device, like eth0, eth1, etc.
If you want to use a bridge, then use:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name= eth1
lxc.network.hwaddr = 00:de:f0:ca:d4:32
lxc.network.ipv4 = 0.0.0.0/24


On Mon, Sep 22, 2014 at 6:51 PM, Erik Haller erik.hal...@gmail.com wrote:

 Here is my production configuration:

 lxc.network.type = macvlan
 lxc.network.macvlan.mode = bridge
 lxc.network.flags = up
 lxc.network.link = eth0
 lxc.network.ipv4 = 192.168.7.70/16
 lxc.network.ipv4.gateway = 192.168.7.1
 # ...# mounts point

 lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
 lxc.mount.entry = sysfs sys sysfs defaults  0 0

 # /lib/modules is needed for iptables/ufw
 lxc.mount.entry = /lib/modules /var/lib/lxc/lemon/rootfs/lib/modules none
 ro,bind 0 0
 # Nice to mount host home directories
 lxc.mount.entry = /home /var/lib/lxc/lemon/rootfs/home none rw,rbind 0 0

 # network interface name is limited to 16 chars
 lxc.hook.pre-start = /bin/sh -c exec mount -n -o remount,rw
 /var/lib/lxc/lemon/rootfs
 lxc.hook.pre-start = /bin/sh -c ip link add link eth0 name lemon type
 macvlan mode bridge  ip link set lemon up
 lxc.hook.pre-start = /bin/sh -c ip route add 192.168.7.70 dev lemon ||
 true

 lxc.hook.post-stop = /bin/sh -c ip route del 192.168.7.70 || true
 lxc.hook.post-stop = /bin/sh -c ip link set lemon down  ip link del
 lemon
 lxc.hook.post-stop = /bin/sh -c exec mount -n -o remount,rw
 /var/lib/lxc/lemon/rootfs

 Couple of notes:

    1. This is a Debian lxc 0.9.0-alpha3 system. Works fine with
    lxc-stop|lxc-start. It's been in production ~1 year.
2. Hostname: lemon, change hostname throughout.
3. Disable br0 bridge. Reboot. Try the above setup and get it running.
macvlan and older bridging may be incompatible in linux.
4. Change your lxc.network.link to eth0, do not use br0.
5. Don't enable ip_forward. I don't have it enabled.
6. Don't set the mac address. Remove lxc.network.hwaddr
7. Note: macvlan takes 10-30 seconds of pinging from a different host
after lxc-start. This is normal.



 On Mon, Sep 22, 2014 at 7:43 AM, Chris Kloiber ckloi...@cedardoc.com
 wrote:

  Trying to wrap my mind around the lxc networking. I need to configure
 each container with its own static IP on the same subnet as the host. I
 think that requires a “macvlan/bridge” setup like this:

   lxc.network.type = macvlan

 lxc.network.macvlan.mode = bridge

 lxc.network.flags = up

 lxc.network.link = br0

 lxc.network.ipv4 = 10.0.0.11/24 10.0.0.255

 lxc.network.ipv4.gateway = 10.0.0.1

 lxc-network.name = eth0

 lxc.network.mtu = 1500
 lxc.network.hwaddr= 00:16:3e:97:81:42

  But this goes nowhere. The host does have a properly configured br0
 device (this is an ol6 system, btw) and net.ipv4.ip_forward = 1 is
 enabled. The host iptables are disabled.

  I’ve been beating my head against this for a week now. Please help, or
 tell me what other information I can provide. Thank you.


  —

 *Chris Kloiber*

 *CEDAR Document Technologies*

 One Ravinia Drive, Suite 200

 Atlanta, GA 30346

 1(404)436-2470 (office)

 1(678)512-9636 (cell)




Re: [lxc-users] Compile fails under Fedora

2014-09-21 Thread CDR
This particular machine is using Fedora 20; hence, I mentioned make rpm.
I have both Ubuntu and Fedora in my company. I cannot make up my mind.


On Sun, Sep 21, 2014 at 7:08 AM, Fajar A. Nugraha l...@fajar.net wrote:
 For critical line-of-business use you normally wouldn't run a git snapshot.
 Unless you're a developer (which you already mentioned you're not).

 I'd suggest you either:
 - use whatever released version already packaged, or
 - learn how to fix it manually, or hire someone to do so (which should
 be very easy, just a couple lines of edit on the spec file, or in this
 case apply an already-submitted patch), or
 - use ubuntu with either official packages or daily ppa, which any
 normal ubuntu user should be able to do (no dev skill required)

 --
 Fajar

 On Sun, Sep 21, 2014 at 6:35 AM, CDR vene...@gmail.com wrote:
 This technology is being used on critical line-of-business
 applications, at least in my company.
 I wish that Stéphane or others would follow a more predictable
 patch-releasing schedule.


 On Sat, Sep 20, 2014 at 6:22 PM, Michael H. Warfield m...@wittsend.com 
 wrote:
 On Sat, 2014-09-20 at 03:23 -0400, CDR wrote:
 I did a git pull and when I issued a make rpm, it failed

 error: Installed (but unpackaged) file(s) found:
/usr/lib/systemd/system/lxc-net.service
 RPM build errors:
 File listed twice:

Re: [lxc-users] Compile fails under Fedora

2014-09-20 Thread CDR
This technology is being used on critical line-of-business
applications, at least in my company.
I wish that Stéphane or others would follow a more predictable
patch-releasing schedule.


On Sat, Sep 20, 2014 at 6:22 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Sat, 2014-09-20 at 03:23 -0400, CDR wrote:
 I did a git pull and when I issued a make rpm, it failed

 error: Installed (but unpackaged) file(s) found:
/usr/lib/systemd/system/lxc-net.service
 RPM build errors:
 File listed twice:
 /usr/lib64/python3.3/site-packages/_lxc-0.1-py3.3.egg-info
 File listed twice: /usr/lib64/python3.3/site-packages/_lxc.cpython-33m.so
 File listed twice: /usr/lib64/python3.3/site-packages/lxc/__init__.py
 File listed twice: /usr/lib64/python3.3/site-packages/lxc/__pycache__
 File listed twice:
 /usr/lib64/python3.3/site-packages/lxc/__pycache__/__init__.cpython-33.pyc
 File listed twice:
 /usr/lib64/python3.3/site-packages/lxc/__pycache__/__init__.cpython-33.pyo
 File listed twice: /usr/libexec/lxc/lxc-autostart-helper
 File listed twice: /usr/libexec/lxc/lxc-devsetup
 File listed twice: /usr/libexec/lxc/lxc-user-nic
 Installed (but unpackaged) file(s) found:
/usr/lib/systemd/system/lxc-net.service
 make: *** [rpm] Error 1

 Any idea how I can compile the software?

 You raised this issue on 08/09 for 1.1.0alpha1 under Fedora 20.  At that
 time yours was the second report and I was already looking into it.  You
 then raised the issue again on the -devel list on 09/05 for 1.1.0alpha1
 under CentOS 7 - exact same issue.  At that time the patches had already
 been submitted on 08/25 to Stéphane and were then under review.  I
 responded to you to that effect on 09/09 along with the reason for the
 original failure and a pointer to my patches that had been posted to the
 list.

 The patches are still being reviewed as he's been exceptionally busy
 lately and the patches are fairly involved and involved some
 disagreements in approach which were discussed in private E-Mail between
 the involved parties.

 The patches have not been committed to git master to date and he's
 working on integrating the changes.  As a consequence, the answer I gave
 to you on 09/09 on the -devel list remains the same and is equally
 applicable to 1.1.0alpha1 and to git master...

 i.e. ... You can either apply the patches I posted to the -devel list
 several weeks ago or you can wait for Stéphane to commit the fully
 integrated patches to git master.  At this time, applying my changes
 will result in some patch warnings due to others submitting some warning
 changes in parallel patches.

 I would recommend monitoring the -devel list for further (cough)
 developments (yes, pun intended).

 Regards,
 Mike

 On Tue, 2014-09-09 at 12:06 -0400, Michael H. Warfield wrote:
 This was due to a refactoring of the upstart init network code nearly a
 month ago by someone, AFAIK, not currently on the list, which created
 some files in an incorrect location and the creation of dependencies on
 it in the systemd code.  Patches for this faux pas have been submitted
 by me and Stéphane is currently evaluating my patch set to correct the
 problems that were created by the earlier submission by another that
 inadvertently broke all the rpm based systems.  This was reported
 several weeks ago and I submitted my fix, after some private discussion,
 on 08/25.

 Please review the following thread, starting on 08/25/2014, on this list
 for the patches and some discussion...


 Thanks,
 Philip


 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!



[lxc-users] Console hangs

2014-09-16 Thread CDR
In Ubuntu I have a container that boots fine,
but this does not work:
lxc-console -n ivr

Connected to tty 1
Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself

It hangs there forever.

What are the steps to diagnose the problem? The container is Fedora 20.
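
The only data-gathering steps I know of so far are the logging options
mentioned in other threads (a sketch; the log path is arbitrary):

lxc-start -n ivr -o /tmp/ivr.log -l DEBUG
lxc-attach -n ivr -- ps aux | grep getty    # is anything on the ttys?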

Yours
Philip

[lxc-users] Ubuntu behind Centos

2014-09-09 Thread CDR
I noticed that in CentOS 7 I have the command lxc-top, while in Ubuntu
14.04 there is no such command. I like Ubuntu, but why the issue with
this vital command?

Re: [lxc-users] Thrilled to announce the the launch of Flockport.com to this list

2014-09-09 Thread CDR
I already converted 100% of my company to virtualization + LXC. It is the
optimal technology mix: a few powerful virtual machines with several dozen
containers sharing the same kernel.


On Tuesday, September 9, 2014, Kevin LaTona li...@studiosola.com wrote:


 On Sep 9, 2014, at 12:49 PM, Tobby Banerjee to...@flockport.com wrote:

  Hi LXC users,
 
  I am extremely excited to announce the launch of Flockport.com to this
 list, its home so to speak.


 Sure looks like a great idea that has appeared at the right moment in
 LXC's timeline.

 Have to think that it will get more people using containers by giving them
 another option to choose from.

 All without having to deal with LXC's previous learning curve hurdles.

 Best of luck and thanks for making it happen.

 -Kevin

[lxc-users] Ubuntu 14 Container Fedora 20 ignoring most NICs

2014-09-05 Thread CDR
I have a Fedora 20 container with 10 nics
 lspci -bv | grep -i ethernet -A1
04:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 161
--
05:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 162
--
0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 192
--
0c:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 193
--
0d:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 194
--
13:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 224
--
14:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 225
--
15:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 226
--
1b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 256
--
1c:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 257

and
 lspci -nn|grep -i eth
04:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
05:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
0b:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
0c:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
0d:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
13:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
14:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
15:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
1b:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)
1c:00.0 Ethernet controller [0200]: VMware VMXNET3 Ethernet Controller
[15ad:07b0] (rev 01)

but Fedora loads only 2:
ls /sys/class/net
eth0  eth1  lo
This only happens in a Fedora 20 container, not on a real box or even
in a virtual machine.

What am I missing? All NICs are created with a different MAC and are
defined like this:
lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth0
lxc.network.name = eth0
lxc.network.flags = up
lxc.network.hwaddr = 00:ce:41:41:a3:3c
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link = eth1
lxc.network.name = eth1
lxc.network.flags = up
lxc.network.hwaddr = 00:ed:03:af:49:d0
lxc.network.ipv4 = 0.0.0.0/25

and so on.

The host is Ubuntu 14.04.
Your help is appreciated.

Re: [lxc-users] Ubuntu 14 Container Fedora 20 ignoring most NICs

2014-09-05 Thread CDR
Same issue, it only shows eth0 and eth1, with no error.
I think the problem is in the container.
I tried to compile LXC in CentOS 7, so I could make a container, but
compilation fails. I already reported the issue to the development
list.
In Ubuntu 14.04, I created a new CentOS 7 container from the downloadable
templates, but after a successful creation it does not start, and it
does not even give me an error message using --logfile.
I am caught between a rock and a hard place, it seems.


On Fri, Sep 5, 2014 at 1:21 PM, Serge Hallyn serge.hal...@ubuntu.com wrote:
 Quoting CDR (vene...@gmail.com):
 I have a Fedora 20 container with 10 nics
  lspci -bv | grep -i ethernet -A1

 If it is a container, then lspci output has nothing to do with what is
 in the container.  If the host has 10 nics and you want to pass them
 into the container, then do so with lxc.network.type = phys entries.
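
 e.g., one stanza per host NIC you want moved in (a sketch; the device
 name is a placeholder):

 lxc.network.type = phys
 lxc.network.link = eth2
 lxc.network.name = eth2
 lxc.network.flags = up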



Re: [lxc-users] Ubuntu 14 Container Fedora 20 ignoring most NICs

2014-09-05 Thread CDR
Same compilation error under Fedora 20.
By any chance do you have a workaround?
I need to replace a physical server and I need a Red Hat-compatible container.

Checking for unpackaged file(s): /usr/lib/rpm/check-files
/root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha1.fc20.x86_64
error: Installed (but unpackaged) file(s) found:
   /usr/lib/systemd/system/lxc-net.service


RPM build errors:
File listed twice:
/usr/lib64/python3.3/site-packages/_lxc-0.1-py3.3.egg-info
File listed twice: /usr/lib64/python3.3/site-packages/_lxc.cpython-33m.so
File listed twice: /usr/lib64/python3.3/site-packages/lxc/__init__.py
File listed twice: /usr/lib64/python3.3/site-packages/lxc/__pycache__
File listed twice:
/usr/lib64/python3.3/site-packages/lxc/__pycache__/__init__.cpython-33.pyc
File listed twice:
/usr/lib64/python3.3/site-packages/lxc/__pycache__/__init__.cpython-33.pyo
File listed twice: /usr/libexec/lxc/lxc-autostart-helper
File listed twice: /usr/libexec/lxc/lxc-devsetup
File listed twice: /usr/libexec/lxc/lxc-user-nic
Installed (but unpackaged) file(s) found:
   /usr/lib/systemd/system/lxc-net.service
make: *** [rpm] Error 1



On Fri, Sep 5, 2014 at 1:39 PM, CDR vene...@gmail.com wrote:
 Same issue, it only shows eth0 and eth1, with no error.
 I think the problem is in the container.
 I tried to compile LXC in CentOS 7, so I could make a container, but
 compilation fails. I already reported the issue to the development
 list.
 In Ubuntu 14.04, I created a new CentOS 7 container from the downloadable
 templates, but after a successful creation it does not start, and it
 does not even give me an error message using --logfile.
 I am caught between a rock and a hard place, it seems.


 On Fri, Sep 5, 2014 at 1:21 PM, Serge Hallyn serge.hal...@ubuntu.com wrote:
 Quoting CDR (vene...@gmail.com):
 I have a Fedora 20 container with 10 nics
  lspci -bv | grep -i ethernet -A1

 If it is a container, then lspci output has nothing to do with what is
 in the container.  If the host has 10 nics and you want to pass them
 into the container, then do so with lxc.network.type = phys entries.



Re: [lxc-users] Ubuntu 14 Container Fedora 20 ignoring most NICs

2014-09-05 Thread CDR
Good, many thanks
I am using Fedora 20 containers.
The real issue left to solve is the kernel 3.16 bug, which makes the
host freeze when you shut down a container. It does not let go of
network devices.
It does not happen in 3.15.10. I hope somebody is working on it.


On Fri, Sep 5, 2014 at 8:58 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Fri, 2014-09-05 at 19:22 +, Serge Hallyn wrote:
 Quoting CDR (vene...@gmail.com):
  Same issue, it only shows eth0 and eth1, with no error.
  I think the problem is in the container.
  I tried to compile LXC in CentOS 7, so I could make a container, but
  compilation fails. I already reported the issue to the development
  list.
  In Ubuntu 14.04, I created a new CentOS 7 container from the downloadable
  templates, but after a successful creation it does not start, and it

 I don't know what 'downloadable templates' means; the download
 template only supports centos 6.  As I recall, centos 7 is a
 problem due to systemd issues which have not been worked out.

 I almost have fixes for the CentOS7 stuff (along with several others)
 ready.  Will be in the early part of next week.  Was integrating the
 kmsg problem and refactoring the password logic since they all center on
 the same template(s).

  does not even give me an error message using --logfile.
  I am caught between a rock and a hard place, it seems.
 
 
  On Fri, Sep 5, 2014 at 1:21 PM, Serge Hallyn serge.hal...@ubuntu.com 
  wrote:
   Quoting CDR (vene...@gmail.com):
   I have a Fedora 20 container with 10 nics
lspci -bv | grep -i ethernet -A1
  
   If it is a container, then lspci output has nothing to do with what is
   in the container.  If the host has 10 nics and you want to pass them
   into the container, then do so with lxc.network.type = phys entries.
  
  


 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!



Re: [lxc-users] lxc-top packaging

2014-08-28 Thread CDR
It is impossible to manage a large installation of LXC containers
without lxc-top.
My impression is that the honorable engineers are separated from the
daily struggle of us businessmen.

On Thu, Aug 28, 2014 at 12:58 PM, Stéphane Graber stgra...@ubuntu.com wrote:
 I think Dwight mostly wrote lxc-top as an example of what can be done
 using his lua binding, so that's why it's in lua :)

 On Thu, Aug 28, 2014 at 06:51:58PM +0200, Tamas Papp wrote:
 Just out of curiosity, why Lua and not Python?


 On August 28, 2014 4:59:02 PM Stéphane Graber stgra...@ubuntu.com wrote:

 On Thu, Aug 28, 2014 at 04:22:03PM +0200, Tamas Papp wrote:
 
  On 08/28/2014 04:10 PM, Stéphane Graber wrote:
  On Thu, Aug 28, 2014 at 11:15:14AM +0200, Tamas Papp wrote:
  hi All,
  
  Why is it not built and included in the (ppa) package?
  What about the lua bindings?
  lxc-top is included in the daily and daily-stable-1.0 PPAs if you
  install the lua-lxc package.
  
  It's however not in the stable PPA or in the Ubuntu Archive because some
  of its dependencies aren't packaged and therefore had to be directly
  bundled within the package (which is fine for a PPA but isn't policy
  compliant with the Ubuntu archive).
  
 
  Does it mean it will never be in the stable ppa?
 
 The stable PPA is a clean backport of what's in the Ubuntu archive, so
 once we have the missing lua dependencies in Ubuntu itself and can build
 lua-lxc there, then it'll get into stable.
 
 I have however no idea of when that may happen, last I checked, there
 was just a request for package in Debian for lua getopt which when it
 gets done should resolve our problem.
 
 
  t
 
 --
 Stéphane Graber
 Ubuntu developer
 http://www.ubuntu.com
 
 
 



 --
 Stéphane Graber
 Ubuntu developer
 http://www.ubuntu.com


Re: [lxc-users] Cannot create a macvlan private bridge on lx

2014-08-13 Thread CDR
Do a real bridge on the host and use it on both the VM and the
container. I do it all the time.
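
A minimal sketch of what I mean (Debian/Ubuntu ifupdown style; the
interface names are assumptions, adjust to your setup):

# /etc/network/interfaces on the host
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off

# container config pointing at the same bridge
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0

Then attach the VM's NIC to br0 in the hypervisor as well, and the VM
and the container talk over the same segment without going through the
host's routing.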

On Wed, Aug 13, 2014 at 1:25 PM, Anjali Kulkarni anj...@juniper.net wrote:
 Thanks - is there any way to do a private bridge between a VM and a
 container, so that they can communicate? What's the use case of using
 macvlan on a real nic?

 Anjali

 On 8/13/14 9:35 AM, Serge Hallyn serge.hal...@ubuntu.com wrote:

You can't do macvlan on a bridge.  It has to be done on a real
physical NIC.

Quoting Anjali Kulkarni (anj...@juniper.net):

 Hi,

 We are trying to have a VM and a container ping each other via a private
 bridge (not going through the host) via a macvlan interface. A bridge,
 lxcbr1, is already created and contains a link from the VM, and we want to
 add the container to it as well.
 On adding the following config to the container, the error shown below is
 seen; any tips on how to fix this issue?

 Config:
 lxc.network.type = macvlan
 lxc.network.macvlan.mode = bridge
 lxc.network.flags = down
 lxc.network.name = eth0
 lxc.network.link = lxcbr1
 lxc.network.ipv4 = 1.1.1.1/24


 Error seen:
 lxc-start: failed to move 'lxcbr1' to the container : Invalid argument
 lxc-start: failed to create the configured network
 lxc-start: failed to spawn 'test'
 lxc-start: The container failed to start.
 lxc-start: Additional information can be obtained by setting the
 --logfile and --log-priority options.

 Thanks
 Anjali







[lxc-users] Compilation fails under Fedora 20

2014-08-09 Thread CDR
make rpm fails.

Processing files: lxc-debuginfo-1.1.0-0.1.alpha1.fc20.x86_64
Provides: lxc-debuginfo = 1.1.0-0.1.alpha1.fc20 lxc-debuginfo(x86-64)
= 1.1.0-0.1.alpha1.fc20
Requires(rpmlib): rpmlib(FileDigests) = 4.6.0-1
rpmlib(PayloadFilesHavePrefix) = 4.0-1 rpmlib(CompressedFileNames) =
3.0.4-1
Checking for unpackaged file(s): /usr/lib/rpm/check-files
/root/rpmbuild/BUILDROOT/lxc-1.1.0-0.1.alpha1.fc20.x86_64
error: Installed (but unpackaged) file(s) found:
   /usr/lib/systemd/system/lxc-net.service


RPM build errors:
File listed twice:
/usr/lib64/python3.3/site-packages/_lxc-0.1-py3.3.egg-info
File listed twice: /usr/lib64/python3.3/site-packages/_lxc.cpython-33m.so
File listed twice: /usr/lib64/python3.3/site-packages/lxc/__init__.py
File listed twice: /usr/lib64/python3.3/site-packages/lxc/__pycache__
File listed twice:
/usr/lib64/python3.3/site-packages/lxc/__pycache__/__init__.cpython-33.pyc
File listed twice:
/usr/lib64/python3.3/site-packages/lxc/__pycache__/__init__.cpython-33.pyo
File listed twice: /usr/libexec/lxc/lxc-autostart-helper
File listed twice: /usr/libexec/lxc/lxc-devsetup
File listed twice: /usr/libexec/lxc/lxc-user-nic
Installed (but unpackaged) file(s) found:
   /usr/lib/systemd/system/lxc-net.service
make: *** [rpm] Error 1

Re: [lxc-users] How to cancel lxc-autostart

2014-08-09 Thread CDR
This is a philosophical divide. I live in the real world, and am
successfully moving all my business to LXC, or a combination of LXC
and real virtualization, where you have a few virtual machines with
hundreds of GBs of RAM and 36 or more cores, and these super-virtual
machines act solely as containers-of-containers. It means that my
virtual machines have so many autostart containers that it takes 30
minutes to stop them all in a loop. When for some reason I need to
start the machines and do not need all the containers starting, the
only way is to boot in single-user mode. Why? There should be a way to
stop the storm in its tracks, like
cat 0 > /proc/lxc/autostart
so that I could quickly stop the few containers that had already started.
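
In the meantime, the closest I can get is stopping the ones that
already came up, in parallel (a sketch; it assumes the container name
is the first column of lxc-autostart -L output):

for c in $(lxc-autostart -L | awk '{print $1}'); do
    lxc-stop -n "$c" -t 30 &
done
wait

At least the stops then take as long as the slowest container instead
of the sum of all of them.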
I see a world coming where every living corporation will be using a
combination of Virtualization plus LXC.
Philip


On Sat, Aug 9, 2014 at 8:27 AM, brian mullan bmullan.m...@gmail.com wrote:
 I've been reading this thread and this is the first and only time I've ever
 heard anyone request such a kill all command for LXC to terminate
 auto-start.

 Developer time is always in short supply and IMHO asking one of them to
 spend their time on such a corner-case issue is not putting their efforts
 to good use.

 There have been 2 alternatives proposed that would seem to handle this
 event, and in my opinion that should be sufficient.

 LXC 1.x has a lot of important work going on and I'd rather see people
 focused on the existing roadmap or on addressing critical bugs.

 Of course it's all Open Source, so anyone who can't live without such a
 feature could either contribute the patches themselves or offer a bounty to
 have it done for them.

 again just my opinion

 Brian



Re: [lxc-users] How to cancel lxc-autostart

2014-08-09 Thread CDR
That is correct, but why not a command called lxc-cancel-autostart?
It seems obvious.


On Sat, Aug 9, 2014 at 10:57 AM, Tom Weber
l_lxc-us...@mail2news.4t2.com wrote:
 Everything is there already, even in the real world.
 You could:
 - define a run level for this purpose
 - delay the autostart
 - run your own script during bootup which asks you whether it should kick
 off the lxc-autostart process or not - it might default to yes after a
 timeout if no input occurs
 - create your own script which would check the grub command line for a
 nolxcstartup parameter (see the sketch below)
 ...
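
 A sketch of that last variant (the wrapper and the parameter name are
 just examples, nothing shipped with LXC):

 #!/bin/sh
 # run this instead of plain lxc-autostart at boot
 if grep -qw nolxcstartup /proc/cmdline; then
     echo "container autostart cancelled from the kernel command line"
     exit 0
 fi
 exec lxc-autostart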

 There are plenty of ways which are way better than firing a bullet and
 then requesting a feature to cancel it.
 All of them are rather trivial to implement. Any professional admin
 hosting 300 containers should be able to do it. Yet you don't seem to even
 have tried any of these solutions.

   Tom

 On Saturday, 2014-08-09, at 10:32 -0400, CDR wrote:
 This is a philosophical divide. I live in the real world, and am
 successfully moving all my business to LXC, or a combination of LXC
 and real virtualization, where you have a few virtual machines with
 hundreds of GBs of RAM and 36 or more cores, and these super-virtual
 machines act solely as containers-of-containers. It means that my
 virtual machines have so many autostart containers that it takes 30
 minutes to stop them all in a loop. When for some reason I need to
 start the machines and do not need all the containers starting, the
 only way is to boot in single-user mode. Why? There should be a way to
 stop the storm in its tracks, like
 cat 0 > /proc/lxc/autostart
 so that I could quickly stop the few containers that had already started.
 I see a world coming where every living corporation will be using a
 combination of Virtualization plus LXC.
 Philip


 On Sat, Aug 9, 2014 at 8:27 AM, brian mullan bmullan.m...@gmail.com wrote:
  I've been reading this thread and this is the first and only time I've ever
  heard anyone request such a kill all command for LXC to terminate
  auto-start.
 
  Developer time is always in short supply and IMHO asking one of them to
  spend their time on such a corner-case issue is not putting their efforts
  to good use.
 
  There have been 2 alternatives proposed that would seem to handle this
  event, and in my opinion that should be sufficient.
 
  LXC 1.x has a lot of important work going on and I'd rather see people
  focused on the existing roadmap or on addressing critical bugs.
 
  Of course it's all Open Source, so anyone who can't live without such a
  feature could either contribute the patches themselves or offer a bounty to
  have it done for them.
 
  again just my opinion
 
  Brian
 
 

[lxc-users] How to cancel lxc-autostart

2014-08-08 Thread CDR
Dear Friends
I am using Ubuntu 14.04 and LXC latest. When the machine boots I would
like to cancel lxc-autostart, since I have a lot of containers and I
need to fix something first.
Is there a way? If not, maybe we could add a new command for that.
Also, I still cannot install lxc-top, to see which container is eating
my resources.
Philip

Re: [lxc-users] How to cancel lxc-autostart

2014-08-08 Thread CDR
Suppose you manage a box with 300 containers, all on autostart=1. One day
you reboot the box but you need to prevent all the containers from starting. There
should be a command like
lxc-cancel-autostart.
Does it make sense?


On Friday, August 8, 2014, Harald Dunkel harald.dun...@aixigo.de wrote:

 I am not familiar with Ubuntu's setup, but assuming it supports
 sysv-init, I would suggest omitting lxc in a dedicated run level.

 If your default run level is 2 (specified in /etc/inittab), then
 you could use update-rc.d to omit lxc in run level 3, e.g.

 # update-rc.d lxc start 20 2 4 5 . stop 20 0 1 3 6 .

 This means lxc is started in run levels 2, 4 and 5, and
 stopped in 0, 1, 3 and 6.

 If you need to boot without starting the containers, then you
 can choose run level 3 on the kernel command line at boot time,
 e.g.
 linux /boot/vmlinuz root= ... quiet 3

 grub2 allows you to modify the kernel command line before booting.
 Using telinit you can change the run level at run time, e.g.
 'telinit 2' to switch to run level 2 (to start your containers).


 Hope this helps
 Harri


Re: [lxc-users] How to cancel lxc-autostart

2014-08-08 Thread CDR
After I reboot, the LXC service will automatically start all the
containers marked for auto-start. I have not found a way to stop that
mechanism if I absolutely need the containers not to start.
Suppose I need to unmount the partition to issue an fsck, etc. How do
I preempt the automatic behavior?
It is something like the Iron Dome: hundreds of containers are in the
air and will be started; how do you override this behavior?

On Fri, Aug 8, 2014 at 4:22 PM, Łukasz Górski l.gor...@lokis.info wrote:
 Could you perhaps clarify when exactly you want this command to be
 invoked? Before the reboot or after? If after, perhaps I am mistaken, but it
 doesn't seem to make any sense whatsoever. I suppose you'd need to run it in
 a specific timeframe before the containers are started - how much time does
 it take from the moment you get a working shell after the reboot to the
 point when the container booting process starts? If you want to run it
 before the reboot, then I guess some shell scripting would most likely do
 to disable autostart and then re-enable it again after the reboot.

 Best regards,
 Łukasz Górski
 LOKIS Customer Service Office
 www.lokis.info

 ---
 Please note that we are carrying out the project "Modernization of the
 internal servers and the main server together with configuration".
 Grant application no.: WND-RPPM – 01. – 01.-00 - 363 /08. The project is
 co-financed by ERDF funds and the state budget.
 More information about the application and the calls is at www.arp.gda.pl



 2014-08-08 21:21 GMT+02:00 CDR vene...@gmail.com:

 Suppose you manage a box with 300 containers, all on autostart=1. One day
 you reboot the box but you need to prevent all the containers from starting. There
 should be a command like
 lxc-cancel-autostart.
 Does it make sense?


 On Friday, August 8, 2014, Harald Dunkel harald.dun...@aixigo.de wrote:

 I am not familiar with Ubuntu's setup, but assuming it supports
 sysv-init, I would suggest omitting lxc in a dedicated run level.

 If your default run level is 2 (specified in /etc/inittab), then
 you could use update-rc.d to omit lxc in run level 3, e.g.

 # update-rc.d lxc start 20 2 4 5 . stop 20 0 1 3 6 .

 This means lxc is started in run levels 2, 4 and 5, and
 stopped in 0, 1, 3 and 6.

 If you need to boot without starting the containers, then you
 can choose run level 3 on the kernel command line at boot time,
 e.g.
 linux /boot/vmlinuz root= ... quiet 3

 grub2 allows you to modify the kernel command line before booting.
 Using telinit you can change the run level at run time, e.g.
 'telinit 2' to switch to run level 2 (to start your containers).


 Hope this helps
 Harri


Re: [lxc-users] Centos-7 x86_64 container - systemd-journald process eat CPU

2014-07-30 Thread CDR
I support having a bugzilla for LXC. Also, if anybody can analyze it
properly: what is the difference between Docker and LXC?
Philip

On Wed, Jul 30, 2014 at 6:32 AM, mxs kolo koloma...@gmail.com wrote:
 Hi
 AFA CentOS 7 in general goes - I have not yet had the time to
 incorporate the systemd logic from the Fedora template into the CentOS
 template.  I just got back from being out of pocket for the last three
 weeks (and still completely not over being 6 hours jet lagged) and I'm
 just starting to look at it.
 Good news for CentOS 7 users, and perhaps you can check the other CentOS
 bugs in LXC that I reported:
 1) Problem installing CentOS from the templates with an LVM backing store:
   the template accepts the variable $rootfs in getopt() but everywhere
 inside uses the variable $rootfs_path (centos6 and 7). As a result, rsync
 tries to copy files onto the raw device /dev/LG/Lname.
 2) On CentOS 7 the cgroup is mounted inside the container from the
 hardware node, and probably because of that, limits do not work inside
 the container.
 3) Sometimes agetty does not finish after stopping the container and
 starts eating CPU. Moreover, the hardware node(!) shows the console
 prompt from the CentOS 7 LXC container. If I get time, I will run
 additional tests to detect the conditions where such interference occurs.

 P.S.
  And maybe the LXC project has its own bugzilla?

 b.r.
  Maxim Kozin

Re: [lxc-users] Centos-7 x86_64 container - systemd-journald process eat CPU

2014-07-30 Thread CDR
I have had great success with Fedora 20 templates. My scheme in
production is: Ubuntu 14.04 as the server and Fedora 20 as the container.

On Wed, Jul 30, 2014 at 7:16 AM, Michael H. Warfield m...@wittsend.com wrote:
 On Wed, 2014-07-30 at 14:32 +0400, mxs kolo wrote:
 Hi
  AFA CentOS 7 in general goes - I have not yet had the time to
  incorporate the systemd logic from the Fedora template into the CentOS
  template.  I just got back from being out of pocket for the last three
  weeks (and still completely not over being 6 hours jet lagged) and I'm
  just starting to look at it.
 Good news for CentOS 7 users, and perhaps you can check the other CentOS
 bugs in LXC that I reported:
 1) Problem installing CentOS from the templates with an LVM backing store:
   the template accepts the variable $rootfs in getopt() but everywhere
 inside uses the variable $rootfs_path (centos6 and 7). As a result, rsync
 tries to copy files onto the raw device /dev/LG/Lname.

 That's another issue that needs to be addressed.  I already intend to
 review what's been done vis-a-vis the rootfs_path variable in the
 download template and probably retrofit that into the CentOS, Fedora,
 and OpenSuse templates.
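
 (For anyone bitten by it in the meantime, the stopgap the report implies
 would be a one-liner right after the option parsing in the template,
 untested and with the exact context assumed:

 rootfs_path=${rootfs:-$rootfs_path}

 so that the two variable names stay in sync.)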

 2) On CentOS 7 the cgroup is mounted inside the container from the
 hardware node, and probably because of that, limits do not work inside
 the container.

 I'm deferring that until the systemd work is integrated.  I suspect that
 will become a non-issue since it's not an issue with Fedora, which
 already has that rework incorporated.

 3) Sometimes agetty does not finish after stopping the container and
 starts eating CPU. Moreover, the hardware node(!) shows the console
 prompt from the CentOS 7 LXC container. If I get time, I will run
 additional tests to detect the conditions where such interference occurs.

 Same remark.  Some of the systemd rework includes setting up the
 [am]getty services.

 P.S.
  And maybe the LXC project has its own bugzilla?

 b.r.
  Maxim Kozin

 Regards,
 Mike
 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!



[lxc-users] Centos 7

2014-07-18 Thread CDR
Dear friends
I moved a working container from Ubuntu 14.04 to Centos 7, and
networking does not work.
Previously, I had compiled and installed LXC from git, make rpm:

rpm -qa | grep lxc
lxc-debuginfo-1.1.0-0.1.alpha1.el7.centos.x86_64
lxc-1.1.0-0.1.alpha1.el7.centos.x86_64
lxc-libs-1.1.0-0.1.alpha1.el7.centos.x86_64
lxc-devel-1.1.0-0.1.alpha1.el7.centos.x86_64
libvirt-daemon-driver-lxc-1.1.1-29.el7.x86_64

The container's network definition is

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth0
lxc.network.name = eth0
lxc.network.flags = up
lxc.network.hwaddr = 00:D0:41:7B:B7:D5
lxc.network.ipv4 = 0.0.0.0/24

I checked with lxc-checkconfig and everything is green

Also, lxc.autodev = 1

Since Centos 7 is going to be the most popular free OS soon, we should
make this work.

Philip

Re: [lxc-users] Issue with lxc-top command in Fedora

2014-07-04 Thread CDR
How do I install lxc-top? I am using Ubuntu Server 14.04 and LXC
1.0.4, from the PPA.
It is not there, and I could hardly imagine a more useful application.
Philip

On Fri, Jul 4, 2014 at 8:37 AM, Thomas Moschny thomas.mosc...@gmail.com wrote:
 2014-07-04 6:48 GMT+02:00 Ajith Adapa ajith.ad...@gmail.com:
 # lxc-top -h
 lua: /usr/bin/lxc-top:24: module 'lxc' not found:

 Have you installed lua-lxc? We might have a missing dependency there.
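
 If not, installing it should be all that is needed (package name as
 above; the Ubuntu line assumes the daily/daily-stable-1.0 PPA from the
 lxc-top packaging thread):

 yum install lua-lxc        # Fedora
 apt-get install lua-lxc    # Ubuntu 14.04 with the PPA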

 - Thomas

Re: [lxc-users] Container not started

2014-06-19 Thread CDR
I disable selinux on the kernel line for all my boxes.
I also disable apparmor on Ubuntu servers, using the kernel line.
This issue is difficult to explain.
Anyway, I already erased the Fedora virtual machine and installed an
Ubuntu virtual machine, and I am in production.
But there is a big issue hidden here.
If somebody has a Fedora 20 virtual machine and wants to reproduce
it, I am more than happy to upload my container. It contains no
proprietary code.


On Thu, Jun 19, 2014 at 11:11 AM, Michael H. Warfield m...@wittsend.com wrote:
 On Tue, 2014-06-17 at 09:42 -0400, CDR wrote:
 I already created a new Ubuntu host and the container works fine.
 The question is: we have been living under the assumption that
 containers act like virtual machines, and that you may move them from
 host to host.
 That is not the case, I can see. A Fedora 20 container created under
 Ubuntu will never start under a Fedora 20 host.
 In my opinion this is a big flaw. Containers built by libvirt are
 truly portable; I have already verified that.
 I think we should fix this.

 Couple of things to check and try.  I just ran into a nasty corner case
 with Ubuntu running on a Fedora 20 host where the Fedora 20 system was
 in selinux permissive mode and caused all kinds of grief in the Ubuntu
 Trusty container.

 Since your problem is Fedora on Fedora, check your selinux settings in
 the host and in the container...

 /etc/selinux/config
 /selinux/enforcing

 If they are NOT the same between host and container, make them the same and 
 retest.

 If your host is in enforcing mode or permissive mode, try switching
 it to disabled.  The Fedora template sets up containers set to
 disabled by default.  I cringe at making that a recommendation but we
 should, at least, test at that level.
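
 A quick way to compare the two (paths as above; setenforce only changes
 the running system, the config file makes it persistent):

 getenforce
 grep '^SELINUX=' /etc/selinux/config
 setenforce 0    # host, temporary, for testing only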

 Regards,
 Mike

 On Tue, Jun 17, 2014 at 9:23 AM, Serge Hallyn serge.hal...@ubuntu.com 
 wrote:
  Quoting CDR (vene...@gmail.com):
  I copied an LXC container from Ubuntu Server to Fedora 20 and when I
  start it I get
  lxc-start -n masterfe
  systemd 208 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX
  +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ)
  Detected virtualization 'lxc'.
 
  Welcome to Fedora 20 (Heisenbug)!
 
  Set hostname to masterfe.
  No control group support available, not creating root group.
  [  OK  ] Reached target Remote File Systems.
  Socket service systemd-journald.service not loaded, refusing.
  [FAILED] Failed to listen on Journal Socket.
 
  Two suggestions for investigating:
 
  1. create a new fedora container on the ubuntu host, see if it
  has the same behavior.
 
  2. Look at the systemd source and see under what conditions the two
  lines above occur.
 

 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!



[lxc-users] Container not started

2014-06-16 Thread CDR
I copied an LXC container from Ubuntu Server to Fedora 20 and when I
start it I get
lxc-start -n masterfe
systemd 208 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX
+IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ)
Detected virtualization 'lxc'.

Welcome to Fedora 20 (Heisenbug)!

Set hostname to masterfe.
No control group support available, not creating root group.
[  OK  ] Reached target Remote File Systems.
Socket service systemd-journald.service not loaded, refusing.
[FAILED] Failed to listen on Journal Socket.
See 'systemctl status systemd-journald.socket' for details.
 Mounting RPC Pipe File System...
Caught SEGV, dumped core as pid 13.
Freezing execution.

If I check this error message:
systemctl status systemd-journald.socket
systemd-journald.socket - Journal Socket
   Loaded: loaded (/usr/lib/systemd/system/systemd-journald.socket; static)
   Active: active (running) since Mon 2014-06-16 16:19:14 EDT; 3h 31min ago
 Docs: man:systemd-journald.service(8)
   man:journald.conf(5)
   Listen: /run/systemd/journal/stdout (Stream)
   /run/systemd/journal/socket (Datagram)
   /dev/log (Datagram)



And the kernel's configuration is right


 lxc-checkconfig
Kernel configuration found at /boot/config-3.14.7-200.fc20.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

This is my config, except networking:

lxc.start.auto = 0
lxc.start.delay = 5
lxc.start.order = 1

lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults  0 0

lxc.mount.entry = /usr/src /usr/src none bind 0 0


lxc.tty = 4
lxc.pts = 1024
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rwm
lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
lxc.cgroup.devices.allow = b 7:* rwm# loop*
#lxc.cgroup.memory.limit_in_bytes = 2536870910
lxc.mount.auto = cgroup

lxc.utsname = masterfe
lxc.autodev = 1
#lxc.aa_profile = unconfined


lxc.rootfs = /var/lib/lxc/masterfe/rootfs

Re: [lxc-users] LXC 1.0.4 has been released!

2014-06-13 Thread CDR
Thanks
I did not mean to be rude; I apologize.
You mentioned massive changes; care to elaborate?


On Fri, Jun 13, 2014 at 8:40 PM, Stéphane Graber stgra...@ubuntu.com wrote:
 You are always welcome to switch to another distro :)

 For Ubuntu, we care about shipping well tested software and not causing
 regressions when pushing post-release updates, as a result every update
 has to wait at least a week in staging before it's pushed to the rest of
 our users. As LXC 1.0.4 is a pretty massive change, the code review
 required for it to even enter that staging area will also take a while.

 On Fri, Jun 13, 2014 at 08:28:16PM -0400, CDR wrote:
I am worried about the few weeks part. If I wanted slow-updating
server software, I could stay with Scientific Linux.

 On Fri, Jun 13, 2014 at 8:15 PM, J Bc jav...@esdebian.org wrote:
  Good work. thank you.
 
  2014-06-13 19:57 GMT+02:00 Stéphane Graber stgra...@ubuntu.com:
  Ubuntu will get 1.0.4 in the next few weeks, no need to manually update 
  it.
 
  On Fri, Jun 13, 2014 at 01:35:52PM -0400, CDR wrote:
  I am using ubuntu server 14.04 and the lxc repository, should I wait
  until this is updated or should I go ahead and manually update it?
 
  On Fri, Jun 13, 2014 at 1:23 PM, Stéphane Graber stgra...@ubuntu.com 
  wrote:
   Doh, just noticed I forgot to change the subject line to actually say
   1.0.4 instead of 1.0.3 ...
  
   On Fri, Jun 13, 2014 at 01:17:50PM -0400, Stéphane Graber wrote:
   Hello everyone,
  
   The fourth LXC 1.0 bugfix release is now out!
  
   This includes over two months worth of bugfixes contributed by 14
   individual developers.
  
  
   As usual, the full announcement and changelog may be found at:
   https://linuxcontainers.org/news/
  
   And our tarballs can be downloaded from:
   https://linuxcontainers.org/downloads/
  
  
   As a reminder, LXC upstream is planning on maintaining the LXC 1.0
   branch with frequent bugfix and security updates until April 2019.
  
  
   Stéphane Graber
   On behalf of the LXC development team
  
  
   --
   Stéphane Graber
   Ubuntu developer
   http://www.ubuntu.com
  
 
  --
  Stéphane Graber
  Ubuntu developer
  http://www.ubuntu.com
 

 --
 Stéphane Graber
 Ubuntu developer
 http://www.ubuntu.com


[lxc-users] Ubuntu Server errors on Fedora 20 container start

2014-06-07 Thread CDR
I moved a Fedora 20 privileged container from one server to another using
rsync -qarlpt --sparse
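
For reference, a complete invocation of that form would look like this
(the destination host is hypothetical; --numeric-ids is worth adding so
that uid/gid ownership survives the copy):

rsync -qarlpt --sparse --numeric-ids /var/lib/lxc/fedora-1/ \
    root@newhost:/var/lib/lxc/fedora-1/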

and now when the container starts I get the messages below:

Failed to insert module 'autofs4'
Set hostname to fedora-1.
Failed to install release agent, ignoring: File exists
Socket service systemd-journald.service not loaded, refusing.
[FAILED] Failed to listen on Journal Socket.
See 'systemctl status systemd-journald.socket' for details.
 Mounting RPC Pipe File System...
 Mounting RPC Pipe File System...
Failed to open /dev/autofs: No such file or directory
Failed to initialize automounter: No such file or directory
[FAILED] Failed to set up automount Arbitrary Executable File Formats
File System Automount Point.
See 'systemctl status proc-sys-fs-binfmt_misc.automount' for details.
Unit proc-sys-fs-binfmt_misc.automount entered failed state.

systemd-journal-flush.service: main process exited, code=exited,
status=1/FAILURE
[FAILED] Failed to start Trigger Flushing of Journal to Persistent Storage.


<38>systemd-logind[80]: New seat seat0.
<36>systemd-logind[80]: Failed to open event0: No such file or directory
<27>systemd-udevd[39]: inotify_add_watch(7, /dev/loop3, 10) failed: No
such file or directory
<27>systemd-udevd[37]: <27>inotify_add_watch(7, /dev/loop1, 10)
failed: No such file or directory
systemd-udevd<27>systemd-udevd[35]: inotify_add_watch(7, /dev/ram10,
10) failed: No such file or directory
[49]: inotify_add_watch(7, /dev/ram15, 10) failed: No such file or directory
<27><27>systemd-udevd[57]: inotify_add_watch(7, /dev/ram9, 10) failed:
No such file or directory
systemd-udevd[48]: inotify_add_watch(7, /dev/ram13, 10) failed: No
such file or directory
<27>systemd-udevd[50]: inotify_add_watch(7, /dev/ram2, 10) failed: No
such file or directory
<27>systemd-udevd[40]: <27>systemd-udevd[34]: inotify_add_watch(7,
/dev/ram11, 10) failed: No such file or directoryinotify_add_watch(7,
/dev/loop4, 10) failed: No such file or directory
<27>
<27>systemd-udevd[53]: inotify_add_watch(7, /dev/ram5, 10) failed: No
such file or directory
systemd-udevd[52]: inotify_add_watch(7, /dev/ram4, 10) failed: No such
file or directory
<27>systemd-udevd[56]: inotify_add_watch(7, /dev/ram8, 10) failed: No
such file or directory
<27>systemd-udevd[38]: inotify_add_watch(7, /dev/loop2, 10) failed: No
such file or directory
<27>systemd-udevd<27>systemd-udevd[43]: inotify_add_watch(7,
/dev/loop7, 10) failed: No such file or directory
<27>systemd-udevd[46]: inotify_add_watch(7, /dev/ram1, 10) failed: No
such file or directory
[33]: inotify_add_watch(7, /dev/loop0, 10) failed: No such file or directory
<27><27>systemd-udevd[42]: inotify_add_watch(7, /dev/loop6, 10)
failed: No such file or directory
systemd-udevd[51]: inotify_add_watch(7, /dev/ram3, 10) failed: No such
file or directory
<27><27>systemd-udevd[47]: inotify_add_watch(7, /dev/ram12, 10)
failed: No such file or directory
systemd-udevd[54]: inotify_add_watch(7, /dev/ram6, 10) failed: No such
file or directory
<27>systemd-udevd[44]: inotify_add_watch(7, /dev/ram0, 10) failed: No
such file or directory
<27>systemd-udevd[55]: inotify_add_watch(7, /dev/ram7, 10) failed: No
such file or directory
<27>systemd-udevd[36]: inotify_add_watch(7, /dev/ram14, 10) failed: No
such file or directory
<27>systemd-udevd[41]: inotify_add_watch(7, /dev/loop5, 10) failed: No
such file or directory
<27>systemd-udevd[36]: Failed to apply ACL on /dev/kvm: No such file
or directory

Any idea what may be causing this?
This is the config file, except for the network definitions:
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults  0 0
lxc.mount.entry = /usr/src /var/lib/lxc/fedora-1/rootfs/usr/src none bind 0 0
lxc.mount.auto = cgroup:mixed
lxc.tty = 4
lxc.pts = 1024
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rwm
lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
lxc.cgroup.devices.allow = b 7:* rwm# loop*
lxc.cgroup.memory.limit_in_bytes = 2536870910
lxc.utsname = fedora-1
lxc.rootfs = /var/lib/lxc/fedora-1/rootfs
lxc.start.auto = 1
lxc.start.delay = 5
lxc.start.order = 1
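
For reference, a fuller copy that also preserves ACLs, xattrs and numeric
ownership might look like this (a sketch; the destination host is a
placeholder, and -A/-X matter mainly for file capabilities and SELinux
labels stored in xattrs):

rsync -aAX --sparse --numeric-ids /var/lib/lxc/fedora-1/ root@newhost:/var/lib/lxc/fedora-1/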



Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Ubuntu Server errors on Fedora 20 container start

2014-06-07 Thread CDR
Both are Ubuntu servers 14.04 but the container was created in Fedora
20 with LXC 1.0.3, and moved to ubuntu.
Same version of kernel.
I disabled apparmor at the kernel line in Grub. This is an internal
app, so no security is needed.

Philip


On Sat, Jun 7, 2014 at 10:43 AM, Michael H. Warfield m...@wittsend.com wrote:
 On Sat, 2014-06-07 at 08:19 -0400, CDR wrote:
 I moved a Fedora 20 privileged container from one server to another using
 rsync -qarlpt --sparse

 Were they both Ubuntu servers with the same kernel rev and did you copy
 the configuration over as well?  Same version of LXC on both servers?
 What version LXC?

 and now when the container starts I get the messages below:

 Failed to insert module 'autofs4'
 Set hostname to fedora-1.
 Failed to install release agent, ignoring: File exists
 Socket service systemd-journald.service not loaded, refusing.
 [FAILED] Failed to listen on Journal Socket.
 See 'systemctl status systemd-journald.socket' for details.
  Mounting RPC Pipe File System...
  Mounting RPC Pipe File System...
 Failed to open /dev/autofs: No such file or directory
 Failed to initialize automounter: No such file or directory
 [FAILED] Failed to set up automount Arbitrary Executable File Formats
 File System Automount Point.
 See 'systemctl status proc-sys-fs-binfmt_misc.automount' for details.
 Unit proc-sys-fs-binfmt_misc.automount entered failed state.

 systemd-journal-flush.service: main process exited, code=exited,
 status=1/FAILURE
 [FAILED] Failed to start Trigger Flushing of Journal to Persistent Storage.


 <38>systemd-logind[80]: New seat seat0.
 <36>systemd-logind[80]: Failed to open event0: No such file or directory
 <27>systemd-udevd[39]: inotify_add_watch(7, /dev/loop3, 10) failed: No
 such file or directory
 <27>systemd-udevd[37]: <27>inotify_add_watch(7, /dev/loop1, 10)
 failed: No such file or directory
 systemd-udevd<27>systemd-udevd[35]: inotify_add_watch(7, /dev/ram10,
 10) failed: No such file or directory
 [49]: inotify_add_watch(7, /dev/ram15, 10) failed: No such file or directory
 <27><27>systemd-udevd[57]: inotify_add_watch(7, /dev/ram9, 10) failed:
 No such file or directory
 systemd-udevd[48]: inotify_add_watch(7, /dev/ram13, 10) failed: No
 such file or directory
 <27>systemd-udevd[50]: inotify_add_watch(7, /dev/ram2, 10) failed: No
 such file or directory
 <27>systemd-udevd[40]: <27>systemd-udevd[34]: inotify_add_watch(7,
 /dev/ram11, 10) failed: No such file or directoryinotify_add_watch(7,
 /dev/loop4, 10) failed: No such file or directory
 <27>
 <27>systemd-udevd[53]: inotify_add_watch(7, /dev/ram5, 10) failed: No
 such file or directory
 systemd-udevd[52]: inotify_add_watch(7, /dev/ram4, 10) failed: No such
 file or directory
 <27>systemd-udevd[56]: inotify_add_watch(7, /dev/ram8, 10) failed: No
 such file or directory
 <27>systemd-udevd[38]: inotify_add_watch(7, /dev/loop2, 10) failed: No
 such file or directory
 <27>systemd-udevd<27>systemd-udevd[43]: inotify_add_watch(7,
 /dev/loop7, 10) failed: No such file or directory
 <27>systemd-udevd[46]: inotify_add_watch(7, /dev/ram1, 10) failed: No
 such file or directory
 [33]: inotify_add_watch(7, /dev/loop0, 10) failed: No such file or directory
 <27><27>systemd-udevd[42]: inotify_add_watch(7, /dev/loop6, 10)
 failed: No such file or directory
 systemd-udevd[51]: inotify_add_watch(7, /dev/ram3, 10) failed: No such
 file or directory
 <27><27>systemd-udevd[47]: inotify_add_watch(7, /dev/ram12, 10)
 failed: No such file or directory
 systemd-udevd[54]: inotify_add_watch(7, /dev/ram6, 10) failed: No such
 file or directory
 <27>systemd-udevd[44]: inotify_add_watch(7, /dev/ram0, 10) failed: No
 such file or directory
 <27>systemd-udevd[55]: inotify_add_watch(7, /dev/ram7, 10) failed: No
 such file or directory
 <27>systemd-udevd[36]: inotify_add_watch(7, /dev/ram14, 10) failed: No
 such file or directory
 <27>systemd-udevd[41]: inotify_add_watch(7, /dev/loop5, 10) failed: No
 such file or directory
 <27>systemd-udevd[36]: Failed to apply ACL on /dev/kvm: No such file
 or directory

 Any idea what may be causing this?
 This is the config file, except the network definitions
 lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
 lxc.mount.entry = sysfs sys sysfs defaults  0 0
 lxc.mount.entry = /usr/src /var/lib/lxc/fedora-1/rootfs/usr/src none bind 0 0
 lxc.mount.auto = cgroup:mixed
 lxc.tty = 4
 lxc.pts = 1024
 lxc.cgroup.devices.deny = a
 lxc.cgroup.devices.allow = c 1:3 rwm
 lxc.cgroup.devices.allow = c 1:5 rwm
 lxc.cgroup.devices.allow = c 5:1 rwm
 lxc.cgroup.devices.allow = c 5:0 rwm
 lxc.cgroup.devices.allow = c 4:0 rwm
 lxc.cgroup.devices.allow = c 4:1 rwm
 lxc.cgroup.devices.allow = c 1:9 rwm
 lxc.cgroup.devices.allow = c 1:8 rwm
 lxc.cgroup.devices.allow = c 136:* rwm
 lxc.cgroup.devices.allow = c 5:2 rwm
 lxc.cgroup.devices.allow = c 254:0 rwm
 lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
 lxc.cgroup.devices.allow = b 7:* rwm# loop*
 lxc.cgroup.memory.limit_in_bytes = 2536870910
 lxc.utsname = fedora-1
 lxc.rootfs

Re: [lxc-users] Fedory 20 LXC fails to start on Ubuntu 14.04 host?

2014-05-24 Thread CDR
I have been using a Fedora container in production for a few days now,
created with LXC 1.0.3. No problems whatsoever. My environment is
Ubuntu Server 14.04.
 dpkg --list | grep -i lxc
ii  liblxc11.0.3-0ubuntu3
 amd64Linux Containers userspace tools (library)
ii  lxc1.0.3-0ubuntu3
 amd64Linux Containers userspace tools
ii  lxc-templates  1.0.3-0ubuntu3
 amd64Linux Containers userspace tools (templates)
ii  python3-lxc1.0.3-0ubuntu3
 amd64Linux Containers userspace tools (Python 3.x
bindings)



On Sat, May 24, 2014 at 9:49 PM, Robert Pendell
shi...@elite-systems.org wrote:
 On Sat, May 24, 2014 at 9:29 PM, Michael H. Warfield wrote:
 On Sat, 2014-05-24 at 22:00 +0200, Timotheus Pokorra wrote:
 Hello Mike,

  1) Are you running this container unprivileged?
 I checked what it means to run a container unprivileged. I think I run
 it privileged, I am logged in as root on the host machine, and I am
 just trying to start with lxc-start -n myFedora.

  2) Have you tried creating the container using the -t fedora template?
 I tried lxc-create -t fedora -n myFedoraTest
 Unfortunately, the result is the same.

  Anyone any ideas?
 
  The error the OP was showing was a SEGV (11) in systemd.  He did not
  specify how he created the container, or how he was running it (priv /
  non-priv).  A SEGV in systemd would be pretty serious.  It would seem to
  be an executable conflict at a pretty deep layer.  I guess it would also
  be good to know what the host kernel version is as well.
 I indeed get the exact same output as the OP.

 On the host:
 uname -a
 Linux j80074.servers.jiffybox.net 3.2.0-60-virtual #91-Ubuntu SMP Wed
 Feb 19 04:13:28 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

 The LXC host (Ubuntu) is a virtual machine running in a XEN environment.
 I would understand if that is not possible, but it is possible since
 Debian 7 and CentOS 6 containers run fine on this host.

 XEN???

 Oh crap...  It's information like this that is critical to understand
 what's going on.

 You're in an environment with a Fedora 20 container running on an Ubuntu
 virtualized host in a Xen guest running under a Xen paravirtualization
 hypervisor.  Without knowing this, it would be impossible to even guess
 where the problem may lay (even with this information, it may be
 impossible).  I haven't even begun to attempt to reproduce it but the
 number of independent variables just shot through the roof.

 First order of troubleshooting.  Eliminate independent variables...

 Have you attempted running a Fedora container on an Ubuntu host running
 on raw iron?  If not, you need to do so and report those results.

 I haven't screwed with Xen in years but all HW and para virtualization
 requires some instruction emulation back in the hypervisor.  This could
 easily be some incompatibility between the Xen hypervisor in supervisory
 state and emulating some instruction that systemd is requiring.  I can't
 even begin to reproduce your environment at this point with Xen in the
 loop.  You really need to simplify this into a basic install with basic
 containers and try running it that way.  This could be a problem in the
 Xen hypervisor, it could be a problem in the Xen guest virt drivers, it
 could be in systemd that never expected to run in a container in a guest
 under Xen.  I can't tell.

 In the upcoming week, I'll look into firing up an Ubuntu server, since I
 now have a free Dell tower now that I've virtualized my NST development
 engines into LXC containers.  I don't even want to THINK about doing
 Xen.

 You've got to simplify that environment in order to isolate the origin
 of the problem.


 I took a try at this earlier and it worked fine.  I did a full install
 and boot for Fedora 20 amd64 using lxc-create -t download -n test as
 root.  Here is my environment.

 Host: Linode
 Kernel: 3.14.3 (host supplied)
 Technology: Xen Paravirtualized

 Xen Hypervisor Mode (HVM) shouldn't be much different than KVM however
 I have not used each of them enough to know for sure.
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Fedory 20 LXC fails to start on Ubuntu 14.04 host?

2014-05-24 Thread CDR
Hard iron.
But containers should be transparent for him, since LXC does not use
any real virtualization.
As long as his kernel is the right version.

On Sat, May 24, 2014 at 10:50 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Sat, 2014-05-24 at 22:12 -0400, CDR wrote:
 I am using a Fedora container in production since a few days ago,
 created with LXC 1.0.3. No problems whatsoever. My environment is
 Ubuntu Server 14.04.
  dpkg --list | grep -i lxc
 ii  liblxc11.0.3-0ubuntu3
  amd64Linux Containers userspace tools (library)
 ii  lxc1.0.3-0ubuntu3
  amd64Linux Containers userspace tools
 ii  lxc-templates  1.0.3-0ubuntu3
  amd64Linux Containers userspace tools (templates)
 ii  python3-lxc1.0.3-0ubuntu3
  amd64Linux Containers userspace tools (Python 3.x
 bindings)

 Are you running under XEN or hard iron?  He's running under XEN.  This
 could be another point on the curve.

 Thanks

 Regards,
 Mike

 On Sat, May 24, 2014 at 9:49 PM, Robert Pendell
 shi...@elite-systems.org wrote:
  On Sat, May 24, 2014 at 9:29 PM, Michael H. Warfield wrote:
  On Sat, 2014-05-24 at 22:00 +0200, Timotheus Pokorra wrote:
  Hello Mike,
 
   1) Are you running this container unprivileged?
  I checked what it means to run a container unprivileged. I think I run
  it privileged, I am logged in as root on the host machine, and I am
  just trying to start with lxc-start -n myFedora.
 
   2) Have you tried creating the container using the -t fedora template?
  I tried lxc-create -t fedora -n myFedoraTest
  Unfortunately, the result is the same.
 
   Anyone any ideas?
  
   The error the OP was showing was a SEGV (11) in systemd.  He did not
   specify how he created the container, or how he was running it (priv /
   non-priv).  A SEGV in systemd would be pretty serious.  It would seem 
   to
   be an executable conflict at a pretty deep layer.  I guess it would 
   also
   be good to know what the host kernel version is as well.
  I indeed get the exact same output as the OP.
 
  On the host:
  uname -a
  Linux j80074.servers.jiffybox.net 3.2.0-60-virtual #91-Ubuntu SMP Wed
  Feb 19 04:13:28 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
 
  The LXC host (Ubuntu) is a virtual machine running in a XEN environment.
  I would understand if that is not possible, but it is possible since
  Debian 7 and CentOS 6 containers run fine on this host.
 
  XEN???
 
  Oh crap...  It's information like this that is critical to understand
  what's going on.
 
  You're in an environment with a Fedora 20 container running on an Ubuntu
  virtualized host in a Xen guest running under a Xen paravirtualization
  hypervisor.  Without knowing this, it would be impossible to even guess
  where the problem may lay (even with this information, it may be
  impossible).  I haven't even begun to attempt to reproduce it but the
  number of independent variables just shot through the roof.
 
  First order of troubleshooting.  Eliminate independent variables...
 
  Have you attempted running a Fedora container on an Ubuntu host running
  on raw iron?  If not, you need to do so and report those results.
 
  I haven't screwed with Xen in years but all HW and para virtualization
  requires some instruction emulation back in the hypervisor.  This could
  easily be some incompatibility between the Xen hypervisor in supervisory
  state and emulating some instruction that systemd is requiring.  I can't
  even begin to reproduce your environment at this point with Xen in the
  loop.  You really need to simplify this into a basic install with basic
  containers and try running it that way.  This could be a problem in the
  Xen hypervisor, it could be a problem in the Xen guest virt drivers, it
  could be in systemd that never expected to run in a container in a guest
  under Xen.  I can't tell.
 
  In the upcoming week, I'll look into firing up an Ubuntu server, since I
  now have a free Dell tower now that I've virtualized my NST development
  engines into LXC containers.  I don't even want to THINK about doing
  Xen.
 
  You've got to simplify that environment in order to isolate the origin
  of the problem.
 
 
  I took a try at this earlier and it worked fine.  I did a full install
  and boot for Fedora 20 amd64 using lxc-create -t download -n test as
  root.  Here is my environment.
 
  Host: Linode
  Kernel: 3.14.3 (host supplied)
  Technology: Xen Paravirtualized
 
  Xen Hypervisor Mode (HVM) shouldn't be much different than KVM however
  I have not used each of them enough to know for sure.
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http

Re: [lxc-users] We need a lxc-top utility

2014-05-22 Thread CDR
sudo apt-get install lua-lxc
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package lua-lxc

Is there a way to add lxc-top to the regular LXC repository?
Yours
Federico


On Wed, May 21, 2014 at 11:16 AM, Fajar A. Nugraha l...@fajar.net wrote:
 On Wed, May 21, 2014 at 9:55 PM, Stéphane Graber stgra...@ubuntu.com wrote:
 On Wed, May 21, 2014 at 01:56:54PM +, Serge Hallyn wrote:
 Quoting CDR (vene...@gmail.com):
  Wrong, that RPM was in Fedora, in Ubuntu I connected to a repository.
  But lxc-top is not there.
  How do I get that utility?

 sudo apt-get install lua-lxc

 Yeah that really should be more discoverable...

 Note that this only works with the PPA builds, not with what's in the
 distro as lxc-top or the binding (can't remember) depends on a lua
 module which isn't packaged.

 Did you mean alt_getopt.lua?

 For the PPA I just bundle it in the
 packaging but this hack isn't suitable for the distro itself, so as long
 as it's not packaged we won't be able to ship lxc-top by default in
 Ubuntu.

 texlive-base includes it on its own private path, and it's on main section

 http://packages.ubuntu.com/search?mode=exactfilenamesuite=trustysection=allarch=anykeywords=alt_getopt.luasearchon=contents
 /usr/share/texlive/texmf-dist/scripts/lua-alt-getopt/alt_getopt.lua

 Is there a particular reason it's allowed to do so while lxc isn't?

 --
 Fajar
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] We need a lxc-top utility

2014-05-21 Thread CDR
I did my LXC build with RPM (make rpm), and it did not build it.
What are the steps?
I am using Ubuntu Server
Philip


On Wed, May 21, 2014 at 7:50 AM, Steven Jan Springl
ste...@springl.co.uk wrote:
 On Wednesday 21 May 2014 01:07:39 CDR wrote:
 Dear Friends
 I have 20+ containers with the same programs running. All of them are
 cpu-intensive. But one of them is eating way more CPU than the
 average. With top I have no idea which container owns that
 program. Perhaps we need a new lxc-top that would identify the
 process and the container, and maybe allow to sort by container-cpu or
 memory, or show cpu-container, memory-container, etc.

 Philip
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

 Philip

 I use htop to display the cgroup (container name) against each program.

 It isn't displayed by default; you need to go into the F2 setup screen to add it.
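 
 Until lxc-top is packaged, another way to map a hot PID to its container
 is to read its cgroup path directly (a sketch; 12345 stands for the PID
 that top shows, and the /lxc/ prefix assumes the usual cgroup v1 layout):
 
 cat /proc/12345/cgroup
 # prints lines like 4:memory:/lxc/container-07 - the container name
 # is the path component after /lxc/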

 Steven.
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Configuration not being refreshed on reboot

2014-05-21 Thread CDR
Dear Friends
I came upon a bug that needs to be addressed
Suppose you have a container with a network like this

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br20
lxc.network.hwaddr = 92:ea:2b:24:e0:27
lxc.network.ipv4 = 0.0.0.0/24

The container is UP, then you decide to change the configuration to
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br21 ### changed
lxc.network.hwaddr = 92:ea:2b:24:e0:27
lxc.network.ipv4 = 0.0.0.0/24

and then you issue a reboot from inside the container. The new
configuration is not read.
The only way to pick it up is to shut the container down and start it
again (see the commands below); that should not be the case.
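
The workaround for now (a sketch; mycontainer is a placeholder):

lxc-stop -n mycontainer
lxc-start -n mycontainer -d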
I am using Ubuntu server

dpkg --list | grep lxc

ii  liblxc11.0.3-0ubuntu3
 amd64Linux Containers userspace tools (library)
ii  lxc1.0.3-0ubuntu3
 amd64Linux Containers userspace tools
ii  lxc-templates  1.0.3-0ubuntu3
 amd64Linux Containers userspace tools (templates)
ii  python3-lxc1.0.3-0ubuntu3
 amd64Linux Containers userspace tools (Python 3.x
bindings)
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] We need a lxc-top utility

2014-05-20 Thread CDR
Dear Friends
I have 20+ containers with the same programs running. All of them are
cpu-intensive. But one of them is eating way more CPU than the
average. With top I have no idea which container owns that
program. Perhaps we need a new lxc-top that would identify the
process and the container, and maybe allow to sort by container-cpu or
memory, or show cpu-container, memory-container, etc.

Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Fedora container thinks it is not running

2014-05-15 Thread CDR
The container is started, because I am inside it via ssh
but I cannot use its console
lxc-console -n msterfe
msterfe is not running

I am uploading the configuration as an attachment
The container was created from the template, LXC 1.0.3
 dpkg --list | grep lxc
ii  liblxc11.0.3-0ubuntu3
 amd64Linux Containers userspace tools (library)
ii  lxc1.0.3-0ubuntu3
 amd64Linux Containers userspace tools
ii  lxc-templates  1.0.3-0ubuntu3
 amd64Linux Containers userspace tools (templates)
ii  python3-lxc1.0.3-0ubuntu3
 amd64Linux Containers userspace tools (Python 3.x
bindings)
lxc.start.auto = 1
lxc.start.delay = 5
lxc.start.order = 1

lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults  0 0

#demo it works
#lxc.mount.entry = /what  /var/lib/lxc/masterfe/rootfs/where none bind 0 0
#very important: this holds the Asterisk sounds
lxc.mount.entry = /usr/src /var/lib/lxc/masterfe/rootfs/usr/src none bind 0 0


lxc.tty = 4
lxc.pts = 1024
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 254:0 rwm
lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
lxc.cgroup.devices.allow = b 7:* rwm# loop*
lxc.cgroup.memory.limit_in_bytes = 2536870910
lxc.mount.auto = cgroup

lxc.utsname = masterfe


#lxc.network.type = veth
#lxc.network.flags = up
#lxc.network.link = br20
#lxc.network.hwaddr = 92:ea:2b:24:e0:27
#lxc.network.ipv4 = 0.0.0.0/24

#lxc.network.type=macvlan
#lxc.network.macvlan.mode=bridge
#lxc.network.link=virbr0
#lxc.network.flags=up
#lxc.network.hwaddr =62:AA:10:C1:C3:CB

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth0
lxc.network.flags=up
lxc.network.hwaddr = 00:D3:47:7B:0D:D3
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth1
lxc.network.flags=up
lxc.network.hwaddr = 00:A8:D1:3F:2A:14
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth2
lxc.network.flags=up
lxc.network.hwaddr = 00:B3:AB:6F:85:8C
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth3
lxc.network.flags=up
lxc.network.hwaddr = 00:38:5C:71:B4:95
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth4
lxc.network.flags=up
lxc.network.hwaddr = 00:47:46:36:97:BB
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth5
lxc.network.flags=up
lxc.network.hwaddr = 00:3C:37:EF:32:43
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth6
lxc.network.flags=up
lxc.network.hwaddr = 00:2E:CE:67:A2:D7
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth7
lxc.network.flags=up
lxc.network.hwaddr = 00:86:29:AF:11:65
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth8
lxc.network.flags=up
lxc.network.hwaddr = 00:32:CC:25:22:E1
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth9
lxc.network.flags=up
lxc.network.hwaddr = 00:9C:BC:55:FE:51
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth10
lxc.network.flags=up
lxc.network.hwaddr = 00:D4:67:B2:7C:CC
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth11
lxc.network.flags=up
lxc.network.hwaddr = 00:F9:44:1E:B6:D1
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth12
lxc.network.flags=up
lxc.network.hwaddr = 00:01:FE:B5:D9:34
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth13
lxc.network.flags=up
lxc.network.hwaddr = 00:50:2B:17:FA:5E
lxc.network.ipv4 = 0.0.0.0/25

lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth4
lxc.network.name = eth14
lxc.network.flags=up
lxc.network.hwaddr = 00:80:A4:2C:78:63

Re: [lxc-users] Fedora 20 template on LVM not working

2014-05-15 Thread CDR
The template worked fine for me, but there is a material difference: I
use a simple directory, whereas you are using a different approach to
storage.
Philip

On Thu, May 15, 2014 at 12:01 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Thu, 2014-05-08 at 08:54 +0200, Flo wrote:
 Hi,


 is the Fedora template currently broken? I tried to create a LVM based
 Fedora 20 container but it looks like the template gets confused with
 lvm...


 lxc-create -n ipa01-test -B lvm --vgname=lxc1 --fssize=30G --thinpool
 thin_pool -t fedora -- --release 20

 Have you had a chance to try Serge's suggested patch to instrument the
 template and tell us what it's doing?

 Regards,
 Mike


 Complete!
 Fixing up rpm databases
 Download complete.
 Copy /var/cache/lxc/fedora/x86_64/20/rootfs
 to /dev/lxc1/ipa01-test ...
 Copying rootfs to /dev/lxc1/ipa01-test ...mkdir: cannot create
 directory ‘/dev/lxc1/ipa01-test’: File exists
 rsync: ERROR: cannot stat destination /dev/lxc1/ipa01-test/: Not a
 directory (20)
 rsync error: errors selecting input/output files, dirs (code 3) at
 main.c(652) [Receiver=3.1.0]


 mkdir: cannot create directory ‘/dev/lxc1/ipa01-test’: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 132: /dev/lxc1/ipa01-test/selinux/enforce: Not a directory
 sed: can't read /dev/lxc1/ipa01-test/etc/pam.d/login: Not a directory
 sed: can't read /dev/lxc1/ipa01-test/etc/pam.d/sshd: Not a directory
 cp: failed to access ‘/dev/lxc1/ipa01-test/etc/localtime’: Not a
 directory
 /usr/share/lxc/templates/lxc-fedora: line
 218: /dev/lxc1/ipa01-test/etc/sysconfig/network-scripts/ifcfg-eth0:
 Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 229: /dev/lxc1/ipa01-test/etc/sysconfig/network: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 236: /dev/lxc1/ipa01-test/etc/hostname: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 240: /dev/lxc1/ipa01-test/etc/hosts: Not a directory
 mkdir: cannot create directory ‘/dev/lxc1/ipa01-test’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-jobs2-prodm/dev/null’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-jobs2-prodm/dev/zero’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-jobs2-prodm/dev/random’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-jobs2-prodm/dev/urandom’: Not a directory
 mkdir: cannot create directory ‘/dev/lxc1/ipa01-test/dev/pts’: Not a
 directory
 mkdir: cannot create directory ‘/dev/lxc1/ipa01-test/dev/shm’: Not a
 directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/tty’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/tty0’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/tty1’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/tty2’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/tty3’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/tty4’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/console’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/full’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/initctl’: Not a directory
 mknod: ‘/dev/lxc1/ipa01-test/dev/ptmx’: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 275: /dev/lxc1/ipa01-test/etc/securetty: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 276: /dev/lxc1/ipa01-test/etc/securetty: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 277: /dev/lxc1/ipa01-test/etc/securetty: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 278: /dev/lxc1/ipa01-test/etc/securetty: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 279: /dev/lxc1/ipa01-test/etc/securetty: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 280: /dev/lxc1/ipa01-test/etc/securetty: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 281: /dev/lxc1/ipa01-test/etc/securetty: Not a directory
 /usr/share/lxc/templates/lxc-fedora: line
 282: /dev/lxc1/ipa01-test/etc/securetty: Not a directory
 Storing root password in '/var/lib/lxc/ipa01-test/tmp_root_pass'
 chroot: cannot change root directory to /dev/lxc1/ipa01-test: Not a
 directory
 chroot: cannot change root directory to /dev/lxc1/ipa01-test: Not a
 directory
 installing fedora-release package
 mount: mount point /dev/lxc1/ipa01-test/dev is not a directory
 mount: mount point /dev/lxc1/ipa01-test/proc is not a directory
 cp: failed to access ‘/dev/lxc1/ipa01-test/etc/’: Not a directory
 chroot: cannot change root directory to /dev/lxc1/ipa01-test: Not a
 directory
 chroot: cannot change root directory to /dev/lxc1/ipa01-test: Not a
 directory
 chroot: cannot change root directory to /dev/lxc1/ipa01-test: Not a
 directory
 umount: /dev/lxc1/ipa01-test/proc: Not a directory
 umount: /dev/lxc1/ipa01-test/dev: Not a directory
 touch: cannot touch ‘/dev/lxc1/ipa01-test/etc/fstab’: Not a directory
 sed: can't read /dev/lxc1/ipa01-test/etc/sysconfig/init: Not a
 directory


 Container rootfs and config have been created.
 Edit the config file to check/enable networking setup.


 You have successfully built a Fedora container and cache.  This cache
 may
 be used to create future containers of various 

Re: [lxc-users] Fedora container thinks it is not running

2014-05-15 Thread CDR
I don't have the command
lxc-status

Should I have it?

On Thu, May 15, 2014 at 11:31 AM, Serge Hallyn serge.hal...@ubuntu.com wrote:
 Quoting Michael H. Warfield (m...@wittsend.com):
 On Thu, 2014-05-15 at 22:04 +0700, Fajar A. Nugraha wrote:
  On Thu, May 15, 2014 at 9:06 PM, Michael H. Warfield
  m...@wittsend.com wrote:
  On Thu, 2014-05-15 at 04:40 -0400, CDR wrote:
 
   The container is started, because I am inside it via ssh
   but I cannot use its console
   lxc-console -n msterfe
   msterfe is not running
  
   I am uploading the configuration as an attachment
   The container was created from the template, LXC 1.0.3
 
 
  Ah, but that config obviously was not.  (And totally aside,
  why do you
  need 20 macvlan eth interfaces in a container???)  What
  happened to the
  config that the template created?  Was it thrown away and a
  new one
  created from whole cloth?
 
  Did you first try the container with the initial configuration
  file
  generated by the template?  That would be a good place to
  start and you
  might want to check /usr/share/lxc/config/fedora.common.conf.
   The
  initial configuration file generated by the template will
  include that
  common set of parameters but you can override those defaults.
 
 
 
   The default config file created by the template on Ubuntu
  should work, as long as you remember to uncomment this line:
 
  
  # When using LXC with apparmor, uncomment the next line to run
  unconfined:
 
  #lxc.aa_profile = unconfined
  
 
 
 
  With that commented out, you'd get
  
 
  <30>systemd[1]: Starting Root Slice.
 
  <27>systemd[1]: Caught SEGV, dumped core as pid 12.
  <30>systemd[1]: Freezing execution.
  
 
 
  With the unconfined apparmor profile, it works as expected
 
 
  
  # lxc-ls -f f20
  NAME  STATEIPV4IPV6  AUTOSTART
  --
  f20   RUNNING  10.0.3.205  - NO
  

 Nice catch!  I wonder if there is some way I can automate that in the

 What exactly is systemd doing at that spot?  (I suppose I shoudl go look
 at git, but figure maybe you know offhand)  Perhaps it's something we can
 add unconditionally to the apparmor profile.

 template.  I would hate to say if on Ubuntu but maybe with apparmor.
 Maybe that should be the default in that config and just ignored where
 apparmor isn't used.

  lxc-stop doesn't work without -k. I remember reading about this on
  the list some time ago, might be useful to integrate the workaround in
  the template.
  
  [root@f20 ~]# Received SIGPWR.
  

 I already integrated some thing in there.  Should no longer be a problem
 though that update may not have made it into a release yet.

  Using 20 veth interfaces in the container works, by adding blocks like
  this in the config file (and adding appropriate configuration inside
   the container). Each veth needs its own unique hwaddr
  ###
  lxc.network.type = veth
  lxc.network.flags = up
  lxc.network.link = lxcbr0
 
  lxc.network.hwaddr = fe:8b:ee:bc:52:c0
  ###
 
 
 
 
 
  ###
  # lxc-ls -f f20
  NAME  STATEIPV4
 
 
   IPV6  AUTOSTART
  --
  f20   RUNNING  10.0.3.205, 10.0.3.207, 10.0.3.208, 10.0.3.209,
  10.0.3.210, 10.0.3.217, 10.0.3.218, 10.0.3.219, 10.0.3.220,
  10.0.3.221, 10.0.3.222, 10.0.3.223, 10.0.3.224, 10.0.3.225,
  10.0.3.226, 10.0.3.233, 10.0.3.234, 10.0.3.235, 10.0.3.236,
  10.0.3.237  - NO
  ###
 
 
 
  --
 
  Fajar

 Regards,
 Mike
 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!




 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] cloning could be improved

2014-05-15 Thread CDR
The cloning app should intelligently change the mount points to match
the new container's directory.
For example, this is the original mount
lxc.mount.entry = /usr/src /var/lib/lxc/container-35/rootfs/usr/src
none bind 0 0
If I clone container-35 to container-36, the clone should have a
mount like this:
lxc.mount.entry = /usr/src /var/lib/lxc/container-36/rootfs/usr/src
none bind 0 0

Instead, I now have to fix the mounts manually (a one-liner for this is
sketched below). That step is unnecessary, because the mounts will only
work after the adjustment anyway.
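
A workaround sketch until lxc-clone handles this itself (container names
are the ones from the example above):

sed -i 's|/var/lib/lxc/container-35/|/var/lib/lxc/container-36/|g' /var/lib/lxc/container-36/config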


Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Hotplug new network interfaces not working

2014-05-13 Thread CDR
Dear Friends
I have a Fedora 20 LXC (libvirt) container in production and I cannot reboot it.
So I used virsh edit mycontainer and added several

<interface type='direct'>
  <mac address='00:5A:0C:18:C9:E9'/>
  <source dev='eth1' mode='bridge'/>
</interface>

The problem is that after it gets saved, the new interfaces never show
up in ip link, and I have no idea how to make Fedora under LXC check
for the new hardware.
Is this a limitation of LXC in general?
I bet there is a workaround.

Yours
Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Hotplug new network interfaces not working

2014-05-13 Thread CDR
Let me digest all this.
You must be right, because Fedora 20 containers are the only ones that
use systemd, in my box

many thanks

Federico

On Tue, May 13, 2014 at 1:24 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Tue, 2014-05-13 at 13:00 -0400, CDR wrote:
 I am forced to use libvirt-lxc only because nobody can help me solve
 this issue. If somebody can figure this out, then I will only use lXC.
 In a Fedora20 container, if I start it with the -d flag, then it
 never let's me enter the container via the console, it times out on me
 when I use
 lxc-console -n mycontainer.

 I guess the problem is, here, that this doesn't make sense (to me).  I
 work and develop under Fedora (17,18,19,20,rawhide).  I have containers
 of virtually every guest distro, even Gentoo and SUSE and a few others
 (NST, Kali) which are not supported.  I've got Fedora20 containers of
 both x86_64 and i686 archs.  I'm not seeing this problem.  I've started
 them without the -d as well as with the -d and currently start them
 using lxc-autostart and I'm just not seeing this problem.

 The problem is in trying to decide why your setup is (or your containers
 are) different or what you are doing different so we might address it
 (either in code or documentation).

 These are Fedora20 containers you are trying to start?  Under what rev
 where they created (not what you're running now)?  I know I had to add
 some code to the Fedora template to create the console devices...

 Look here:

 /var/lib/lxc/{container}/rootfs/etc/systemd/system/getty.target.wants

 Now look for a series of symlinks like this:

 [root@hydra getty.target.wants]# pwd
 /var/lib/lxc/Fedora20/rootfs/etc/systemd/system/getty.target.wants
 [root@hydra getty.target.wants]# ls -l
 total 0
  lrwxrwxrwx. 1 root root 17 Mar  7 19:02 getty@tty1.service -> ../getty@.service
  lrwxrwxrwx. 1 root root 17 Mar  7 19:02 getty@tty2.service -> ../getty@.service
  lrwxrwxrwx. 1 root root 17 Mar  7 19:02 getty@tty3.service -> ../getty@.service
  lrwxrwxrwx. 1 root root 17 Mar  7 19:02 getty@tty4.service -> ../getty@.service

 If you don't have them, that's your problem.  The lxc-fedora template
 now creates these automatically.  This is systemd stuff.  The
 pre-systemd Fedora template did not do this.  If you want more than
 four, you're going to have to create them yourself and add the changes
 to the config file for the vty's.

 If you don't have them, you probably DON'T have the properly munged
 getty@.service file in the parent directory, so you can't just copy the
 systemd default or blindly create them

 This change has to be made!

 [root@hydra rootfs]# diff lib/systemd/system/getty@.service 
 etc/systemd/system/getty@.service
 24c24
  < ConditionPathExists=/dev/tty0
  ---
  > # ConditionPathExists=/dev/tty0

 If you don't have that...  That's your problem.
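 
  In shell terms, a minimal sketch of the fix (run against the container
  rootfs; <name> is a placeholder, and the sed line reproduces the diff
  above):
 
  cd /var/lib/lxc/<name>/rootfs
  cp lib/systemd/system/getty@.service etc/systemd/system/getty@.service
  sed -i 's|^ConditionPathExists=/dev/tty0|# &|' etc/systemd/system/getty@.service
  for n in 1 2 3 4; do
    ln -sf ../getty@.service etc/systemd/system/getty.target.wants/getty@tty$n.service
  done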

 This happens only in Fedora 20 containers. But that does not happen
 under libvirt. I have 3  different OS' containers, and the other two
 work fine under pure LXC.

 It's not just under Fedora 20 containers but will impact any containers
 running systemd in a container, which has proven to be a black art for
 me.

 Regards,
 Mike

 I think that the bug is in LXC. If I ask Libvirt, they will respond
 it does work, doesn't it?
 Any idea?
 Philip

 On Tue, May 13, 2014 at 12:35 PM, Stéphane Graber stgra...@ubuntu.com 
 wrote:
  On Tue, May 13, 2014 at 12:29:07PM -0400, Michael H. Warfield wrote:
  On Tue, 2014-05-13 at 12:09 -0400, CDR wrote:
   Dear Friends
   I have a Fedora 20 LXC (libirt) container in production and I cannot 
   reboot it.
   So I used virsh edit mycontainer and added several
  
    <interface type='direct'>
      <mac address='00:5A:0C:18:C9:E9'/>
      <source dev='eth1' mode='bridge'/>
    </interface>
 
  Ok...  But that's libvirt LXC, not LXC-Tools LXC.
 
   The problem is that after it gets saved, the new interfaces never show
   up in ip link, and I have no idea how to make Fedora under LXC to
   check for the new hardware.
 
   Is this a limitation of LXC in general?
 
  The term LXC in this case is ambiguous.  Are you talking about libvirt
  lxc or this project.  They are not the same.
 
   I bet there is a workaround.
 
  Only if you're skilled at creating hotplug and udev rules.  It has to be
  done under the host and the host has to transfer that device into the
  container.  It's not something that's really controllable from the
  container per se.  I can run devices and such in and out of LXC
  containers (NOT libvirt) with some scripting and some rules in the host,
  but I doubt that would help you (I take advantage of some of the
  devtmpfs stuff I wrote for this project).  Libvirt may have a way to
  work around that but you'll have to consult with them.
 
  I think Stéphane also had some utility for moving devices (interfaces,
  I'm not so sure) but, again, that's for this project, not libvirt.
 
  Correct, lxc-device will let you do that with our LXC.
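 
   For example (a sketch; syntax per lxc-device(1), mycontainer and eth1
   are placeholders - this moves the host's eth1 into the container
   under the same name):
 
   lxc-device -n mycontainer add eth1 eth1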
 
 
   Yours
   Philip

Re: [lxc-users] Hotplug new network interfaces not working

2014-05-13 Thread CDR
I decided to generate a new container and reinstall all my apps.
A lot of work, but you successfully demolished all my work so far, for
which I am thankful.

Philip


On Tue, May 13, 2014 at 7:57 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Tue, 2014-05-13 at 18:13 -0400, CDR wrote:
 Dear Mike
 You are right, I only see one line.

 ls rootfs/etc/systemd/system/getty.target.wants -l
 total 0
  lrwxrwxrwx 1 root root 38 Apr 30 11:16 getty@tty1.service -> /usr/lib/systemd/system/getty@.service

 Ok...

 You wouldn't have the script to fix this around?

 I think Fajar pointed you to it.  It's in the template.

 I created my Fedora 20 containers from within a Fedora box, like this:

 #!/bin/sh
 source /etc/profile
 container=nat-1
 mkdir -p /var/lib/lxc/${container}/rootfs
 yum -y --releasever=20 --nogpg --installroot=/var/lib/lxc/$container/rootfs \
   install systemd autofs passwd yum fedora-release vim-minimal
 openssh-server openssh-clients gcc autogen automake\
 subversion procps-ng initscripts net-tools ethtool nano dhcp dhclient
 lsof bind-utils psmisc bash-completion policycoreutils\
 libvirt libcap-devel lxc deltarpm bridge-utils strace git rpm-build
 docbook2X graphviz man netstat-nat
 cp /usr/src//ifup-local /var/lib/lxc/$container/rootfs/sbin/
 cp /etc/sysconfig/network /var/lib/lxc/$container/rootfs/etc/sysconfig
 cp /usr/src/ifcfg-eth0
 /var/lib/lxc/$container/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0
 cp /usr/src/ifcfg-eth1
 /var/lib/lxc/$container/rootfs/etc/sysconfig/network-scripts/ifcfg-eth1
 echo pts/0 >> /var/lib/lxc/$container/rootfs/etc/securetty
 cp /usr/src/config /var/lib/lxc/$container/
 cp /usr/src/lxc*.rpm /var/lib/lxc/$container/rootfs/usr/src
 touch  /var/lib/lxc/$container/rootfs/etc/resolv.conf
 chroot /var/lib/lxc/$container/rootfs /bin/passwd root

 Oh, that explains a LOT and tells me you will have much bigger problems.
 These templates are designed to not only copy the distros into root file
 systems but to fine tune some of the peculiarities of running in a
 container, which can be (definitely IS) dependent on the startup init
 process.  You should read through some of these template scripts and
 understand, very thoroughly, how and why (I DO try and comment my
 templates about WHY I'm doing something that's non-intuitive) they are
 doing what they're doing before you attempt to create a script like
 this.

 Yours

 Federico

 Regards,
 Mike

 On Tue, May 13, 2014 at 1:42 PM, CDR vene...@gmail.com wrote:
  Let me digest all this.
  You must be right, because Fedora 20 containers are the only ones that
  use systemd, in my box
 
  many thanks
 
  Federico
 
  On Tue, May 13, 2014 at 1:24 PM, Michael H. Warfield m...@wittsend.com 
  wrote:
  On Tue, 2014-05-13 at 13:00 -0400, CDR wrote:
  I am forced to use libvirt-lxc only because nobody can help me solve
  this issue. If somebody can figure this out, then I will only use lXC.
  In a Fedora20 container, if I start it with the -d flag, then it
   never lets me enter the container via the console; it times out on me
  when I use
  lxc-console -n mycontainer.
 
  I guess the problem is, here, that this doesn't make sense (to me).  I
  work and develop under Fedora (17,18,19,20,rawhide).  I have containers
  of virtually every guest distro, even Gentoo and SUSE and a few others
  (NST, Kali) which are not supported.  I've got Fedora20 containers of
  both x86_64 and i686 archs.  I'm not seeing this problem.  I've started
  them without the -d as well as with the -d and currently start them
  using lxc-autostart and I'm just not seeing this problem.
 
  The problem is in trying to decide why your setup is (or your containers
  are) different or what you are doing different so we might address it
  (either in code or documentation).
 
  These are Fedora20 containers you are trying to start?  Under what rev
  where they created (not what you're running now)?  I know I had to add
  some code to the Fedora template to create the console devices...
 
  Look here:
 
  /var/lib/lxc/{container}/rootfs/etc/systemd/system/getty.target.wants
 
  Now look for a series of symlinks like this:
 
  [root@hydra getty.target.wants]# pwd
  /var/lib/lxc/Fedora20/rootfs/etc/systemd/system/getty.target.wants
  [root@hydra getty.target.wants]# ls -l
  total 0
   lrwxrwxrwx. 1 root root 17 Mar  7 19:02 getty@tty1.service -> ../getty@.service
   lrwxrwxrwx. 1 root root 17 Mar  7 19:02 getty@tty2.service -> ../getty@.service
   lrwxrwxrwx. 1 root root 17 Mar  7 19:02 getty@tty3.service -> ../getty@.service
   lrwxrwxrwx. 1 root root 17 Mar  7 19:02 getty@tty4.service -> ../getty@.service
 
  If you don't have them, that's your problem.  The lxc-fedora template
  now creates these automatically.  This is systemd stuff.  The
  pre-systemd Fedora template did not do this.  If you want more than
  four, you're going to have to create them yourself and add the changes
  to the config file for the vty's.
 
  If you don't have them, you

Re: [lxc-users] Hotplug new network interfaces not working

2014-05-13 Thread CDR
I am trying to create a new container inside a Fedora 20 box, fully updated.
It blows up immediately.
I can give you access if you wish. Maybe we can make a better template.
Note: I am root.
uname -r
3.14.3-200.fc20.x86_64


lxc-create -t fedora20 -n masterfe
Host CPE ID from /etc/os-release: cpe:/o:fedoraproject:fedora:20
Checking cache download in @LOCALSTATEDIR@/cache/lxc/fedora/x86_64/20/rootfs ...
Downloading fedora minimal ...
Fetching rpm name from
http://mirror.pnl.gov/fedora/linux/releases/20/Everything/x86_64/os//Packages/f...
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100   288  100   2880 0892  0 --:--:-- --:--:-- --:--:--   897
  0 00  215k0 0   240k  0 --:--:-- --:--:-- --:--:-- 2340k
Fetching fedora release rpm from
http://mirror.pnl.gov/fedora/linux/releases/20/Everything/x86_64/os//Packages/f/fedora-release-20-1.noarch.rpm..
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 34036  100 340360 0   107k  0 --:--:-- --:--:-- --:--:--  107k
Bootstrap Environment testing...

OS fedora is whitelisted.  Installation Bootstrap Environment not required.

rpm: arguments to --root (-r) must begin with a /
sed: can't read
/@LOCALSTATEDIR@/cache/lxc/fedora/x86_64/20/partial/etc/yum.repos.d/*:
No such file or directory
CRITICAL:yum.cli:--installroot must be an absolute path:
@LOCALSTATEDIR@/cache/lxc/fedora/x86_64/20/partial
Failed to download the rootfs, aborting.
Failed to download 'fedora base'
failed to install fedora
lxc_container: container creation template for masterfe failed
lxc_container: Error creating container masterfe


On Tue, May 13, 2014 at 8:15 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Tue, 2014-05-13 at 20:03 -0400, CDR wrote:
 I decided to generate  a new container and reinstall all my apps.
 A lot of work, but you successfully demolished all my work so far, for
 which I am thankful.

 Well, we don't mean to demolish others efforts but we have put a lot of
 work into these templates so others don't need to learn the the lessons
 we've learned and can avoid the sins we've committed.  Sorry if this has
 been more difficult than it needed to be.

 My deepest regards,
 Mike

 Philip

 On Tue, May 13, 2014 at 7:57 PM, Michael H. Warfield m...@wittsend.com 
 wrote:
  On Tue, 2014-05-13 at 18:13 -0400, CDR wrote:
  Dear Mike
  You are right, I only see one line.
 
  ls rootfs/etc/systemd/system/getty.target.wants -l
  total 0
   lrwxrwxrwx 1 root root 38 Apr 30 11:16 getty@tty1.service -> /usr/lib/systemd/system/getty@.service
 
  Ok...
 
  You wouldn't have the script to fix this around?
 
  I think Fajar pointed you to it.  It's in the template.
 
  I created my Fedora 20 containers from within a Fedora box, like this:
 
  #!/bin/sh
  source /etc/profile
  container=nat-1
  mkdir -p /var/lib/lxc/${container}/rootfs
  yum -y --releasever=20 --nogpg 
  --installroot=/var/lib/lxc/$container/rootfs \
install systemd autofs passwd yum fedora-release vim-minimal
  openssh-server openssh-clients gcc autogen automake\
  subversion procps-ng initscripts net-tools ethtool nano dhcp dhclient
  lsof bind-utils psmisc bash-completion policycoreutils\
  libvirt libcap-devel lxc deltarpm bridge-utils strace git rpm-build
  docbook2X graphviz man netstat-nat
  cp /usr/src//ifup-local /var/lib/lxc/$container/rootfs/sbin/
  cp /etc/sysconfig/network /var/lib/lxc/$container/rootfs/etc/sysconfig
  cp /usr/src/ifcfg-eth0
  /var/lib/lxc/$container/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0
  cp /usr/src/ifcfg-eth1
  /var/lib/lxc/$container/rootfs/etc/sysconfig/network-scripts/ifcfg-eth1
   echo pts/0 >> /var/lib/lxc/$container/rootfs/etc/securetty
  cp /usr/src/config /var/lib/lxc/$container/
  cp /usr/src/lxc*.rpm /var/lib/lxc/$container/rootfs/usr/src
  touch  /var/lib/lxc/$container/rootfs/etc/resolv.conf
  chroot /var/lib/lxc/$container/rootfs /bin/passwd root
 
  Oh, that explains a LOT and tells me you will have much bigger problems.
  These templates are designed to not only copy the distros into root file
  systems but to fine tune some of the peculiarities of running in a
  container, which can be (definitely IS) dependent on the startup init
  process.  You should read through some of these template scripts and
  understand, very thoroughly, how and why (I DO try and comment my
  templates about WHY I'm doing something that's non-intuitive) they are
  doing what they're doing before you attempt to create a script like
  this.
 
  Yours
 
  Federico
 
  Regards,
  Mike
 
  On Tue, May 13, 2014 at 1:42 PM, CDR vene...@gmail.com wrote:
   Let me digest all this.
   You must be right, because Fedora 20 containers are the only ones that
   use systemd, in my box
  
   many thanks
  
   Federico
  
   On Tue

Re: [lxc-users] Hotplug new network interfaces not working

2014-05-13 Thread CDR
I copied your file (copy-paste), and it ended up in
/usr/share/lxc/templates/lxc-fedora20
Then chmod +x
then lxc-create -t fedora20 -n masterfe

Am I wrong?



On Tue, May 13, 2014 at 8:39 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Tue, 2014-05-13 at 20:30 -0400, CDR wrote:
 I am trying to create a new container inside a Fedora 20 box, fully updated.
 It blows up immediately.
 I can give you access if you wish. Maybe we can make a better template.
 Note: I am root.
 uname -r
 3.14.3-200.fc20.x86_64


 lxc-create -t fedora20 -n masterfe
 Host CPE ID from /etc/os-release: cpe:/o:fedoraproject:fedora:20
 Checking cache download in @LOCALSTATEDIR@/cache/lxc/fedora/x86_64/20/rootfs 
 ...

 Woa woa woa woa!

 Something is SERIOUSLY wrong here.

 You should NEVER see @LOCALSTATEDIR@/cache/lxc/...  That's something
 from the lxc.fedora.in file that gets processed by autoconf to create
 the correct paths.  That should NEVER been seen in a running template
 script.

 What did you do?  did you copy lxc.fedora.in to lxc.fedora somewhere?
 How did you create your LXC installation?  You've got some serious
 problems in your deployed installation if you are seeing any sort of
 message that says @@.
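 
  A quick sanity check (a sketch; the path is the one from your earlier
  message) to confirm the placeholders were never expanded:
 
  grep -n '@LOCALSTATEDIR@' /usr/share/lxc/templates/lxc-fedora20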

 Anything from here down is considered invalid and disregarded.

 Downloading fedora minimal ...
 Fetching rpm name from
 http://mirror.pnl.gov/fedora/linux/releases/20/Everything/x86_64/os//Packages/f...
   % Total% Received % Xferd  Average Speed   TimeTime Time  
 Current
  Dload  Upload   Total   SpentLeft  Speed
 100   288  100   2880 0892  0 --:--:-- --:--:-- --:--:--   
 897
   0 00  215k0 0   240k  0 --:--:-- --:--:-- --:--:-- 
 2340k
 Fetching fedora release rpm from
 http://mirror.pnl.gov/fedora/linux/releases/20/Everything/x86_64/os//Packages/f/fedora-release-20-1.noarch.rpm..
   % Total% Received % Xferd  Average Speed   TimeTime Time  
 Current
  Dload  Upload   Total   SpentLeft  Speed
 100 34036  100 340360 0   107k  0 --:--:-- --:--:-- --:--:--  
 107k
 Bootstrap Environment testing...

 OS fedora is whitelisted.  Installation Bootstrap Environment not required.

 rpm: arguments to --root (-r) must begin with a /
 sed: can't read
 /@LOCALSTATEDIR@/cache/lxc/fedora/x86_64/20/partial/etc/yum.repos.d/*:
 No such file or directory
 CRITICAL:yum.cli:--installroot must be an absolute path:
 @LOCALSTATEDIR@/cache/lxc/fedora/x86_64/20/partial
 Failed to download the rootfs, aborting.
 Failed to download 'fedora base'
 failed to install fedora
 lxc_container: container creation template for masterfe failed
 lxc_container: Error creating container masterfe


 On Tue, May 13, 2014 at 8:15 PM, Michael H. Warfield m...@wittsend.com 
 wrote:
  On Tue, 2014-05-13 at 20:03 -0400, CDR wrote:
  I decided to generate  a new container and reinstall all my apps.
  A lot of work, but you successfully demolished all my work so far, for
  which I am thankful.
 
  Well, we don't mean to demolish others efforts but we have put a lot of
  work into these templates so others don't need to learn the the lessons
  we've learned and can avoid the sins we've committed.  Sorry if this has
  been more difficult than it needed to be.
 
  My deepest regards,
  Mike
 
  Philip
 
  On Tue, May 13, 2014 at 7:57 PM, Michael H. Warfield m...@wittsend.com 
  wrote:
   On Tue, 2014-05-13 at 18:13 -0400, CDR wrote:
   Dear Mike
   You are right, I only see one line.
  
   ls rootfs/etc/systemd/system/getty.target.wants -l
   total 0
    lrwxrwxrwx 1 root root 38 Apr 30 11:16 getty@tty1.service -> /usr/lib/systemd/system/getty@.service
  
   Ok...
  
   You wouldn't have the script to fix this around?
  
   I think Fajar pointed you to it.  It's in the template.
  
   I created my Fedora 20 containers from within a Fedora box, like this:
  
    #!/bin/sh
    source /etc/profile
    container=nat-1
    mkdir -p /var/lib/lxc/${container}/rootfs
    yum -y --releasever=20 --nogpg \
      --installroot=/var/lib/lxc/$container/rootfs \
      install systemd autofs passwd yum fedora-release vim-minimal \
      openssh-server openssh-clients gcc autogen automake \
      subversion procps-ng initscripts net-tools ethtool nano dhcp dhclient \
      lsof bind-utils psmisc bash-completion policycoreutils \
      libvirt libcap-devel lxc deltarpm bridge-utils strace git rpm-build \
      docbook2X graphviz man netstat-nat
    cp /usr/src/ifup-local /var/lib/lxc/$container/rootfs/sbin/
    cp /etc/sysconfig/network /var/lib/lxc/$container/rootfs/etc/sysconfig
    cp /usr/src/ifcfg-eth0 \
      /var/lib/lxc/$container/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0
    cp /usr/src/ifcfg-eth1 \
      /var/lib/lxc/$container/rootfs/etc/sysconfig/network-scripts/ifcfg-eth1
    echo pts/0 >> /var/lib/lxc/$container/rootfs/etc/securetty
    cp /usr/src/config /var/lib/lxc/$container/
    cp /usr/src/lxc*.rpm /var/lib/lxc/$container/rootfs/usr/src

[lxc-users] Question about Ubuntu Server

2014-05-11 Thread CDR
Dear Friends
I realize that this is not a list about Ubuntu, but here I have found
more knowledge and help than in any other place.
I am stuck since 48 hours ago in what takes 5 minutes in Fedora, i.e.,
setting up vncserver to create a desktop for me while the server is
headless, for it remains in text mode.
I am using Ubuntu Server LTS 14.04.
No matter what I do, I always end up in a grey screen, but I cannot
see any desktop, trying
gnome-session --session=gnome-fallback &
gnome-session --session=gnome-classic &
startxfce4 &

I followed these instructions
https://www.digitalocean.com/community/articles/how-to-setup-vnc-for-ubuntu-12

but to no avail.

Is this even possible in this version of Ubuntu?
I like to use one or two graphic apps, like virt-manager and
gnome-system-monitor. VNC gives me good results in Fedora. I cannot
see why the same setup cannot be replicated under Ubuntu Server.

Many thanks and again, I apologize for asking a question not directly
related to the list.

Yours

Federico
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about Ubuntu Server

2014-05-11 Thread CDR
let me try that setup.
I am a few hours from installing Fedora 20, but, hey, I hate to give up.

On Sun, May 11, 2014 at 6:10 PM, Robert Pendell
shi...@elite-systems.org wrote:
 On Sun, May 11, 2014 at 5:58 PM, CDR vene...@gmail.com wrote:
 Dear Friends
 I realize that this is not a list about Ubuntu, but here I have found
 more knowledge and help than in any other place.
 I am stuck since 48 hours ago in what takes 5 minutes in Fedora, i.e.,
 setting up vncserver to create a desktop for me while the server is
 headless, for it remains in text mode.
 I am using Ubuntu Server LTS 14.04.
 No matter what I do, I always end up in a grey screen, but I cannot
 see any desktop, trying
 gnome-session --session=gnome-fallback &
 gnome-session --session=gnome-classic &
 startxfce4 &

 I followed these instructions
 https://www.digitalocean.com/community/articles/how-to-setup-vnc-for-ubuntu-12

 but to no avail.

 Is this even possible in this version of Ubuntu?
 I like to use one or two graphic apps, like virt-manager and
 gnome-system-monitor. VNC gives me good results in Fedora. I cannot
 see why the same setup cannot be replicated under Ubuntu Server.

 Many thanks and again, I apologize for asking a question not directly
 related to the list.


 Yea it isn't exactly on topic but I've been down that road with 12.04
 and 14.04...

 You're not gonna get a perfect setup and I actually tried xfce4 but to
 no avail.  It loaded but many things were glitchy.

 My goal before was a clean and lean setup (essentials only) so I ended
 up installing tightvncserver and lxde instead.

 apt-get install lxde --no-install-recommends
 apt-get install tightvncserver

 I also install autocutsel for clipboard sync over vnc.  This is my
 xstartup file (drop it in .vnc folder)

 #!/bin/sh

 xrdb $HOME/.Xresources
 xsetroot -solid grey
 #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
 #x-window-manager &
 # Fix to make GNOME work
 export XKL_XMODMAP_DISABLE=1
 exec /usr/bin/autocutsel &
 /etc/X11/Xsession

 From there I just start vncserver when I need a desktop to work on.  I
 don't leave it running all the time and when it is up I have it
 tunneled over ssh (since I'm usually in there when I need the
 desktop).
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about Ubuntu Server

2014-05-11 Thread CDR
I found that VNC is several orders of magnitude more responsive than
any X11 forwarding.
Am I wrong?

On Sun, May 11, 2014 at 6:34 PM, Fajar A. Nugraha l...@fajar.net wrote:
 On Mon, May 12, 2014 at 5:26 AM, CDR vene...@gmail.com wrote:
 let me try that setup.
 I am a few hours from installing Fedora 20, but, hey, I hate to give up.

 If you simply need a GUI, the EASIEST method by far is to:
 - install xubuntu-desktop on the server
 - install x2go server on the server, and x2go client on your
 workstation (https://www.google.com/search?q=x2go)
 - connect with x2go client, selecting xfce as your desktop type

 --
 Fajar
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about Ubuntu Server

2014-05-11 Thread CDR
Dear Robert
I did it as the vnc user and it works. But when I start the vncserver as
root, I get the classic white/gray screen.
I want to be root because I need to use virt-manager and it can only
be run as root, plus I am in a secure environment.
I copied your xstartup to my /root/.vnc/
Any idea how root can start the desktop?

Yours
Philip

On Sun, May 11, 2014 at 6:10 PM, Robert Pendell
shi...@elite-systems.org wrote:
 On Sun, May 11, 2014 at 5:58 PM, CDR vene...@gmail.com wrote:
 Dear Friends
 I realize that this is not a list about Ubuntu, but here I have found
 more knowledge and help than in any other place.
 I am stuck since 48 hours ago in what takes 5 minutes in Fedora, i.e.,
 setting up vncserver to create a desktop for me while the server is
 headless, for it remains in text mode.
 I am using Ubuntu Server LTS 14.04.
 No matter what I do, I always end up in a grey screen, but I cannot
 see any desktop, trying
 gnome-session --session=gnome-fallback &
 gnome-session --session=gnome-classic &
 startxfce4 &

 I followed these instructions
 https://www.digitalocean.com/community/articles/how-to-setup-vnc-for-ubuntu-12

 but to no avail.

 Is this even possible in this version of Ubuntu?
 I like to use one or two graphic apps, like virt-manager and
 gnome-system-monitor. VNC gives me good results in Fedora. I cannot
 see why the same setup cannot be replicated under Ubuntu Server.

 Many thanks and again, I apologize for asking a question not directly
 related to the list.


 Yea it isn't exactly on topic but I've been down that road with 12.04
 and 14.04...

 You're not gonna get a perfect setup and I actually tried xfce4 but to
 no avail.  It loaded but many things were glitchy.

 My goal before was a clean and lean setup (essentials only) so I ended
 up installing tightvncserver and lxde instead.

 apt-get install lxde --no-install-recommends
 apt-get install tightvncserver

 I also install autocutsel for clipboard sync over vnc.  This is my
 xstartup file (drop it in .vnc folder)

 #!/bin/sh

 xrdb $HOME/.Xresources
 xsetroot -solid grey
 #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
 #x-window-manager &
 # Fix to make GNOME work
 export XKL_XMODMAP_DISABLE=1
 exec /usr/bin/autocutsel &
 /etc/X11/Xsession

 From there I just start vncserver when I need a desktop to work on.  I
 don't leave it running all the time and when it is up I have it
 tunneled over ssh (since I'm usually in there when I need the
 desktop).
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about Ubuntu Server

2014-05-11 Thread CDR
I just tried and they cannot.
In other distributions, there is a file under /etc/X11/ where you need to add
AllowRemoteRoot=true
but I cannot find it in Ubuntu.
Has anybody ever used virt-manager from a vnc client?
I am sure there is a workaround.

Yours

Philip

On Sun, May 11, 2014 at 7:30 PM, jjs - mainphrame j...@mainphrame.com wrote:
 I'm pretty sure non-root users can use virt-manager as long as they are in
 the kvm group


 On Sun, May 11, 2014 at 4:20 PM, CDR vene...@gmail.com wrote:

 Dear Robert
 I did it as the vnc user and it works. But when I start the vncserver as
 root, I get the classic white/gray screen.
 I want to be root because I need to use virt-manager and it can only
 be run as root, plus I am in a secure environment.
 I copied your xstartup to my /root/.vnc/
 Any idea how root can start the desktop?

 Yours
 Philip

 On Sun, May 11, 2014 at 6:10 PM, Robert Pendell
 shi...@elite-systems.org wrote:
  On Sun, May 11, 2014 at 5:58 PM, CDR vene...@gmail.com wrote:
  Dear Friends
  I realize that this is not a list about Ubuntu, but here I have found
  more knowledge and help than in any other place.
  I am stuck since 48 hours ago in what takes 5 minutes in Fedora, i.e.,
  setting up vncserver to create a desktop for me while the server is
  headless, for it remains in text mode.
  I am using Ubuntu Server LTS 14.04.
  No matter what I do, I always end up in a grey screen, but I cannot
  see any desktop, trying
  gnome-session --session=gnome-fallback &
  gnome-session --session=gnome-classic &
  startxfce4 &
 
  I followed these instructions
 
  https://www.digitalocean.com/community/articles/how-to-setup-vnc-for-ubuntu-12
 
  but to no avail.
 
  Is this even possible in this version of Ubuntu?
  I like to use one or two graphic apps, like virt-manager and
  gnome-system-monitor. VNC gives me good results in Fedora. I cannot
  see why the same setup cannot be replicated under Ubuntu Server.

  Many thanks and again, I apologize for asking a question not directly
  related to the list.
 
 
  Yea it isn't exactly on topic but I've been down that road with 12.04
  and 14.04...
 
  You're not gonna get a perfect setup and I actually tried xfce4 but to
  no avail.  It loaded but many things were glitchy.
 
  My goal before was a clean and lean setup (essentials only) so I ended
  up installing tightvncserver and lxde instead.
 
  apt-get install lxde --no-install-recommends
  apt-get install tightvncserver
 
  I also install autocutsel for clipboard sync over vnc.  This is my
  xstartup file (drop it in .vnc folder)
 
  #!/bin/sh
 
  xrdb $HOME/.Xresources
  xsetroot -solid grey
  #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
  #x-window-manager &
  # Fix to make GNOME work
  export XKL_XMODMAP_DISABLE=1
  exec /usr/bin/autocutsel &
  /etc/X11/Xsession
 
  From there I just start vncserver when I need a desktop to work on.  I
  don't leave it running all the time and when it is up I have it
  tunneled over ssh (since I'm usually in there when I need the
  desktop).
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users



 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about Ubuntu Server

2014-05-11 Thread CDR
I did not know that there was a virt-manager for Windows.
Let me research that.
By the way, I solved it by manually assigning root's UID and group ID
to the vnc user.
Now everything works.
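
A sketch of that kind of change (the exact command is an assumption, and
sharing uid 0 is only sane in a trusted environment like this one):

    usermod -o -u 0 -g 0 vnc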

Yours

Federico

On Sun, May 11, 2014 at 8:57 PM, Fajar A. Nugraha l...@fajar.net wrote:
 On Mon, May 12, 2014 at 7:04 AM, CDR vene...@gmail.com wrote:
 I just tried and they cannot.
 In other distributions, there is a file under /etc/X11/ where you need to add
 AllowRemoteRoot=true
 but I cannot find it in Ubuntu.
 Has anybody ever used virt-manager from a vnc client?
 I am sure there is a workaround.

 This is one of those cases where you can either insist on doing it
 EXACTLY the way you wanted (in this case, via a vnc client), or try the
 many other suggestions that have been
 tested-to-work-by-others-and-easy-to-setup. The suggestions so far was
 x2go, or an .xstartup that would start xterm and a window manager.

 There's also a much easier method not mentioned earlier, which would
 work just fine if you just want to use virt-manager: ssh X forwarding.
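
 For example (a sketch, host name assumed):

     ssh -X root@server virt-manager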

 Also, have you read
 http://docs.fedoraproject.org/en-US/Fedora/18/html/Virtualization_Administration_Guide/chap-Virtualization_Administration_Guide-Remote_management_of_virtualized_guests.html
 ? IIRC you should be able to run virt-manager on any workstation, and
 have it connect-and-handle-all-the-console-display-forwarding via ssh.

 --
 Fajar
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Memory limit fails in Ubuntu Server

2014-05-09 Thread CDR
lxc-start -n utel-kde

lxc-start: call to cgmanager_set_value_sync failed: invalid request
lxc-start: Error setting cgroup memory.limit_in_bytes limit lxc/utel-kde
lxc-start: Error setting memory.limit_in_bytes to 5GB for utel-kde
lxc-start: failed to setup the cgroup limits for 'utel-kde'
lxc-start: failed to spawn 'utel-kde'

the line in my config file is

lxc.cgroup.memory.limit_in_bytes = 536870910

Yours
Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Memory limit fails in Ubuntu Server

2014-05-09 Thread CDR
I found the issue
This line
lxc.cgroup.memory.limit_in_bytes = 5G

makes the container fail to start
while this one works
lxc.cgroup.memory.limit_in_bytes = 536870910

I think that we should be free to specify the memory in any format, be
it MB, G, etc.
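
For reference, in the meantime the byte value can be computed by hand
(here for a 5 GiB limit):

    echo $((5 * 1024 * 1024 * 1024))    # prints 5368709120

and used directly in the config:

    lxc.cgroup.memory.limit_in_bytes = 5368709120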

Philip

On Fri, May 9, 2014 at 6:00 AM, CDR vene...@gmail.com wrote:
 lxc-start -n utel-kde

 lxc-start: call to cgmanager_set_value_sync failed: invalid request
 lxc-start: Error setting cgroup memory.limit_in_bytes limit lxc/utel-kde
 lxc-start: Error setting memory.limit_in_bytes to 5GB for utel-kde
 lxc-start: failed to setup the cgroup limits for 'utel-kde'
 lxc-start: failed to spawn 'utel-kde'

 the line in my config file is

 lxc.cgroup.memory.limit_in_bytes = 536870910

 Yours
 Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Memory limit fails in Ubuntu Server

2014-05-09 Thread CDR
I just tested and in fact, the memory restriction does not work.
a) set a 5G limit for the container
b) started the container
c) gave 16 G memory to mysql
D) restarted mysql
it works fine and it also shows the memory in show variables like '%buffer%'

Any idea how to work around this? I am afraid any container can take
down the box.

Philip

On Fri, May 9, 2014 at 6:09 AM, CDR vene...@gmail.com wrote:
 I found the issue
 This line
 lxc.cgroup.memory.limit_in_bytes = 5G

 makes the container fail to start
 while this one works
 lxc.cgroup.memory.limit_in_bytes = 536870910

 I think that we should be free to specify the memory in any format, be
 it MB, G, etc.

 Philip

 On Fri, May 9, 2014 at 6:00 AM, CDR vene...@gmail.com wrote:
 lxc-start -n utel-kde

 lxc-start: call to cgmanager_set_value_sync failed: invalid request
 lxc-start: Error setting cgroup memory.limit_in_bytes limit lxc/utel-kde
 lxc-start: Error setting memory.limit_in_bytes to 5GB for utel-kde
 lxc-start: failed to setup the cgroup limits for 'utel-kde'
 lxc-start: failed to spawn 'utel-kde'

 the line in my config file is

 lxc.cgroup.memory.limit_in_bytes = 536870910

 Yours
 Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is it possible to change memory limits without restarting container?

2014-05-09 Thread CDR
I just tested and in fact, the memory restriction does not work.
a) set a 5G limit for the container
b) started the container
c) gave 16 G memory to mysql
D) restarted mysql
it works fine and it also shows the memory in show variables like '%buffer%'

Any idea how to work around this? I am afraid any container can take
down the box.

Philip

On Fri, May 9, 2014 at 5:58 AM, CDR vene...@gmail.com wrote:
 The memory limit fails in Ubuntu Server

 lxc-start -n utel-kde
 lxc-start: call to cgmanager_set_value_sync failed: invalid request
 lxc-start: Error setting cgroup memory.limit_in_bytes limit lxc/utel-kde
 lxc-start: Error setting memory.limit_in_bytes to 5GB for utel-kde
 lxc-start: failed to setup the cgroup limits for 'utel-kde'
 lxc-start: failed to spawn 'utel-kde'


 Yours
 Philip

 On Fri, May 9, 2014 at 5:22 AM, Wojciech Arabczyk ara...@gmail.com wrote:
 This is a drawback of the procutils package, and nothing you can
 really do about it. Better explained here:

 http://fabiokung.com/2014/03/13/memory-inside-linux-containers/

 On 9 May 2014 11:13, CDR vene...@gmail.com wrote:
 Dear Serge
 I type inside the container
 free -g
 and it shows the memory of the whole box.
 I am sure that is not by design.
 The customer must not see that.

 Philip


 On Thu, May 8, 2014 at 11:04 PM, Serge Hallyn serge.hal...@ubuntu.com 
 wrote:
 Quoting CDR (vene...@gmail.com):
 Dear Friends
 I found an example for the config file
 lxc.cgroup.memory.limit_in_bytes = 5GB
 lxc.cgroup.memory.memsw.limit_in_bytes = 6G

  But the container still shows 100% of the memory.

 what do you mean by 'shows'?
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users



 --
 pozdrawiam
 Wojciech Arabczyk
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Console not accessible in Fedora 20 LXC containers

2014-05-09 Thread CDR
On any Fedora 20 LXC container (not libvirt), if I start the container
without the -d flag, I can enter the container via the console just
fine.
However, if I start the container with the -d flag, I get stuck like this:

 lxc-console -n fasterisk

Connected to tty 1
Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself

Note: the same container, when started via libvirt, is fine; I can
access the console after starting it, which is actually the only way
that libvirt allows to do this.

I tried adding pts/0 to /etc/securetty, or removing it, but it has no effect.

Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Memory limit fails in Ubuntu Server

2014-05-09 Thread CDR
Sorry, where do I see that
memory.max_usage_in_bytes

for my container?
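
For reference, two ways to read it from the host (container name and
cgroup mount path are assumptions):

    lxc-cgroup -n utel-kde memory.max_usage_in_bytes
    cat /sys/fs/cgroup/memory/lxc/utel-kde/memory.max_usage_in_bytes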

On Fri, May 9, 2014 at 9:21 AM, Serge Hallyn serge.hal...@ubuntu.com wrote:
 Quoting CDR (vene...@gmail.com):
 I just tested and in fact, the memory restriction does not work.
 a) set a 5G limit for the container
 b) started the container
 c) gave 16 G memory to mysql

 But did it actually fill up the memory?  What is memory.max_usage_in_bytes
 showing?

 D) restarted mysql
  it works fine and it also shows the memory in show variables like '%buffer%'

 Any idea how to work around this? I am afraid any container can take
 down the box.

 Philip

 On Fri, May 9, 2014 at 6:09 AM, CDR vene...@gmail.com wrote:
  I found the issue
  This line
  lxc.cgroup.memory.limit_in_bytes = 5G
 
  makes the container fail to start
  while this one works
  lxc.cgroup.memory.limit_in_bytes = 536870910
 
  I think that we should be free to specify the memory in any format, be
  it MB, G, etc.
 
  Philip
 
  On Fri, May 9, 2014 at 6:00 AM, CDR vene...@gmail.com wrote:
  lxc-start -n utel-kde
 
  lxc-start: call to cgmanager_set_value_sync failed: invalid request
  lxc-start: Error setting cgroup memory.limit_in_bytes limit lxc/utel-kde
  lxc-start: Error setting memory.limit_in_bytes to 5GB for utel-kde
  lxc-start: failed to setup the cgroup limits for 'utel-kde'
  lxc-start: failed to spawn 'utel-kde'
 
  the line in my config file is
 
  lxc.cgroup.memory.limit_in_bytes = 536870910
 
  Yours
  Philip
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-console produces no output

2014-05-09 Thread CDR
I detected an issue with all Fedora 20 containers that may benefit
from the --nohangup workaround.
If I start the container with the -d flag, then I cannot enter
the container via the console later.
If I start the container without the -d flag, it works, but then I
cannot get out.

So how do I try this workaround? If you give me precise instructions I
will try it.
Note: I have containers of 3 types, Debian, SLES and Fedora, only
Fedora has the issue.

Philip

On Fri, May 9, 2014 at 9:56 AM, Dwight Engen dwight.en...@oracle.com wrote:
 On Fri, 9 May 2014 00:18:13 -0400
 Robert Pendell shi...@elite-systems.org wrote:

 On Fri, May 9, 2014 at 12:12 AM, Robert Pendell
 shi...@elite-systems.org wrote:
  On Thu, May 8, 2014 at 10:56 PM, Serge Hallyn
  serge.hal...@ubuntu.com wrote:
  Quoting Robert Pendell (shi...@elite-systems.org):
  OS: Ubuntu 14.04 LTS x86_64
  Kernel: Host-Supplied 3.14.1
  Provider: Linode
  Host Virtualization: Xen Paravirtualized
  LXC Version: 1.0.3-0ubuntu3
  Guest: CentOS 6 i386
 
  I know I've seen this issue crop up before however I don't know
  how it has been fixed and the solution may vary.
 
  I'm using a standard Centos template created using the download
  type. It is running in unprivileged mode and for some reason
  lxc-console isn't producing any output when I attempt to connect
  to it.
 
  Here is what I get:
  shinji@icarus:~$ sudo lxc-console -n gateone
 
  Connected to tty 1
  Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter
  Ctrl+a itself
 
  Before you run lxc-console, find the host pid of the mingetty on
  pts/1, and strace -f -p that one.
 
 
  Heh... I tried and it keeps telling me the process id doesn't exist
  then I noticed that the pid is changing so I went into the guest and
  looked at the messages log and it is being spammed with this...
 
  May  8 23:54:17 localhost init: tty (/dev/tty1) main process (2335)
  terminated with status 1
  May  8 23:54:17 localhost init: tty (/dev/tty1) main process ended,
  respawning May  8 23:54:17 localhost /sbin/mingetty[2348]: tty1:
  vhangup() failed
 
  *a few minutes later*
  Found the fix.  This might have to go into the prep script for
  CentOS as it seems to happen for others as well.  It seems the
  folks working on the Oracle image figured this out.
 
  https://github.com/lxc/lxc/blob/master/templates/lxc-oracle.in
  Line 368 comments about how vhangup fails with userns.  It probably
  is related to unprivileged containers only although I have not
  checked with privileged so don't quote me on that last bit.  All I
  know is it fixed the issue for me.  Others may gain usefulness in
  this.
 
  Basically I edited the /etc/init/tty.conf script and added
  --nohangup to the mingetty exec command in the script.
 
  It became this: exec /sbin/mingetty $TTY --nohangup
 
  This will propagate to /dev/console as well due to the way that it
  is being created during boot (it calls the tty init using
  TTY=console as a parameter).

 Adding to my last message:
 https://github.com/lxc/lxc/commit/2e83f7201c5d402478b9849f0a85c62d5b9f1589
 This is needed when using the user namespace since the kernel check
 does not allow user_ns root to successfully call vhangup(2), and
 mingetty will quit in this case.

 So it does appear to only affect unprivileged containers.

 Hi Robert, yes its only when using user_ns. I added this as a
 workaround because we never did figure out vhangup in a user_ns. See
 http://comments.gmane.org/gmane.linux.kernel.containers.lxc.devel/2632
 for the original discussion.

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] NUMA and LXC

2014-05-09 Thread CDR
I think we need to add a configuration to the global lxc.conf whereby
any given container may run only on one NUMA node, and if that is not
possible, it should not even start.

The performance for a container that is contained, so to speak, in a
single NUMA node should be much higher than for a container that has its
processes all over the place.

I noticed this in Hyper-V virtual machines, where you may set this
restriction. The performance as measured by hdparm -tT --direct is
twice that of a virtual machine started without this feature.

This is the distance between nodes, as reported by numactl --hardware

node distances:
node   0   1   2   3
  0:  10  20  20  20
  1:  20  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] NUMA and LXC

2014-05-09 Thread CDR
...and does that go in the container's config or in the master lxc config file?
Sorry to ask, I am still getting the hang of LXC.

On Fri, May 9, 2014 at 12:46 PM, Serge Hallyn serge.hal...@ubuntu.com wrote:
 lxc.cgroup.cpuset.mems = 1

 Quoting CDR (vene...@gmail.com):
 I think we need to add a configuration to the global lxc.conf whereby
 any given container may run only on one NUMA node, and if that is not
 possible, it should not even start.

 The performance for a container that is contained, so to speak, in a
 single NUMA node should be much higher than for a container that has its
 processes all over the place.

 I noticed this in Hyper-V virtual machines, where you may set this
 restriction. The performance as measured by hdparm -tT --direct is
 twice that of a virtual machine started without this feature.

 This is the distance between nodes, as reported by numactl --hardware

 node distances:
 node   0   1   2   3
   0:  10  20  20  20
   1:  20  10  20  20
   2:  20  20  10  20
   3:  20  20  20  10
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Ubuntu Server LXC networking Problem

2014-05-09 Thread CDR
No, my container has only one interface, and the host has many cards
but only one with an IP address; the rest are UP but are not
configured.
This scenario makes policy routing unnecessary.


On Fri, May 9, 2014 at 10:51 AM, Michael H. Warfield m...@wittsend.com wrote:
 On Fri, 2014-05-09 at 08:00 -0400, CDR wrote:
 Does anybody know where in Canonical I may get support for LXC
 bridged-NAT networking?
 If the box is multihomed, it does not work. Although only one of the
 NICs has an IP address, it simply cannot route packets to the network.
 You may ping the default gateway, but that is it.

 The same happens in every other distribution, but I switched over to
 Ubuntu Server for a host.

 I seem to recall in another message you sent that you were not just
 using nat/bridge but also policy routing.  Is that still true?  If so,
 please be sure to mention that along with the bridge networking.  You've
 got a very unusual setup.

 Philip

 Regards,
 Mike
 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!


 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] NUMA and LXC

2014-05-09 Thread CDR
That seems to be a way to confine a container to a single NUMA node.
What Hyper-V does is guarantee that no virtual machine will ever use
memory outside the node, whatever node it was started on, and they
choose the nodes for us.
Does it make any sense or is this a dream?
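
For reference, a manual version of that kind of pinning with the cpuset
settings Serge mentions (quoted below) would look like this in a
container config; the node number and cpu list are assumptions, check
numactl --hardware for the real topology:

    # pin both memory allocations and CPUs to NUMA node 0
    lxc.cgroup.cpuset.mems = 0
    lxc.cgroup.cpuset.cpus = 0-7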

On Fri, May 9, 2014 at 2:12 PM, Serge Hallyn serge.hal...@ubuntu.com wrote:
 That goes into the container config.  If you wanted all containers on
 the same node you could put it into /etc/lxc/default.conf before creating
 containers.

 Quoting CDR (vene...@gmail.com):
 ...and does that go in the container's config or in the master lxc config file?
 Sorry to ask, I am still getting the hang of LXC.

 On Fri, May 9, 2014 at 12:46 PM, Serge Hallyn serge.hal...@ubuntu.com 
 wrote:
  lxc.cgroup.cpuset.mems = 1
 
  Quoting CDR (vene...@gmail.com):
  I think we need to add a configuration to the global lxc.conf whereby
  any given container may run only on one NUMA node, and if that is not
  possible, it should not even start.

  The performance for a container that is contained, so to speak, in a
  single NUMA node should be much higher than for a container that has its
  processes all over the place.

  I noticed this in Hyper-V virtual machines, where you may set this
  restriction. The performance as measured by hdparm -tT --direct is
  twice that of a virtual machine started without this feature.
 
  This is the distance between nodes, as reported by numactl --hardware
 
  node distances:
  node   0   1   2   3
0:  10  20  20  20
1:  20  10  20  20
2:  20  20  10  20
3:  20  20  20  10
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Ubuntu Server LXC networking Problem

2014-05-09 Thread CDR
The issue happens only with libvirt's virbr0 default network.
It does not happen with LXC's lxcbr0.

I am attaching the required information

On Fri, May 9, 2014 at 2:03 PM, CDR vene...@gmail.com wrote:
 No, my container has only one interface, and the host has many cards
 but only one with an IP address; the rest are UP but are not
 configured.
 This scenario makes policy routing unnecessary.


 On Fri, May 9, 2014 at 10:51 AM, Michael H. Warfield m...@wittsend.com 
 wrote:
 On Fri, 2014-05-09 at 08:00 -0400, CDR wrote:
 Does anybody know where in Canonical I may get support for LXC
 bridged-NAT networking?
 If the box is multihomed, it does not work. Although only one of the
 NICs has an IP address, it simply cannot route packets to the network.
 You may ping the default gateway, but that is it.

 The same happens in every other distribution, but I switched over to
 Ubuntu Server for a host.

 I seem to recall in another message you sent that you were not just
 using nat/bridge but also policy routing.  Is that still true?  If so,
 please be sure to mention that along with the bridge networking.  You've
 got a very unusual setup.

 Philip

 Regards,
 Mike
 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!


 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.122.17  netmask 255.255.255.0  broadcast 192.168.122.255
ether 00:16:3b:d4:cd:ea  txqueuelen 1000  (Ethernet)
RX packets 750  bytes 41204 (40.2 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 19  bytes 3422 (3.3 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet   netmask 255.255.255.128  broadcast XXX
ether 00:16:3c:d3:cd:ea  txqueuelen 0  (Ethernet)
RX packets 31450  bytes 1909128 (1.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 16  bytes 2336 (2.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
loop  txqueuelen 0  (Local Loopback)
RX packets 2  bytes 140 (140.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2  bytes 140 (140.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
ifconfig
eth0  Link encap:Ethernet  HWaddr c8:1f:66:dc:98:c0
  inet addr:X.156  Bcast:X.255  Mask:255.255.255.128
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:348168 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8916 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:23279848 (23.2 MB)  TX bytes:1313982 (1.3 MB)

eth1  Link encap:Ethernet  HWaddr c8:1f:66:dc:98:c2
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:777578 errors:0 dropped:48 overruns:0 frame:0
  TX packets:101 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:119205975 (119.2 MB)  TX bytes:15715 (15.7 KB)

eth2  Link encap:Ethernet  HWaddr c8:1f:66:dc:98:c4
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:332853 errors:0 dropped:0 overruns:0 frame:0
  TX packets:25 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:21860357 (21.8 MB)  TX bytes:4224 (4.2 KB)

eth3  Link encap:Ethernet  HWaddr c8:1f:66:dc:98:c6
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:330596 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:21714991 (21.7 MB)  TX bytes:0 (0.0 B)

eth4  Link encap:Ethernet  HWaddr 00:0a:f7:33:0f:08
  UP BROADCAST MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth5  Link encap:Ethernet  HWaddr 00:0a:f7:33:0f:0a
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:241 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:35186 (35.1 KB)  TX bytes:0 (0.0 B)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0

Re: [lxc-users] LXC Compilation fails under Suse Enterprise

2014-05-07 Thread CDR
Dear Thorsten
I did request to be included in the Beta, and I got this response:
Thank you for your interest in the SUSE Linux Enterprise 12 beta
program. Due to the overwhelming response, our program is now full. We
will keep your request on file, and get back to you should some
additional space open up.

Thank you again for your interest.
SUSE Linux Enterprise Beta Team:

Can you help?

This email address is associated with the account.

Philip


On Wed, May 7, 2014 at 3:44 AM, Thorsten Behrens tbehr...@suse.com wrote:
 CDR wrote:
 I was under the impression that the LXC group could make this compile
 under every major distribution.
 Is there any way somebody from the LXC group can research this with
 the Suse guys? They surely
 will talk to you people, but not to a customer,  unless I pay support fees.
 I imagine I need to install a newer autoconf or automake

  rpm -qa | grep auto
 automake-1.10.1-4.131.9.1
 autoconf-2.63-1.158

 Hi Philip,

 for SLES, as with any enterprise Linux, either use the package
 provided by the platform (lxc-0.8.0 in your case), request an update
 (that usually has a price tag), or build it yourself. For the latter,
 Leonid has a sensible suggestion - surely you don't want to run git
 master in production?

 FWIW, if you were looking into fedora - maybe opensuse 13.1 is an
 option to consider. Stock lxc there is 0.9.0, with 1.0.3 slated to be
 in 13.2. ;)

 HTH,

 -- Thorsten

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] {Disarmed} [lxc-devel] CentOS 6.3 kernel-2.6.32-279.el6.x86_64 crash

2014-05-07 Thread CDR
I had to install kernel 3.14.2 in order to avoid crashes with LXC.



On Wed, May 7, 2014 at 12:41 PM, Shibashish shi...@gmail.com wrote:
 Upgraded lxc
 lxc-libs-1.0.3-1.el6.x86_64
 lxc-1.0.3-1.el6.x86_64

 CentOS release 6.3 (Final)

 uname -a
 Linux myhost 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012
 x86_64 x86_64 x86_64 GNU/Linux


 But the problem persists, have had couple of kernel panics.

------------[ cut here ]------------
 kernel BUG at mm/slab.c:533!
 invalid opcode:  [#1] SMP
 last sysfs file: /sys/devices/virtual/dmi/id/sys_vendor
 CPU 0
 Modules linked in: veth bridge stp llc ipv6 e1000e(U) sg microcode i2c_i801
 iTCO_wdt iTCO_vendor_support shpchp i5000_edac edac_core i5k_amb ioatdma dca
 ext3 jbd mbcache sd_mod crc_t10dif aacraid pata_acpi ata_generic ata_piix
 radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core dm_mirror dm_region_hash
 dm_log dm_mod [last unloaded: scsi_wait_scan]

 Pid: 0, comm: swapper Tainted: G  I---
 2.6.32-279.el6.x86_64 #1 Supermicro X7DVL/X7DVL
 RIP: 0010:[81163f75]  [81163f75] free_block+0x165/0x170
 RSP: 0018:8800282032d0  EFLAGS: 00010046
 RAX: ea000a54e368 RBX: 88042fcf03c0 RCX: 0010
 RDX: 0040 RSI: 8802f3bb6d40 RDI: 8802f3aeb000
 RBP: 880028203320 R08: ea000e79b720 R09: 
 R10:  R11: 80042000 R12: 000c
 R13: 88042fea13a8 R14: 0002 R15: ea00
 FS:  () GS:88002820() knlGS:
 CS:  0010 DS: 0018 ES: 0018 CR0: 8005003b
 CR2: 7fc077681000 CR3: 00042216f000 CR4: 06f0
 DR0:  DR1:  DR2: 
 DR3:  DR6: 0ff0 DR7: 0400
 Process swapper (pid: 0, threadinfo 81a0, task 81a8d020)
 Stack:
  88042fc216c0 8802f3bb6d40 100c 8802f3aeb000
 d 880028203360 8802f3bc4000 88042fea1380 0286
 d 88042fcf03c0 88042fea1398 880028203390 81164500
 Call Trace:
  IRQ
  [81164500] kfree+0x310/0x320
  [8143c949] ? enqueue_to_backlog+0x179/0x210
  [8142fef8] skb_release_data+0xd8/0x110
  [8143c949] ? enqueue_to_backlog+0x179/0x210
  [8142fa2e] __kfree_skb+0x1e/0xa0
  [8142fb72] kfree_skb+0x42/0x90
  [8143c949] enqueue_to_backlog+0x179/0x210
  [8143fb20] netif_rx+0xb0/0x160
  [8143fe32] dev_forward_skb+0x122/0x180
  [a03446e6] veth_xmit+0x86/0xe0 [veth]
  [8143b0cc] dev_hard_start_xmit+0x2bc/0x3f0
  [81458c1a] sch_direct_xmit+0x15a/0x1c0
  [8143f878] dev_queue_xmit+0x4f8/0x6f0
  [a03276bc] br_dev_queue_push_xmit+0x6c/0xa0 [bridge]
  [a032d378] br_nf_dev_queue_xmit+0x28/0xa0 [bridge]
  [a032de10] br_nf_post_routing+0x1d0/0x280 [bridge]
  [814665e9] nf_iterate+0x69/0xb0
  [a0327650] ? br_dev_queue_push_xmit+0x0/0xa0 [bridge]
  [814667a4] nf_hook_slow+0x74/0x110
  [a0327650] ? br_dev_queue_push_xmit+0x0/0xa0 [bridge]
  [a03276f0] ? br_forward_finish+0x0/0x60 [bridge]
  [a0327733] br_forward_finish+0x43/0x60 [bridge]
  [a032d9b8] br_nf_forward_finish+0x128/0x140 [bridge]
  [a032eea8] ? br_nf_forward_ip+0x318/0x3c0 [bridge]
  [a032eea8] br_nf_forward_ip+0x318/0x3c0 [bridge]
  [814665e9] nf_iterate+0x69/0xb0
  [a03276f0] ? br_forward_finish+0x0/0x60 [bridge]
  [814667a4] nf_hook_slow+0x74/0x110
  [a03276f0] ? br_forward_finish+0x0/0x60 [bridge]
  [a0327750] ? __br_forward+0x0/0xc0 [bridge]
  [a03277c2] __br_forward+0x72/0xc0 [bridge]
  [a0327601] br_flood+0xc1/0xd0 [bridge]
  [a0327625] br_flood_forward+0x15/0x20 [bridge]
  [a03287ae] br_handle_frame_finish+0x27e/0x2a0 [bridge]
  [a032e318] br_nf_pre_routing_finish+0x228/0x340 [bridge]
  [a032e88f] br_nf_pre_routing+0x45f/0x760 [bridge]
  [814665e9] nf_iterate+0x69/0xb0
  [a0328530] ? br_handle_frame_finish+0x0/0x2a0 [bridge]
  [814667a4] nf_hook_slow+0x74/0x110
  [a0328530] ? br_handle_frame_finish+0x0/0x2a0 [bridge]
  [a032895c] br_handle_frame+0x18c/0x250 [bridge]
  [8143a839] __netif_receive_skb+0x519/0x6f0
  [8143ca38] netif_receive_skb+0x58/0x60
  [8143cbe4] napi_gro_complete+0x84/0xe0
  [8143ce0b] dev_gro_receive+0x1cb/0x290
  [8143cf4b] __napi_gro_receive+0x7b/0x170
  [8143f06f] napi_gro_receive+0x2f/0x50
  [a027233b] e1000_receive_skb+0x5b/0x90 [e1000e]
  [a0275601] e1000_clean_rx_irq+0x241/0x4c0 [e1000e]
  [a027cb8d] e1000e_poll+0x8d/0x380 [e1000e]
  [8143] ? process_backlog+0x9a/0x100
  [8143f193] net_rx_action+0x103/0x2f0
  [81073ec1] __do_softirq+0xc1/0x1e0
  [810db800] ? 

[lxc-users] Unavailable loop devices

2014-05-06 Thread CDR
Dear Friends

I successfully created a SLES 11 SP3 container, but when I try to do this

mount -o loop /images/SLE-11-SP3-SDK-DVD-x86_64-GM-DVD1.iso /media

mount: Could not find any loop device. Maybe this kernel does not know
   about the loop device? (If so, recompile or `modprobe loop'.)

My host is Fedora 20 and the LXC version is

rpm -qa | grep lxc
libvirt-daemon-lxc-1.1.3.4-4.fc20.x86_64
libvirt-daemon-driver-lxc-1.1.3.4-4.fc20.x86_64
lxc-devel-1.0.0-1.fc20.x86_64
lxc-debuginfo-1.0.0-1.fc20.x86_64
lxc-libs-1.0.0-1.fc20.x86_64
lxc-1.0.0-1.fc20.x86_64

the configuration is:

lxc.start.auto = 0
lxc.start.delay = 5
lxc.start.order = 10

# When using LXC with apparmor, uncomment the next line to run unconfined:
#lxc.aa_profile = unconfined

lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm

# mounts point
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults  0 0
lxc.mount.entry = /images  /var/lib/lxc/utel-kde/rootfs/images none bind 0 0


lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth1
lxc.network.flags=up
lxc.network.hwaddr = e2:91:a8:17:97:e4
lxc.network.ipv4 = 0.0.0.0/21


How do I make the kernel loop module available for the container?

Yours
Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Unavailable loop devices

2014-05-06 Thread CDR
Dear Mike
It does work indeed.
I suggest that the developers add these two lines to the sample configuration.
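
For anyone following along: if the device nodes are missing from the
container's /dev, they can be created with mknod (the rootfs path is an
assumption):

    cd /var/lib/lxc/NAME/rootfs/dev
    mknod -m 600 loop-control c 10 237
    for i in 0 1 2 3; do mknod -m 660 loop$i b 7 $i; done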
Yours
Philip

On Tue, May 6, 2014 at 9:28 AM, Michael H. Warfield m...@wittsend.com wrote:
 On Tue, 2014-05-06 at 06:25 -0400, CDR wrote:
 Dear Friends

 I successfully created a SLES 11 SP3 container, but when I try to do this

 mount -o loop /images/SLE-11-SP3-SDK-DVD-x86_64-GM-DVD1.iso /media

 mount: Could not find any loop device. Maybe this kernel does not know
about the loop device? (If so, recompile or `modprobe loop'.)

 Add the following to your container configuration file:

 lxc.cgroup.devices.allow = c 10:237 rwm # loop-control
 lxc.cgroup.devices.allow = b 7:* rwm# loop*

 Then make sure you have the following devices in your container /dev
 directory...

 brw-rw----. 1 root disk  7,   0 May  2 13:03 /dev/loop0
 brw-rw----. 1 root disk  7,   1 May  2 13:03 /dev/loop1
 brw-rw----. 1 root disk  7,   2 May  2 13:03 /dev/loop2
 brw-rw----. 1 root disk  7,   3 May  2 13:03 /dev/loop3
 crw-------. 1 root root 10, 237 May  2 13:03 /dev/loop-control

 Regards,
 Mike

 My host is Fedora 20 and the LXC version is

 rpm -qa | grep lxc
 libvirt-daemon-lxc-1.1.3.4-4.fc20.x86_64
 libvirt-daemon-driver-lxc-1.1.3.4-4.fc20.x86_64
 lxc-devel-1.0.0-1.fc20.x86_64
 lxc-debuginfo-1.0.0-1.fc20.x86_64
 lxc-libs-1.0.0-1.fc20.x86_64
 lxc-1.0.0-1.fc20.x86_64

 the configuration is:

 lxc.start.auto = 0
 lxc.start.delay = 5
 lxc.start.order = 10

 # When using LXC with apparmor, uncomment the next line to run unconfined:
 #lxc.aa_profile = unconfined

 lxc.cgroup.devices.deny = a
 # /dev/null and zero
 lxc.cgroup.devices.allow = c 1:3 rwm
 lxc.cgroup.devices.allow = c 1:5 rwm
 # consoles
 lxc.cgroup.devices.allow = c 5:1 rwm
 lxc.cgroup.devices.allow = c 5:0 rwm
 lxc.cgroup.devices.allow = c 4:0 rwm
 lxc.cgroup.devices.allow = c 4:1 rwm
 # /dev/{,u}random
 lxc.cgroup.devices.allow = c 1:9 rwm
 lxc.cgroup.devices.allow = c 1:8 rwm
 lxc.cgroup.devices.allow = c 136:* rwm
 lxc.cgroup.devices.allow = c 5:2 rwm
 # rtc
 lxc.cgroup.devices.allow = c 254:0 rwm

 # mounts point
 lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
 lxc.mount.entry = sysfs sys sysfs defaults  0 0
 lxc.mount.entry = /images  /var/lib/lxc/utel-kde/rootfs/images none bind 0 0


 lxc.network.type=macvlan
 lxc.network.macvlan.mode=bridge
 lxc.network.link=eth1
 lxc.network.flags=up
 lxc.network.hwaddr = e2:91:a8:17:97:e4
 lxc.network.ipv4 = 0.0.0.0/21


  How do I make the kernel loop module available for the container?

 Yours
 Philip
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!


 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Unavailable loop devices

2014-05-06 Thread CDR
Well, I just found a real business case where your theory falls flat.
In a Suse Enterprise container, the only way to allow the owner of the
container to install new packages is to mount permanently the
original ISO and the original SDK ISO; otherwise zypper would not
work. The updates come from the internet, but new base packages you
need to fetch from the ISO. I am not sure if zypper just mounts
and dismounts them on the spot and frees the loop device.
Suppose my customer clones a container 50 times; this would blow
through the available loop devices.
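
For reference, the host-wide number of loop devices can be raised; a
sketch, with the value assumed:

    # loop built as a module:
    modprobe loop max_loop=64
    # loop built into the kernel: add max_loop=64 to the kernel
    # command line instead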

Yours

Philip

On Tue, May 6, 2014 at 11:06 AM, Michael H. Warfield m...@wittsend.com wrote:
 On Tue, 2014-05-06 at 10:33 -0400, CDR wrote:
 Dear Mike
 It does work indeed.
 I suggest that the developers add these two lines to the sample 
 configuration.

 It's been discussed and passed on, for various reasons, for the time being.  The
 need for it in containers is relatively limited.

 There also are currently some isolation issues between containers with
 the loop devices.  i.e.  Running losetup -l currently dumps the
 information of all the loop devices system wide even if you are in a
 container.  I'm not sure at this point in time if you did a losetup -d
 on a loop device in a container, which had not setup the loop device,
 what would happen.  I hadn't previously tested that yet but...  It seems
 to fail silently as if it succeeded but doesn't really do anything.
 It's not clean.  In most cases, using losetup to automatically manage
 the appropriate loop device does the right thing and avoids collisions.

 Then there's the issue of the number of available loop devices.  Because
 they're shared, if one container consumes 3 and another container
 requires 2, the second one is going to fail in the default configuration
 (Default is 4 - I run with 64).

 I would personally advise only adding loop devices to those containers
 that absolutely need them.  I don't think they are appropriate as
 default devices at this time when most containers don't even need them.
 I would especially avoid them in cases where you may be hosting
 containers for others.  I have about a half a dozen groups of containers
 I'm hosting for other friends and relatives and business associates on a
 big colocated server I run.  I wouldn't enable loop devices in any of
 those containers unless it was specifically requested and even then only
 for the duration of the need.  They know.  They've never asked.
 Certainly no need for that to be in a default configuration.

 Yes, that limits the container owner's ability to mount images, but that's
 really not that common in practice outside of developers.

 Building containers within containers, you may also run into problems
 with certain package installs and builds having unusual requirements for
 capabilities (setfcap comes immediately to mind).  I run into this when
 I created containers to build NST (Network Security Toolkit) images, in
 addition to the expected loop device issues.  That's another thing that
 should only be enabled on those specific containers requiring it.

 Yours
 Philip

 Regards,
 Mike

 On Tue, May 6, 2014 at 9:28 AM, Michael H. Warfield m...@wittsend.com 
 wrote:
  On Tue, 2014-05-06 at 06:25 -0400, CDR wrote:
  Dear Friends
 
  I successfully created a SLES 11 SP3 container, but when I try to do this
 
  mount -o loop /images/SLE-11-SP3-SDK-DVD-x86_64-GM-DVD1.iso /media
 
  mount: Could not find any loop device. Maybe this kernel does not know
 about the loop device? (If so, recompile or `modprobe loop'.)
 
  Add the following to your container configuration file:
 
  lxc.cgroup.devices.allow = c 10:237 rwm # loop-control
  lxc.cgroup.devices.allow = b 7:* rwm# loop*
 
  Then make sure you have the following devices in your container /dev
  directory...
 
  brw-rw----. 1 root disk  7,   0 May  2 13:03 /dev/loop0
  brw-rw----. 1 root disk  7,   1 May  2 13:03 /dev/loop1
  brw-rw----. 1 root disk  7,   2 May  2 13:03 /dev/loop2
  brw-rw----. 1 root disk  7,   3 May  2 13:03 /dev/loop3
  crw-------. 1 root root 10, 237 May  2 13:03 /dev/loop-control
 
  Regards,
  Mike
 
  My host is Fedora 20 and the LXC version is
 
  rpm -qa | grep lxc
  libvirt-daemon-lxc-1.1.3.4-4.fc20.x86_64
  libvirt-daemon-driver-lxc-1.1.3.4-4.fc20.x86_64
  lxc-devel-1.0.0-1.fc20.x86_64
  lxc-debuginfo-1.0.0-1.fc20.x86_64
  lxc-libs-1.0.0-1.fc20.x86_64
  lxc-1.0.0-1.fc20.x86_64
 
  the configuration is:
 
  lxc.start.auto = 0
  lxc.start.delay = 5
  lxc.start.order = 10
 
  # When using LXC with apparmor, uncomment the next line to run unconfined:
  #lxc.aa_profile = unconfined
 
  lxc.cgroup.devices.deny = a
  # /dev/null and zero
  lxc.cgroup.devices.allow = c 1:3 rwm
  lxc.cgroup.devices.allow = c 1:5 rwm
  # consoles
  lxc.cgroup.devices.allow = c 5:1 rwm
  lxc.cgroup.devices.allow = c 5:0 rwm
  lxc.cgroup.devices.allow = c 4:0 rwm
  lxc.cgroup.devices.allow = c 4:1 rwm
  # /dev/{,u}random
  lxc.cgroup.devices.allow = c 1:9 rwm

[lxc-users] LXC Compilation fails under Suse Enterprise

2014-05-05 Thread CDR
I know I am missing something, but I cannot figure it out.
It does compile in Fedora 20

git clone https://github.com/lxc/lxc.git
cd lxc
./autogen.sh
+ test -d autom4te.cache
+ rm -rf autom4te.cache
+ aclocal -I config
configure.ac:205: warning: macro `AM_COND_IF' not found in library
configure.ac:219: warning: macro `AM_COND_IF' not found in library
configure.ac:234: warning: macro `AM_COND_IF' not found in library
configure.ac:252: warning: macro `AM_COND_IF' not found in library
configure.ac:269: warning: macro `AM_COND_IF' not found in library
configure.ac:304: warning: macro `AM_COND_IF' not found in library
configure.ac:315: warning: macro `AM_COND_IF' not found in library
configure.ac:377: warning: macro `AM_COND_IF' not found in library
+ autoheader
+ autoconf
configure.ac:18: error: possibly undefined macro: AC_SUBST
  If this token and others are legitimate, please use m4_pattern_allow.
  See the Autoconf documentation.
configure.ac:36: error: possibly undefined macro: AC_MSG_CHECKING
configure.ac:72: error: possibly undefined macro: AC_MSG_RESULT
configure.ac:118: error: possibly undefined macro: AC_MSG_ERROR
configure.ac:199: error: possibly undefined macro: AC_CHECK_LIB
configure.ac:205: error: possibly undefined macro: AM_COND_IF
configure.ac:206: error: possibly undefined macro: AC_CHECK_HEADER
configure.ac:235: error: possibly undefined macro: PKG_CHECK_MODULES
configure.ac:305: error: possibly undefined macro: AM_PATH_PYTHON
configure.ac:307: error: possibly undefined macro: AC_DEFINE_UNQUOTED
configure.ac:328: error: possibly undefined macro: PKG_CHECK_VAR
+ exit 1
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXC Compilation fails under Suse Enterprise

2014-05-05 Thread CDR
I was under the impression that the LXC group could make this compile
under every major distribution.
Is there any way somebody from the LXC group can research this with
the Suse guys? They surely
will talk to you people, but not to a customer,  unless I pay support fees.
I imagine I need to install a newer autoconf or automake

 rpm -qa | grep auto
automake-1.10.1-4.131.9.1
autoconf-2.63-1.158
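
For reference, the missing macros normally come from newer tools than
these (an assumption based on the autogen output): AM_COND_IF needs
automake >= 1.11, and PKG_CHECK_MODULES comes from pkg.m4, which ships
with pkg-config. A sketch of a local, non-invasive upgrade:

    wget https://ftp.gnu.org/gnu/automake/automake-1.11.6.tar.gz
    tar xzf automake-1.11.6.tar.gz && cd automake-1.11.6
    ./configure --prefix=$HOME/tools && make && make install
    export PATH=$HOME/tools/bin:$PATH   # then re-run ./autogen.sh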

Can anybody from the developer's group confirm these versions?

Yours
Philip

On Mon, May 5, 2014 at 11:00 PM, Michael H. Warfield m...@wittsend.com wrote:
 On Mon, 2014-05-05 at 22:23 -0400, CDR wrote:
 I know I am missing something, but I cannot figure it out.
 It does compile in Fedora 20

 You're gonna have to talk to the Suse guys on that one.  I've got some
 of their E-Mail addreses.  I know the original author of the OpenSuse
 template is no longer involved but I know of one or two others who have
 taken his place and may be able to help.

 Regards,
 Mike

 git clone https://github.com/lxc/lxc.git
 cd lxc
 ./autogen.sh
 + test -d autom4te.cache
 + rm -rf autom4te.cache
 + aclocal -I config
 configure.ac:205: warning: macro `AM_COND_IF' not found in library
 configure.ac:219: warning: macro `AM_COND_IF' not found in library
 configure.ac:234: warning: macro `AM_COND_IF' not found in library
 configure.ac:252: warning: macro `AM_COND_IF' not found in library
 configure.ac:269: warning: macro `AM_COND_IF' not found in library
 configure.ac:304: warning: macro `AM_COND_IF' not found in library
 configure.ac:315: warning: macro `AM_COND_IF' not found in library
 configure.ac:377: warning: macro `AM_COND_IF' not found in library
 + autoheader
 + autoconf
 configure.ac:18: error: possibly undefined macro: AC_SUBST
   If this token and others are legitimate, please use m4_pattern_allow.
   See the Autoconf documentation.
 configure.ac:36: error: possibly undefined macro: AC_MSG_CHECKING
 configure.ac:72: error: possibly undefined macro: AC_MSG_RESULT
 configure.ac:118: error: possibly undefined macro: AC_MSG_ERROR
 configure.ac:199: error: possibly undefined macro: AC_CHECK_LIB
 configure.ac:205: error: possibly undefined macro: AM_COND_IF
 configure.ac:206: error: possibly undefined macro: AC_CHECK_HEADER
 configure.ac:235: error: possibly undefined macro: PKG_CHECK_MODULES
 configure.ac:305: error: possibly undefined macro: AM_PATH_PYTHON
 configure.ac:307: error: possibly undefined macro: AC_DEFINE_UNQUOTED
 configure.ac:328: error: possibly undefined macro: PKG_CHECK_VAR
 + exit 1
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

 --
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!


 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXC NAT failing to forward

2014-05-03 Thread CDR
Dear friends
I got stuck in the simplest part.
First I tried libvirt, using the default network, which works fine
in virtual machines.
I created a libvirt-LXC container, and I can ping the host; DHCP
works, etc., but no forwarding to the network.
A pure LXC container with this network failed.

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.ipv4 = 0.0.0.0/24

Then I removed that network, undefined it, and created an empty
bridge, br0, and set up a simple iptables script. Note, I am not using
any firewall for my box, only for natting.

#!/bin/sh
iptables -F
iptables -t nat -F

iptables --table nat -o eth1 --append POSTROUTING  -s 192.168.122.0/24
-j MASQUERADE
iptables -A FORWARD -i br0 -o eth1 -m state --state
ESTABLISHED,RELATED -j ACCEPT
iptables-save

the container can ping the default gateway at 192.168.122.1, but
again, no forwarding done.
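
For comparison, the FORWARD rule pair usually needed for this kind of
NAT setup (assuming br0 is the container bridge and eth1 the uplink)
also allows the outbound direction:

    iptables -A FORWARD -i br0 -o eth1 -j ACCEPT
    iptables -A FORWARD -i eth1 -o br0 -m state --state ESTABLISHED,RELATED -j ACCEPT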

My kernel has this configuration:


sysctl -A | grep bridge

net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0

sysctl -A | grep forward
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.br0.forwarding = 1
net.ipv4.conf.br0.mc_forwarding = 0
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.eth1.forwarding = 1
net.ipv4.conf.eth1.mc_forwarding = 0
net.ipv4.conf.lo.forwarding = 1
net.ipv4.conf.lo.mc_forwarding = 0
net.ipv4.conf.virbr0.forwarding = 1
net.ipv4.conf.virbr0.mc_forwarding = 0
net.ipv4.conf.virbr0-nic.forwarding = 1
net.ipv4.conf.virbr0-nic.mc_forwarding = 0
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_use_pmtu = 0


Can anybody point to what is happening?

Note: in the pure LXC configuration, it works fine if I use
lxc.network.type=macvlan
lxc.network.macvlan.mode=bridge
lxc.network.link=eth1
lxc.network.flags=up


Yours
Philip
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
