Re: [Lxc-users] bind (re)mount possible?

2013-10-24 Thread Ulli Horlacher
On Thu 2013-10-24 (15:11), Serge Hallyn wrote:

> If your kernel is new enough (check whether /proc/self/ns/mnt exists)
> you could lxc-attach into the container with the -e flag to keep
> elevated privileges, and do the remount.

Ubuntu 12.04:

root@vms3:~# l /proc/self/ns/mnt
l: /proc/self/ns/mnt - No such file or directory

root@vms3:~# uname -a
Linux vms3 3.2.0-55-generic #85-Ubuntu SMP Wed Oct 2 12:29:27 UTC 2013 x86_64 
x86_64 x86_64 GNU/Linux

What is "new enough"?

So, from the host system, a remount is not possible?
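
For reference, on a kernel that does expose /proc/self/ns/mnt, Serge's
suggestion would look roughly like the sketch below. The container name
"fex" is taken from the original post; the in-container path /nfs/sw is an
assumption based on the fstab shown there, and the exact lxc-attach options
depend on the installed lxc version:

# does the kernel expose mount namespaces?
ls /proc/self/ns/mnt
# if yes: attach with elevated privileges and remount the bind mount read-write
lxc-attach -n fex -e -- mount -o remount,bind,rw /nfs/sw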


-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@tik.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.tik.uni-stuttgart.de/
REF:<20131024201132.GA3248@ac100>



[Lxc-users] drop CAP_SYS_RAWIO?

2013-10-24 Thread Ulli Horlacher

So far, I drop these capabilities in my containers to enhance security:

lxc.cap.drop = mac_override
lxc.cap.drop = sys_module
lxc.cap.drop = sys_boot
lxc.cap.drop = sys_admin
lxc.cap.drop = sys_time

What about sys_rawio?
The problem is that this capability allows access to /proc/kcore.
Can I drop it or is it necessary for important programs?
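
For reference, dropping it would just be one more line next to the existing
ones; whether anything important in the container still needs it is exactly
the open question:

lxc.cap.drop = sys_rawio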

-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@tik.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.tik.uni-stuttgart.de/
REF:<20131024071900.gd12...@rus.uni-stuttgart.de>



[Lxc-users] veth vs macvlan

2013-10-23 Thread Ulli Horlacher
So far, I am using my containers with:

lxc.network.type = veth
lxc.network.link = br0
lxc.network.name = eth0

What are the differences (advantages and disadvantages) compared to the macvlan interface type?
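
For comparison, a macvlan setup would look roughly like this (bridge mode and
the physical eth0 link are assumptions for illustration, not taken from an
existing configuration):

lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = eth0
lxc.network.name = eth0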

-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@tik.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.tik.uni-stuttgart.de/
REF:<20131024064951.gb12...@rus.uni-stuttgart.de>



[Lxc-users] bind (re)mount possible?

2013-10-23 Thread Ulli Horlacher
I have a container running with:

root@vms2:/lxc# egrep 'fstab|lxc.cap.drop' fex.cfg 
lxc.mount = /lxc/fex.fstab
lxc.cap.drop = mac_override
lxc.cap.drop = sys_module
lxc.cap.drop = sys_boot
lxc.cap.drop = sys_admin
lxc.cap.drop = sys_time

root@vms2:/lxc# grep /sw fex.fstab
/nfs/rusnas/sw  /lxc/fex/nfs/sw none bind,ro 0 0

The problem is: "ro" for /lxc/fex/nfs/sw is wrong, it should be "rw".
Can I change it without restarting the whole container?
On a normal partition I would execute:
mount -o remount,rw /lxc/fex/nfs/sw

Is this possible with bind mounts for containers, too?

Because of lxc.cap.drop = sys_admin I cannot execute (re)mount commands
inside the container. 
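
For what it is worth, changing a bind mount from ro to rw normally needs an
explicit bind remount rather than a plain remount; whether this can be issued
from the host so that the container's mount namespace sees it is exactly the
question here (the path is the one from the fstab above):

mount -o remount,bind,rw /lxc/fex/nfs/sw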

-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@tik.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.tik.uni-stuttgart.de/
REF:<20131023235819.ga12...@rus.uni-stuttgart.de>



[Lxc-users] btrfs snapshots in container?

2013-05-29 Thread Ulli Horlacher
Is it possible to create btrfs snapshots inside a container?
Or should one avoid the combination of btrfs and LXC altogether?



-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@tik.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.tik.uni-stuttgart.de/
REF: <20130529141117.ga2...@rus.uni-stuttgart.de>



Re: [Lxc-users] lxcbr0 versus virbr0 (Ubuntu)

2012-11-09 Thread Ulli Horlacher
On Fri 2012-11-09 (08:31), Serge Hallyn wrote:

> Since you have a real bridge, it is better to keep using br0. 

I have just discovered that br0 is still available!
I was mistaken in thinking that only lxcbr0 and virbr0 were possible choices.


> In fact, edit /etc/default/lxc to set USE_LXC_BRIDGE="false" to avoid
> creating lxcbr0 at all.

This is well documented. I found it quickly :-)

Is there comprehensive documentation about Linux bridging in general or
LXC networking in particular?

-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <20121109143150.GB5750@sergelap>



[Lxc-users] lxcbr0 versus virbr0 (Ubuntu)

2012-11-08 Thread Ulli Horlacher

Prologue: I have run LXC successfully for nearly 2 years on Ubuntu 10.04,
using veth / br0. Every container has its own IP address, no NAT. I run
production services like http://fex.rus.uni-stuttgart.de/ on it, rock-solid.

I have now set up a second server with Ubuntu 12.04, and a lot of things
have changed, starting with networking.

According to https://help.ubuntu.com/12.04/serverguide/lxc.html, one can use
lxcbr0 or virbr0 for bridging, but without further explanation.

What is "better"? Or is lxcbr0 only for NAT?
Is virbr0 the successor of br0?
Probably I am missing some basic documentation...

The new server has six GbE interfaces and I have set up "ethernet bonding":
three real interfaces form one virtual interface.

I have successfully assigned a single test-IP to bond1:

root@vms3:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         129.69.202.142  0.0.0.0         UG    100    0        0 bond0
10.0.3.0        0.0.0.0         255.255.255.0   U     0      0        0 lxcbr0
129.69.1.0      0.0.0.0         255.255.255.0   U     0      0        0 bond1
129.69.202.128  0.0.0.0         255.255.255.240 U     0      0        0 bond0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 bond0

(lxcbr0 was automatically started when I installed lxc)

How should I continue?
Which bridge type should I attach to bond1, and how? (A sketch follows the
interfaces listing below.)


Below is my current network setup:

root@vms3:~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
bond-master bond0

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

auto eth3
iface eth3 inet manual
bond-master bond1

auto eth4
iface eth4 inet manual
bond-master bond1

auto eth5
iface eth5 inet manual
bond-master bond1

auto bond0
iface bond0 inet static
address 129.69.202.131
netmask 255.255.255.240
network 129.69.202.128
broadcast 129.69.202.143
gateway 129.69.202.142
bond-mode balance-rr
bond-miimon 100
bond-slaves none

auto bond1
iface bond1 inet static
#up ifconfig bond1 up
address 129.69.1.42
netmask 255.255.255.0
network 129.69.1.0
broadcast 129.69.1.255
bond-mode balance-rr
bond-miimon 100
bond-slaves none
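
One possible direction, sketched under the assumption that the containers
should keep veth plus a classic bridge as on the old server: put a bridge on
top of bond1 and point lxc.network.link at it. The name br1 and moving the
129.69.1.42 address onto the bridge are assumptions, not a tested setup:

auto br1
iface br1 inet static
address 129.69.1.42
netmask 255.255.255.0
network 129.69.1.0
broadcast 129.69.1.255
bridge_ports bond1
bridge_stp off
bridge_maxwait 5

The container configuration would then keep lxc.network.type = veth and use
lxc.network.link = br1.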



-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <20121108165852.ge23...@rus.uni-stuttgart.de>



Re: [Lxc-users] Using lxc on production

2012-10-22 Thread Ulli Horlacher
On Mon 2012-10-22 (14:53), Stéphane Graber wrote:

> All in all, that's somewhere around 300-400 containers I'm managing

How do you handle a host (hardware) failure?

-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <508541dd.10...@ubuntu.com>



Re: [Lxc-users] Using lxc on production

2012-10-22 Thread Ulli Horlacher
On Mon 2012-10-22 (18:09), swair shah wrote:

> I was wondering if anyone is using lxc on production. and if you don't mind
> disclosing, for what purpose do you use it on production?

fex.rus.uni-stuttgart.de is an LXC container and has been running smoothly
for nearly 2 years. It delivers more than 300 MB/s for HTTP file transfers.

See http://fex.rus.uni-stuttgart.de/ for details


-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] Cluster filesystem?

2012-10-08 Thread Ulli Horlacher
On Mon 2012-10-08 (17:16), Papp Tamas wrote:

> On 10/08/2012 05:00 PM, Ulli Horlacher wrote:
> 
> > "should" - I prefer recommendations ny experience :-)
> >
> > I have tried by myself gluster and it is HORRIBLE slow.
> 
> If you are interested, try Moosefs. I have quite good experiences with 
> it, however not under containers.

MooseFS is FUSE-based (for clients) and will therefore be very slow.
I suspect NFS is faster, even over (only) GbE.


> multiple mount protection
> 
> You cannot mount the partition multiple times at the same time. It's a
> safe protection. With this trick you can be safe and fast with all the
> benefits of true posix filesystems.

Ubuntu 12.04 does not have ext4 MMP support.
Besides this, I would need n filesystems for n hosts. A failover solution
would be very complex.


-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <5072ee31.8060...@martos.bme.hu>



Re: [Lxc-users] Cluster filesystem?

2012-10-08 Thread Ulli Horlacher
On Mon 2012-10-08 (10:32), Papp Tamas wrote:
> On 10/08/2012 09:47 AM, Ulli Horlacher wrote:
> 
> > Are there recommendations on cluster filesystems?
> > I have several hosts with fibre channel. They should use a common
> > filesystem to have a half-automatic fail-over.
> 
> I think you should be able to use any of the cluster FS (eg. gluster, 
> moosefs, GFS... etc.)

"should" - I prefer recommendations ny experience :-)

I have tried by myself gluster and it is HORRIBLE slow.
With GFS I have heard of several fatal crashes with data corruption.


> BTW I would rather use just pure volume's from the storage with ext4 and 
> set MMP (multiple mount protection) on the fs.

What is MMP?

-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <50728fa2.4010...@martos.bme.hu>



[Lxc-users] Cluster filesystem?

2012-10-08 Thread Ulli Horlacher
Are there recommendations on cluster filesystems?
I have several hosts with fibre channel. They should use a common
filesystem to allow semi-automatic fail-over.

-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <20121008074735.gb18...@rus.uni-stuttgart.de>



Re: [Lxc-users] Executing a command inside a container?

2012-10-08 Thread Ulli Horlacher
On Thu 2012-08-30 (07:48), Serge Hallyn wrote:
> Quoting Dan Kegel (d...@kegel.com):
> 
> > Man, this has got to be an FAQ.
> > 
> > I'm merrily scripting buildbot setup inside a container, and tried to do
> >   sudo lxc-start -n foo  sh my-existing-setup-script.sh
> 
> The answer to this *will* be lxc-attach.  We're just waiting for the
> final kernel bits to filter upstream.
> 
> So yeah ssh is probably the most robust solution for now.

I do it with "lxc -x", example:


root@zoo:~# lxc -x fex uname -a
Linux fex.uni-stuttgart.de 2.6.38-16-server #67~lucid1-Ubuntu SMP Fri Sep 7 
18:36:10 UTC 2012 x86_64 GNU/Linux

root@zoo:~# lxc -h
usage: lxc option
options: -l  list containers
 -L  list containers with disk usage
 -p  list all container processes

usage: lxc [-v] -C container [gateway/net:interface]
options: -v  verbose mode
 -C  create new container clone
example: lxc -C bunny 129.69.8.254/24:br8

usage: lxc [-v] option container
options: -v  verbose mode
 -b  boot container
 -c  connect container console
 -e  edit container configuration
 -x  execute command in container
 -s  shutdown container
 -p  list container processes
 -l  container process list tree


http://fex.rus.uni-stuttgart.de/lxc.html


-- 
Ullrich Horlacher  Informationssysteme und Serverbetrieb
Rechenzentrum IZUS/TIK E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-68565868
Allmandring 30aFax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <20120830124825.GA8474@amd1>



Re: [Lxc-users] uptime

2012-05-15 Thread Ulli Horlacher
On Mon 2012-05-14 (18:25), Papp Tamas wrote:

> > Error message? 
> 
> No error message. Just shows host's uptime.

Return value?

Do you have /dev/pts ?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <4fb13202.5090...@martos.bme.hu>



Re: [Lxc-users] uptime

2012-05-14 Thread Ulli Horlacher
On Mon 2012-05-14 (11:57), Papp Tamas wrote:

> > I have now written a special uptime command to be placed in the containers
> > PATH:
> >
> > #!/usr/bin/perl -w
> >
> > $uptime = `/usr/bin/uptime`;
> > @s = lstat '/dev/pts' or die $uptime;
> > $s = time - $s[10];
> > if ($s>172800) {
> >$d = int($s/86400);
> >$uptime =~ s/up .*?,/up $d days,/;
> > } else {
> >$h = int($s/3600);
> >$m = int(($s-$h*3600)/60);
> >$uptime =~ s/up .*?,/sprintf("up %02d:%02d,",$h,$m)/e;
> > }
> > print $uptime;
> 
> Does this work for you?

Of course. I do not post non-functional code.


> Not for me.

Error message?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <4fb0d718.2050...@martos.bme.hu>



Re: [Lxc-users] Current status of lxc on ubuntu lucid and red hat 6

2012-05-10 Thread Ulli Horlacher
On Fri 2012-05-11 (02:54),  wrote:

> I have tried starting linux container(lxc 0.7.5) on lucid and red hat 6, but 
> both failed (succeeded in ubuntu precise) 
> The procedure I used is somehow standard:
> 1 using the lxc-ubuntu(not for red hat 6) script to prepare filesystem and 
> then configuring the network part by hand
> 2 lxc-create -n lxc -f config  
> 3 lxc-start -n lxc
> For lucid, lxc-start simply says "segmentation fault"

The default Ubuntu 10.04 kernel is too old; I have had success with
linux-image-server-lts-backport-natty.


See also

http://fex.rus.uni-stuttgart.de/lxc-ubuntu


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] uptime

2012-05-04 Thread Ulli Horlacher
On Fri 2012-05-04 (00:05), Samuel Maftoul wrote:

> Maybe, the uptime of container's init process will show you uptime of the
> container (so is accessible from within the container).

init does not provide its start time.

I have now written a special uptime command to be placed in the container's
PATH:

#!/usr/bin/perl -w
# Report the container's uptime instead of the host's: /dev/pts is
# (re)mounted when the container starts, so its ctime approximates
# the container's boot time.

$uptime = `/usr/bin/uptime`;
@s = lstat '/dev/pts' or die $uptime;   # on failure, bail out with the unmodified uptime
$s = time - $s[10];                     # seconds since container start ($s[10] is the ctime)
if ($s>172800) {                        # more than 2 days: show whole days
  $d = int($s/86400);
  $uptime =~ s/up .*?,/up $d days,/;
} else {                                # otherwise show hours:minutes
  $h = int($s/3600);
  $m = int(($s-$h*3600)/60);
  $uptime =~ s/up .*?,/sprintf("up %02d:%02d,",$h,$m)/e;
}
print $uptime;


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



[Lxc-users] uptime

2012-05-03 Thread Ulli Horlacher

"uptime" in a container shows the uptime of the host.
How can one query the uptime of the container?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <20120503155228.ga1...@rus.uni-stuttgart.de>



Re: [Lxc-users] How I can get CPU load in LXC container from Host ?

2012-03-04 Thread Ulli Horlacher
On Sun 2012-03-04 (13:49), Xavier Garcia wrote:

> There is any way to execute a command inside a running container from host ?

I have written a small daemon "lxc-cmdd" running inside the container,
to which the host script "lxc" can connect.
Example:

root@zoo:~# lxc -l
container  disk (MB)RAM (MB)   start-PIDstatus
fex-   81230   running
gopher -   78583   running
ubuntu -   0   0   stopped
vmtest1-   0   0   stopped
vmtest8-   0   0   stopped

root@zoo:~# lxc -x fex "uname -a;uptime"
Linux fex.uni-stuttgart.de 2.6.38-13-server #54~lucid1-Ubuntu SMP Wed Jan 4 
14:38:03 UTC 2012 x86_64 GNU/Linux
 18:25:17 up 19 days,  6:32,  0 users,  load average: 0.00, 0.01, 0.05

root@zoo:~# lxc-ps --lxc | grep cmdd
fex 1344 ?00:00:00 lxc-cmdd
gopher  8623 ?00:00:00 lxc-cmdd


http://fex.rus.uni-stuttgart.de/lxc-ubuntu
http://fex.rus.uni-stuttgart.de/download/lxc.tar
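
Independent of lxc-cmdd, the accumulated CPU time of a container can also be
read directly on the host via the cpuacct cgroup; a sketch, assuming the
controller is mounted under /sys/fs/cgroup and the container's cgroup lives
at lxc/fex (the exact path depends on the distribution and lxc version):

# total CPU time consumed by all tasks in the container, in nanoseconds
cat /sys/fs/cgroup/cpuacct/lxc/fex/cpuacct.usage
# per-CPU breakdown
cat /sys/fs/cgroup/cpuacct/lxc/fex/cpuacct.usage_percpu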

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] nilfs

2012-03-02 Thread Ulli Horlacher
On Fri 2012-03-02 (09:33), John Drescher wrote:
> > Some people have been testing btrfs on 3.1/3.2 kernels (in ubuntu
> > precise) with good results.
> >
> 
> I am using 3.1 / 3.2 kernels on 64 bit gentoo with btrfs at work on 2
> production severs since ~ November of last year. One holds my lxc
> containers for a samba bdc while the other container is a secondary
> dns server.

This is what I wanted to hear :-)

I will try btrfs after upgrading my Ubuntu LTS hosts.

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] nilfs

2012-03-02 Thread Ulli Horlacher
On Fri 2012-03-02 (09:02), Daniel Baumann wrote:

> i'm not claiming btrfs is there yet, however, if you're using btrfs, you
> should at least make sure to use something remotely up2date, say 3.2.x.

SLES11 SP2 was released this week with a 3.0 kernel and comes with btrfs.
Same b(*CENSORED*)t as always from SuSE. What they label as "Enterprise"
is "Testing" on Debian.



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <4f507e79.1030...@progress-technologies.net>



Re: [Lxc-users] nilfs

2012-03-01 Thread Ulli Horlacher
On Fri 2012-03-02 (01:39), Iliyan Stoyanov wrote:

> I'm currently using btrfs raid 1 for a production server with 4 LXC
> containers (SL6.x) on it (old single core opteron w/ 4GB ECC RAM). The
> host is Fedora 16.

I have tested btrfs on a standard Ubuntu 10.04.3 and one with kernel
2.6.38-13-server (backport). Both led to a fatal kernel loop when rsyncing a
few GB: /var/log/kern.log filled up until the filesystem was full, while no
program was responsive any more. I had to power off the system.
The developers of btrfs say it is still in a "highly experimental state".
Indeed.


> Could you share some of your observations of the nilfs 

I operate it only in a test environment with a test LXC VM and did some IO
tests and benchmarks (like the rsync above): everything works fine, smoothly
and quickly. Yesterday I unintentionally deleted the LXC container. I really
appreciate the snapshot feature :-)


> and why do you think it could be beneficial for LXC, besides snapshots,

Snapshots. That's it.


> as those can be done with both LVM and btrfs at this point.

LVM needs an extra partition for each snapshot, whereas in NILFS it is
just a subdirectory.
And I dislike having an extra IO layer just to get snapshots.
On other systems like Solaris/ZFS or ONTAP/WAFL, snapshots are an
integrated feature of the filesystem, which makes the handling much easier.
Less complexity is always good; it eliminates potential errors and
problems.

btrfs is not stable enough. See above.
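
For context, the NILFS snapshots mentioned here are managed with the standard
nilfs-utils commands; a minimal sketch (the device and checkpoint number are
made up for illustration):

# create a checkpoint and mark it as a snapshot
mkcp -s
# list existing checkpoints and snapshots
lscp
# mount snapshot (checkpoint) number 5 read-only elsewhere
mount -t nilfs2 -o ro,cp=5 /dev/sdb1 /mnt/snapshot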

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <1330645169.21101.11.camel@tablet>



Re: [Lxc-users] limit number of processes

2012-03-01 Thread Ulli Horlacher
On Tue 2011-10-18 (14:54), Papp Tamas wrote:

> Is it possible to limit the maximum number of processes per container?

I have the same problem. A user has killed the host (and therefore all
containers) with a simple shell command:  :(){ :|:& };:
(Kids, don't try this at home!)
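
For reference, with a newer kernel and lxc than were available in this thread
(one that has the pids cgroup controller), such a limit can be set per
container directly in its config; this is an assumption about later versions,
not something that existed at the time:

lxc.cgroup.pids.max = 512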


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <4e9d7704.7000...@martos.bme.hu>



[Lxc-users] nilfs

2012-03-01 Thread Ulli Horlacher

Has anyone real experience with NILFS (http://www.nilfs.org/)?

A small test of mine with LXC 0.7.5 on Ubuntu 10.04 with NILFS was
successful and I really like having snapshots, but I have reservations about
migrating my LXC production environment from ext4 to NILFS. Changing the
filesystem is something one REALLY has to think about carefully.


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <20120301172013.gl19...@rus.uni-stuttgart.de>



Re: [Lxc-users] LXC Container inside of a Virtual Machine + problem networking

2012-02-21 Thread Ulli Horlacher
On Tue 2012-02-21 (17:52), Martin Konečný wrote:

> I have experience successfully creating an LXC container in many system
> configurations except for when it is inside of a Virtual Machine (Oracle
> VirtualBox).
> 
> More specifically, I have problems with networking.
> 
> Here is my config:
> 
> lxc.network.type = veth
> lxc.network.flags = up
> lxc.network.link = br0
> lxc.network.ipv4 = 0.0.0.0/24

I have had problems with LXC and veth inside a vmware ESX VM. Routing was
not possible. I solved this problem by configuring

lxc.network.type = phys
lxc.network.link  = eth3
lxc.network.name  = eth3

Where eth3 is a virtual ESX interface.
I am not using VirtualBox, but perhaps you can also create additional
virtual ethernet devices there.

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] In container some syslog lines garbled

2012-01-23 Thread Ulli Horlacher
On Mon 2012-01-23 (20:12), U.Mutlu wrote:
> In the container some syslog lines are garbled:
> 
> Jan 23 18:41:41 my1 kernel: imklog 5.8.6, log source = /proc/kmsg started.
> Jan 23 18:41:41 my1 rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" 
> x-pid="323" x-info="http://www.rsyslog.com";] start
> Jan 23 18:41:41 my1 /usr/sbin/cron[353]: (CRON) INFO (pidfile fd = 3)
> Jan 23 18:41:41 my1 /usr/sbin/cron[354]: (CRON) STARTUP (fork ok)
> Jan 23 18:41:41 my1 /usr/sbin/cron[354]: (CRON) INFO (Skipping @reboot jobs 
> -- not system startup)
> Jan 23 19:00:11 my1 kernel: 4276 Y U N U=r R=9.6.41 S=4204.3 E=0TS00 RC00 
> T=4I=95 FPOOTPST505DT8 IDW160RS00 Y RP0
> Jan 23 19:00:11 my1 kernel: 49.481 Y U N U=r R=9.6.41 S=4204.5 E=0TS00 RC00 
> T=4I=15 FPOOTPST542DT8 IDW160RS00 Y RP0
> 
> Ie. the last 2 lines above are garbage.
> 
> What's the reason for that?

A bug in LXC.

It has been discussed several times on this list.

My workaround:

Disable kernel logging in the containers and forward these messages from the
host with my own daemon (rsyslogfd).
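
The container side of that workaround is small, assuming the containers run
rsyslog: simply stop rsyslog from reading the kernel ring buffer, so only the
host handles kernel messages (sketch of /etc/rsyslog.conf inside the
container):

#$ModLoad imklog   # disabled: /proc/kmsg belongs to the host kernel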


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] lxc-console on Debian Squeeze - buggy?

2012-01-06 Thread Ulli Horlacher
On Fri 2012-01-06 (12:08), Whit Blauvelt wrote:

> If 0.7.5. doesn't fully work on 2.6.32, and if backports are available, it's
> too bad neither is mentioned at http://wiki.debian.org/LXC, which is where
> http://lxc.sourceforge.net/ links for Debian-specific info.

I have had similar problems on Ubuntu 10.04.

I am now running lxc 0.7.5 (self-compiled) with kernel 2.6.38
(linux-image-server-lts-backport-natty) without problems. 

The Ubuntu documentation is very bad on this subject.


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <20120106170855.ga8...@black.transpect.com>



Re: [Lxc-users] lxc-console on Debian Squeeze - both 0.7.2 and 0.7.5 less than happy

2012-01-06 Thread Ulli Horlacher
On Wed 2012-01-04 (14:18), Whit Blauvelt wrote:

> That's enough to get the containers to start from the 0.7.5 lxc-start. But
> it leaves the 0.7.5 lxc-console totally unhappy. It's obviously looking at
> things differently than lxc-start and lxc-info:
> 
> # lxc-info -n xfer
> state:   RUNNING
> pid:  1414
> 
> # lxc-console -n xfer
> lxc-console: 'xfer' is stopped

I have had similar problems and it was my fault:
I had mixed lxc programs from different versions.
First, check:
type lxc-info
type lxc-start
type lxc-console


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <20120104191837.ga22...@black.transpect.com>



Re: [Lxc-users] Differences between application and system container

2011-12-30 Thread Ulli Horlacher
On Mon 2011-12-26 (18:25), Wai-kit Sze wrote:

> What are the difference between application containers and system
> containers? Both of them can start a command directly.

An application container starts one single program.
A system container starts (boots) a whole Linux system.
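
In lxc terms the two map onto different commands; a rough sketch (the sshd
command line and the container names are only illustrations):

# application container: run exactly one program under lxc-init
lxc-execute -n app1 -- /usr/sbin/sshd -D
# system container: boot the container's own /sbin/init
lxc-start -n sys1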



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] lxc-start and terminal output

2011-12-30 Thread Ulli Horlacher
On Thu 2011-12-29 (19:29), Martin Konečný wrote:

> seem to have a problem. Before the break if I started a container using
> lxc-start, boot information would appear on the terminal and I would even
> be able to log in.

lxc-start never provides a console login; you have to use lxc-console.



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] reading hwclock from container

2011-12-18 Thread Ulli Horlacher
On Sat 2011-12-17 (14:38), DTK wrote:
> I am trying to avoid having to install NTP in every container, so I
> came up with following idea.
> 
> Main server cron job
> 
> * 0 *  * * /sbin/hwclock -w
> 
> Each container has following cron job
> 
> 10 0 *  * * /sbin/hwclock -s
> 
> However, the container can't access /dev/rtc and I am guessing that
> has something to do with my cgroup config line:
> 
>lxc.cgroup.devices.allow = c 254:0 rwm # rtc
> 
> Anyway I can fix this? Or get the same result some other way?

Run ntpd on the server host.
Time is a kernel issue, and you have only one kernel running.

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] Container size minialisation

2011-12-15 Thread Ulli Horlacher
On Tue 2011-12-13 (18:43), Zhu Yanhai wrote:

> My concern is deploying Btrfs only for COW is a really heavy solution
> for this...Is Btrfs ready for production system?

I have tested Btrfs with kernel 2.6.38: copying 30 GB with rsync corrupted
the filesystem completely and the kernel ran into an endless loop writing
huge amounts of data to /var/log/syslog while no process was responsive any more.

==> total disaster


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] lxc on MPC platform

2011-12-15 Thread Ulli Horlacher
On Mon 2011-12-12 (23:07), Paulo Rodrigues wrote:

> Will the checkpoint/resume commands work on this platform?

Checkpoint/resume does not work on any platform so far, and it is unclear
when it will arrive.


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <4ee6892e.3010...@gmail.com>



Re: [Lxc-users] seeing a network pause when starting and stopping LXCs - how do I stop this ?

2011-12-11 Thread Ulli Horlacher
On Sun 2011-12-11 (19:48), Derek Simkowiak wrote:

>  The problem is not related to the setfd option.  It is caused by
> the bridge acquiring a new MAC address.  Libvirt already has a fix for
> this, and there is a patch in the works for the LXC tools.

I wonder why I do not have this problem: I start new containers quite often
and I do not have this patch, but I see no network freeze at all.

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <4ee57992.3090...@simkowiak.net>



Re: [Lxc-users] seeing a network pause when starting and stopping LXCs - how do I stop this?

2011-12-08 Thread Ulli Horlacher
On Thu 2011-12-08 (07:39), Daniel Lezcano wrote:
> On 12/08/2011 12:38 AM, Joseph Heck wrote:
> 
> > I've been seeing a pause in the whole networking stack when starting
> > and stopping LXC - it seems to be somewhat intermittent, but happens
> > reasonably consistently the first time I start up the LXC.
> >
> > I'm using ubuntu 11.10, which is using LXC 0.7.5
> >
> > I'm starting the container with lxc-start -d -n $CONTAINERNAME
> 
> That could be the bridge configuration. Did you do 'brctl setfd br0 0' ?

I have this in my /etc/network/interfaces (Ubuntu 10.04):

auto br0
iface br0 inet static
address 129.69.1.227
netmask 255.255.255.0
gateway 129.69.1.254
bridge_ports eth0
bridge_stp off
bridge_maxwait 5
post-up /usr/sbin/brctl setfd br0 0


I have never noticed a network freeze, and I start/stop LXC containers quite
often. Does this "brctl setfd br0 0" prevent the freeze? I do not remember
why I added it :-}


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <4ee05bad.1020...@free.fr>



Re: [Lxc-users] "PTY allocation request failed on channel 0 - stdin: is not a tty"

2011-11-29 Thread Ulli Horlacher
On Tue 2011-11-29 (17:33), Ulli Horlacher wrote:

> If there is no /etc/init/tty1.conf, then create one. It should contain:
> 
> # tty1 - shell 
> #
> # This service maintains a shell on tty1 from the point the system is
> # started until it is shut down again.
> 
> start on stopped rc RUNLEVEL=[2345]
> stop on runlevel [!2345]
> 
> respawn
> exec /bin/openvt -elfc 1 -- su -l

Of course you need the openvt program installed. It is in the "kbd"
package. And you have to reboot the container.

To test it without rebooting, use:

ssh root@your_container "/bin/openvt -lfc 1 -- su -l"
lxc-console -n your_container



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <2029163344.gh32...@rus.uni-stuttgart.de>



Re: [Lxc-users] "PTY allocation request failed on channel 0 - stdin: is not a tty"

2011-11-29 Thread Ulli Horlacher
On Tue 2011-11-29 (10:15), Patrick Kevin McCaffrey wrote:

> I do not have a tty config file.  

Then it is no wonder you do not have tty access.


> Containers are new to me

The console tty is not LXC-specific; every standard Linux should have it.

If there is no /etc/init/tty1.conf, then create one. It should contain:

# tty1 - shell 
#
# This service maintains a shell on tty1 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /bin/openvt -elfc 1 -- su -l


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <865947594.554984.1322583317113.javamail.r...@mail18.pantherlink.uwm.edu>



Re: [Lxc-users] "PTY allocation request failed on channel 0 - stdin: is not a tty"

2011-11-29 Thread Ulli Horlacher
On Tue 2011-11-29 (09:40), Patrick Kevin McCaffrey wrote:

> allows me to SSH into my running container.  However, lxc-console is
> still unresponsive.  When I run the command ("lxc-console -n
> container_name") it says "press ctrl+a q to exit" but anything I do after
> entering the initial command doesn't do anything.  The terminal still
> lets me type, but lxc-console doesn't appear to work.

Is there a getty or login shell running on /dev/tty1 ?

Do you have /etc/init/tty*.conf ?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: <1813177332.553953.1322581202160.javamail.r...@mail18.pantherlink.uwm.edu>



Re: [Lxc-users] Creating multiple containers simultaneously

2011-11-28 Thread Ulli Horlacher
On Mon 2011-11-28 (19:09), Roberto Aloi wrote:

> I like the idea. My only concern is how customizable a "clone" is (see
> my second question).

With aptitude: easy.

Besides this, you can extend my approach by setting up different templates
(I have no need for this).


> I need to start servers listening on specific ports per each
> container. 

What do you mean exactly? Several server daemons per container or several
containers running (same) server daemons?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] Creating multiple containers simultaneously

2011-11-28 Thread Ulli Horlacher
On Mon 2011-11-28 (18:40), Roberto Aloi wrote:

> When I say "customizable" I mean that I should be able to specify a port
> number which a server running inside one of the containers should listen
> to and this number should be different per each container. Would this be
> feasible via LXC? 

This has nothing to do with LXC, but is a configuration item of your
"server". Which software is it?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] Creating multiple containers simultaneously

2011-11-28 Thread Ulli Horlacher
On Mon 2011-11-28 (18:40), Roberto Aloi wrote:

> I'm currently dealing with a pool of LXC containers which I use for
> sand-boxing purposes. Sometimes, when re-creating one of the
> containers in the pool I obtain the following error:
> 
> Error: "CREATE CONTAINER container-5\ndebootstrap is
> /usr/sbin/debootstrap\nCache repository is busy.\nfailed to install
> ubuntu natty ...
> 
> My interpretation of the problem is that rebooting several containers
> at the same time can be problematic - e.g. because they are trying to
> access the cache repository at the same time -. Assuming my
> interpretation is correct, is there a way to create multiple
> containers at the same time?

You are mixing up creating with booting.
You create new containers with debootstrap, and running debootstrap several
times at the same time is not a good idea.

My approach is different: I create an Ubuntu template ONCE (*), which I can
then clone as often as I want. Example:


root@vms1:/lxc# lxc -l
container  disk (MB)RAM (MB)   start-PIDstatus
bunny  -   0   0   running
fex-   0   0   stopped
ubuntu -   0   0   stopped
vmtest8-   0   0   running

root@vms1:/lxc# lxc -C vmtest1

root@vms1:/lxc# lxc -l
container  disk (MB)RAM (MB)   start-PIDstatus
bunny  -   0   0   running
fex-   0   0   stopped
ubuntu -   0   0   stopped
vmtest1-   0   0   stopped
vmtest8-   0   0   running



> Also, the creation time for the first container is pretty high (~8
> minutes)

I can create Ubuntu containers in less than 3 seconds with my method.

(*) http://fex.rus.uni-stuttgart.de/lxc-ubuntu

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/
REF: 



Re: [Lxc-users] mountall mounts /dev from host machine

2011-11-13 Thread Ulli Horlacher
On Sun 2011-11-13 (15:06), Arie Skliarouk wrote:

> > > Where is the command "lxc" located? It is not provided by the
> > lxc-0.7.5.tar.gz...
> >
> > > It's all in http://fex.rus.uni-stuttgart.de/lxc-ubuntu
> >
> >
> 
> Ah, I see it now. It requires some daemon lxc-cmdd to be running on the
> guest. Where do I take the daemon from?

It's all in http://fex.rus.uni-stuttgart.de/lxc-ubuntu


> Will this approach be the official way to shut down the guest servers gracefully?

What is "official"?


> Is there a wiki or some documentation on the method?

http://fex.rus.uni-stuttgart.de/lxc-ubuntu


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/



Re: [Lxc-users] mountall mounts /dev from host machine

2011-11-13 Thread Ulli Horlacher
On Sun 2011-11-13 (12:39), Arie Skliarouk wrote:

> > > Still, how can I gracefully stop the ubuntu 10.04 containers from the
> > host
> > > machine?
> >
> > Use lxc -s container
> >
> 
> Where is the command "lxc" located? It is not provided by the
> lxc-0.7.5.tar.gz...

Quoting myself:

> It's all in http://fex.rus.uni-stuttgart.de/lxc-ubuntu

(see bottom)


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
RSA(R) Conference 2012
Save $700 by Nov 18
Register now
http://p.sf.net/sfu/rsa-sfdev2dev1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] mountall mounts /dev from host machine

2011-11-11 Thread Ulli Horlacher
On Thu 2011-11-10 (14:29), Arie Skliarouk wrote:
> My mistake, this was possible with ubuntu 8.10 based containers and is no
> longer possible with 10.04 containers. Not related to the recent changes.

Ubuntu 8.10 uses init whereas Ubuntu 10.04 uses upstart.


> Still, how can I gracefully stop the ubuntu 10.04 containers from the host
> machine?

Use lxc -s container

root@vms2:~# lxc -h
usage: lxc option
options: -l  list containers
 -L  list containers with disk usage
 -p  list all container processes

usage: lxc [-v] -C container [gateway/net:interface]
options: -v  verbose mode
 -C  create new container clone
example: lxc -C bunny 129.69.8.254/24:br8

usage: lxc [-v] option container
options: -v  verbose mode
 -b  boot container
 -c  connect container console
 -e  edit container configuration
 -x  execute command in container
 -s  shutdown container
 -p  list container processes
 -l  container process list tree


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
RSA(R) Conference 2012
Save $700 by Nov 18
Register now
http://p.sf.net/sfu/rsa-sfdev2dev1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] mountall mounts /dev from host machine

2011-11-09 Thread Ulli Horlacher
On Wed 2011-11-09 (18:15), Arie Skliarouk wrote:

> It's all in http://fex.rus.uni-stuttgart.de/lxc-ubuntu
> 
> Whoa, so complicated!

It is a full installation setup. 


> Just a small question - how do I specify netmask for the IP number in the
> lxc.conf file?

CIDR notation. For example:

lxc.network.ipv4 = 129.69.19.100/27
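
(A /27 prefix is the same as netmask 255.255.255.224, i.e. a subnet of 32
addresses, so the line above is equivalent to address 129.69.19.100 with
netmask 255.255.255.224.)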


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
RSA(R) Conference 2012
Save $700 by Nov 18
Register now
http://p.sf.net/sfu/rsa-sfdev2dev1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] note on using rsyslog in a container

2011-11-07 Thread Ulli Horlacher
On Tue 2011-01-11 (02:54), Mike wrote:

> I noticed netfilter messages getting trashed in the various 
> /var/log/messages on a system with two containers, netfilter rules on 
> the host, and each container and the host running rsyslog.  On closer 
> inspection, I realized that only every other character or so of the 
> message was appearing in a given log file. 

Today I fell into the same pit; thanks to the list archive I found your
workaround:

> Disabling kernel logging in the containers, by commenting out "$ModLoad
> imklog" in /etc/rsyslog.conf, straightened out the log files.

Now only the host gets the netfilter (iptables) log messages.
Not quite what I want...
Will this issue be fixed in the future?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
RSA(R) Conference 2012
Save $700 by Nov 18
Register now
http://p.sf.net/sfu/rsa-sfdev2dev1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Problem with lxc-attach on Linux v3.1.0

2011-11-04 Thread Ulli Horlacher
On Thu 2011-09-08 (10:05), Nikhil Handigol wrote:
> As an additional datapoint, lxc-attach (from lxc-0.7.2) had worked for me
> with linux-2.6.35 with the corresponding setns patches (attached).
> 
> Now I need to upgrade the kernel (ideally, to 2.6.38 or 3.0) and I've been
> unable to get lxc-attach to work with the newer kernel.

Instead of hassling with patching the kernel every time, you can use my lxc
script (*), which can execute commands inside a running container with:

lxc -x container command options...

Example:

root@vms1:/lxc# lxc -L
container  disk (MB)RAM (MB)   start-PIDstatus
bunny552   0   0   stopped
fex42372   0   0   stopped
ubuntu 0   0   0   stopped
vmtest1  552   5   22304   running
vmtest8  552   0   0   stopped

root@vms1:/lxc# uname -a
Linux vms1 2.6.38-12-server #51~lucid1-Ubuntu SMP Thu Sep 29 20:09:53 UTC 2011 
x86_64 GNU/Linux

root@vms1:/lxc# lxc -x vmtest1 uname -a
Linux vmtest1 2.6.38-12-server #51~lucid1-Ubuntu SMP Thu Sep 29 20:09:53 UTC 
2011 x86_64 GNU/Linux


(*) http://fex.rus.uni-stuttgart.de/lxc.html

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
RSA(R) Conference 2012
Save $700 by Nov 18
Register now
http://p.sf.net/sfu/rsa-sfdev2dev1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] How to execute smbtorture command using lxc-execute?

2011-11-04 Thread Ulli Horlacher
On Fri 2011-11-04 (11:32), nishant mungse wrote:

> And when is use lxc-start -n base and start the container base and then
> when I add user, it is shows the entry of user in /home of container and
> also in /etc/group of container.
> 
> My doubt is when i use lxc-execute and add user why it not adding the entry
> in /etc/group of container and why it is adding when i use lxc-start?

lxc-execute operates on a different container than lxc-start.
With lxc-execute you cannot start a program (like useradd) inside an
already running container.

lxc-start is for booting a whole Linux system, while lxc-execute starts
only a single program inside an isolated environment ("container" in lxc
terminology).

The lxc-execute man-page needs more explanation on this point.
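
For clarity, the two commands side by side (a rough sketch; the container
names and config files are just examples):

lxc-execute -n sandbox -f sandbox.cfg /bin/sh   # runs one program in a fresh, isolated environment
lxc-start   -n base    -f base.cfg    -d        # boots the whole system (init/upstart) in the background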

If you want to execute a program inside a running container, you have to
use ssh or my lxc script:

lxc -x container useradd ...

http://fex.rus.uni-stuttgart.de/lxc.html
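
With ssh it would look like this (user name and options are only an example):

ssh root@vmtest1 useradd -m -s /bin/bash testuser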


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
RSA(R) Conference 2012
Save $700 by Nov 18
Register now
http://p.sf.net/sfu/rsa-sfdev2dev1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] mountall mounts /dev from host machine

2011-11-01 Thread Ulli Horlacher
On Thu 2011-10-27 (11:17), Arie Skliarouk wrote:
> I tried that and now the container does not start at all. I traced the
> problem to the following command in the /etc/init/lxc.conf script
> initctl emit filesystem --no-wait

Is there an error in your logfile?


> /etc/init/hostname.conf also fails on command
> exec hostname -b -F /etc/hostname

You can no longer set a hostname with:

lxc.cap.drop = sys_admin


But this should not prevent the container from booting.
If it fails, you should have an error message.
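
As an aside, the container's hostname can also be set from the host side in
the container config, so nothing inside the container has to call hostname(1)
at boot; a sketch (whether this fits the existing setup is another question):

lxc.utsname = fex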


> Would you be so kind as to tar up your /etc/init directory and send it to me
> please?

It's all in http://fex.rus.uni-stuttgart.de/lxc-ubuntu

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
RSA® Conference 2012
Save $700 by Nov 18
Register now!
http://p.sf.net/sfu/rsa-sfdev2dev1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc.cap.drop

2011-10-26 Thread Ulli Horlacher
On Wed 2011-10-26 (10:41), Sebastien Pahl wrote:
> Here are all the caps that I managed to drop:
> 
> audit_control
> audit_write

What is "kernel auditing"?


> mac_admin
> mac_override

At first I thought this meant the Ethernet MAC, but it is mandatory access
control! Which programs use this?



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


[Lxc-users] lxc.cap.drop

2011-10-26 Thread Ulli Horlacher

Are there any "best practices" for lxc.cap.drop configuration?

I have so far as default:

# no MAC change
lxc.cap.drop = mac_override

# no kernel module (un)loading
lxc.cap.drop = sys_module

# no reboot
lxc.cap.drop = sys_boot

# no (un/re)mounting
lxc.cap.drop = sys_admin

# no time setting
lxc.cap.drop = sys_time


All the corresponding tasks should be done via the host and not via the container.
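
To verify inside a running container what is actually left after dropping,
capsh from libcap can be used (just a quick check, assuming the libcap tools
are installed in the container):

capsh --print | grep Bounding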

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] mountall mounts /dev from host machine

2011-10-26 Thread Ulli Horlacher
On Wed 2011-10-26 (18:35), Arie Skliarouk wrote:
> Hi,
> 
> On one of my ubuntu 10.04 vservers mountall mounts /dev from the host
> machine. This causes problems for syslogd that works over /dev/log.
> The vserver has properly populated /dev directory, it just mounts /dev from
> host on top of it.
> 
> I don't know how to disable this.

I have this in the container config files:

lxc.cap.drop = sys_admin

which prevents the container from mounting anything at all.

File systems are mounted at lxc start via container.fstab, for example:

root@vms2:/lxc# cat fex.fstab 
none        /lxc/fex/dev/pts   devpts  defaults 0 0
none        /lxc/fex/proc      proc    defaults 0 0
none        /lxc/fex/sys       sysfs   defaults 0 0
none        /lxc/fex/var/lock  tmpfs   defaults 0 0
none        /lxc/fex/var/run   tmpfs   defaults 0 0
/lxc/share  /lxc/fex/share     none    bind     0 0
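
The fstab is referenced from the container config via lxc.mount, e.g.:

lxc.mount = /lxc/fex.fstab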


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-25 Thread Ulli Horlacher
On Tue 2011-10-25 (08:58), Jean-Philippe Menil wrote:

> Do you use the recent match in your iptables rules?

THIS was the decisive tip!

After commenting out the "iptables -m recent" rules in the container
boot configuration, the host does not crash any more on lxc-stop!

I can live without the iptables recent config for the time being, but I
hope this kernel bug will be fixed in the future.

Shall I submit it as a kernel bug? Where?
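
As a quick check for whether a setup uses the recent match at all:

iptables-save | grep -w recent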



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-25 Thread Ulli Horlacher
On Tue 2011-10-25 (12:50), Joerg Gollnick wrote:
> Am Dienstag, 25. Oktober 2011, 12:36:36 schrieb Ulli Horlacher:
> > On Tue 2011-10-25 (09:11), Joerg Gollnick wrote:
> > > Try to modprobe nfnetfilter as early as possible in user space (Ubuntu
> > > hint add to /etc/modules).
> > 
> > There is no such module:
> > 
> > root@vms1:/etc# lsmod | grep filter
> > iptable_filter 12810  1
> > ip_tables  27177  2 iptable_nat,iptable_filter
> > x_tables   29521  8
> > ipt_MASQUERADE,iptable_nat,ipt_REJECT,xt_state,ipt_LOG,xt_tcpudp,iptable_fi
> > lter,ip_tables
> > 
> > root@vms1:/etc# modprobe nfnetfilter
> > FATAL: Module nfnetfilter not found.
> > 
> > root@vms1:/etc# locate nfnetfilter
> > root@vms1:/etc# uname -a
> > Linux vms1 2.6.38-12-server #51~lucid1-Ubuntu SMP Thu Sep 29 20:09:53 UTC
> > 2011 x86_64 GNU/Linux
> Sorry my fault, module should read as nfnetlink.

Does not work :-(

Added nfnetlink to /etc/modules and rebooted:

root@vms1:~# lsmod | grep nfn
nfnetlink  14327  0

root@vms1:/lxc# lxc-start -f fex.cfg -n fex -d -o fex.log
root@vms1:/lxc# lxc -l
container  disk (MB)RAM (MB)   start-PIDstatus
bunny  -   0   0   stopped
fex-   33337   running
ubuntu -   0   0   stopped
vmtest1-   0   0   stopped
vmtest8-   0   0   stopped

root@vms1:/lxc# lxc-stop -n fex

... and crash, again.


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-25 Thread Ulli Horlacher
On Tue 2011-10-25 (08:58), Jean-Philippe Menil wrote:

> your kernel seems to have CONFIG_NETFILTER_XT_MATCH_RECENT set?

root@vms1:/etc# uname -a
Linux vms1 2.6.38-12-server #51~lucid1-Ubuntu SMP Thu Sep 29 20:09:53 UTC 2011 
x86_64 GNU/Linux

root@vms1:/etc# grep CONFIG_NETFILTER_XT_MATCH_RECENT 
/boot/config-2.6.38-12-server
CONFIG_NETFILTER_XT_MATCH_RECENT=m


> Do you use the recent match in your iptables rules?

Yes.



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-25 Thread Ulli Horlacher
On Tue 2011-10-25 (09:11), Joerg Gollnick wrote:

> Try to modprobe nfnetfilter as early as possible in user space (Ubuntu hint 
> add 
> to /etc/modules).

There is no such module:

root@vms1:/etc# lsmod | grep filter
iptable_filter 12810  1 
ip_tables  27177  2 iptable_nat,iptable_filter
x_tables   29521  8 
ipt_MASQUERADE,iptable_nat,ipt_REJECT,xt_state,ipt_LOG,xt_tcpudp,iptable_filter,ip_tables

root@vms1:/etc# modprobe nfnetfilter
FATAL: Module nfnetfilter not found.

root@vms1:/etc# locate nfnetfilter
root@vms1:/etc# uname -a
Linux vms1 2.6.38-12-server #51~lucid1-Ubuntu SMP Thu Sep 29 20:09:53 UTC 2011 
x86_64 GNU/Linux

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-24 Thread Ulli Horlacher
On Mon 2011-10-24 (18:56), Ulli Horlacher wrote:

> vms1 is an Ubuntu 10.04 based host system (4 * Xeon 64bit) with:
> 
> root@vms1:/lxc# uname -a
> Linux vms1 2.6.38-11-server #50~lucid1-Ubuntu SMP Tue Sep 13 22:10:53 UTC 
> 2011 x86_64 GNU/Linux

Today 2.6.38-12-server has come.

> But when I try to stop this container with:
> 
> root@vms1:/lxc# lxc-stop -n fex
> 
> the host (vms1) crashes with a kernel traceback.

The bug is still there. But I was able to localize what triggers this bug:
I am able to start/stop the container if I do not use iptables inside the
container. When I set my ipfilter rules with iptables and then try to stop
the container, the host crashes again.


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-24 Thread Ulli Horlacher
On Mon 2011-10-24 (21:53), Joerg Gollnick wrote:

> I triggered a slightly different issue in nfnetfilter. I worked around this 
> by 
> loading nfnetfilter before any other module in this complex.

What is nfnetfilter and where do you load it?

I have no such module:

root@vms1:~# lsmod | grep -i filter
iptable_filter 12810  2
ip_tables  27177  2 iptable_nat,iptable_filter
x_tables   29521  9 
xt_recent,ipt_MASQUERADE,iptable_nat,ipt_REJECT,xt_state,ipt_LOG,xt_tcpudp,iptable_filter,ip_tables


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-24 Thread Ulli Horlacher
On Mon 2011-10-24 (21:04), Daniel Lezcano wrote:
> On 10/24/2011 08:40 PM, Jean-Philippe Menil wrote:
> > Le 24/10/2011 19:46, Ulli Horlacher a écrit :
> >
> >> 2011-10-24 19:34:40 [  318.526208] br0: port 2(veth2WqDOb) entering 
> >> forwarding state
> >> 2011-10-24 19:34:40 [  318.675038] br0: port 2(veth2WqDOb) entering 
> >> disabled state
> >> 2011-10-24 19:34:40 [  318.703903] [ cut here ]
> >> 2011-10-24 19:34:40 [  318.703960] kernel BUG at 
> >> /build/buildd/linux-lts-backport-maverick-2.6.35/net/netfilter/xt_recent.c:609!
> > Hi,
> >
> > try to load netconsole with appropriate config instead of screenshot.
> > It's a know bug with kernel < 2.6.37,
> 
> It seems this bug appears with a 2.6.38-11 kernel version also.

Yes, see my first mail on this subject: 

root@vms1:/lxc# uname -a  
Linux vms1 2.6.38-11-server #50~lucid1-Ubuntu SMP Tue Sep 13 22:10:53 UTC 2011 
x86_64 GNU/Linux


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-24 Thread Ulli Horlacher
On Mon 2011-10-24 (20:56), Daniel Lezcano wrote:

> > I have now booted vms1 with kernel 2.6.35 instead of 2.6.38 (as before)/
> > This kernel crashes also on lxc-stop but it writes something to
> > /var/log/kern.log :
> >
(...)
> > 2011-10-24 19:34:40 [  318.711984] ---[ end trace 20014711382a5389 ]---
> 
> Do you have also the "fixing recursive fault but reboot is needed" right
> after the "end trace" ?

No, that was all in the log.



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-stop crashes the host

2011-10-24 Thread Ulli Horlacher
On Mon 2011-10-24 (12:33), Serge E. Hallyn wrote:

> > http://fex.rus.uni-stuttgart.de/tmp/vms1-crash.png
> > 
> > It's a pity, but this console server (HP IP console) cannot log ASCII
> > based, it is GUI only. I can make only screenshots and cannot scroll back,
> > so the beginning of the kernel crash message is missing.
> > 
> > Any tips for debugging or even problem solving?
> 
> Can you use some screencast program to grab the video as the error passes
> by on the gui?  Then export a .jpg from the screencast video?

I have now booted vms1 with kernel 2.6.35 instead of 2.6.38 (as before).
This kernel crashes also on lxc-stop but it writes something to
/var/log/kern.log :

2011-10-24 19:34:40 [  318.526208] br0: port 2(veth2WqDOb) entering forwarding 
state
2011-10-24 19:34:40 [  318.675038] br0: port 2(veth2WqDOb) entering disabled 
state
2011-10-24 19:34:40 [  318.703903] [ cut here ]
2011-10-24 19:34:40 [  318.703960] kernel BUG at 
/build/buildd/linux-lts-backport-maverick-2.6.35/net/netfilter/xt_recent.c:609!
2011-10-24 19:34:40 [  318.704017] invalid opcode:  [#1] SMP 
2011-10-24 19:34:40 [  318.704137] last sysfs file: 
/sys/devices/system/cpu/cpu3/cache/index1/shared_cpu_map
2011-10-24 19:34:40 [  318.704189] CPU 3 
2011-10-24 19:34:40 [  318.704231] Modules linked in: xt_recent veth btrfs 
zlib_deflate crc32c libcrc32c ufs qnx4 hfsplus hfs minix ntfs vfat msdos fat 
jfs xfs reiserfs nfs fscache pci_stub vboxpci vboxnetadp vboxnetflt vboxdrv 
nfsd lockd nfs_acl auth_rpcgss sunrpc exportfs ipt_MASQUERADE iptable_nat 
nf_nat ipt_REJECT kvm_intel kvm nf_conntrack_ipv4 nf_defrag_ipv4 xt_state 
nf_conntrack ipt_LOG xt_tcpudp iptable_filter ip_tables x_tables bridge 8021q 
garp stp ppdev parport_pc i5000_edac edac_core i5k_amb psmouse serio_raw shpchp 
lp parport tg3 floppy megaraid_sas
2011-10-24 19:34:40 [  318.706762] 
2011-10-24 19:34:40 [  318.706806] Pid: 21, comm: netns Not tainted 
2.6.35-30-server #60~lucid1-Ubuntu D2119/PRIMERGY RX300 S3   
2011-10-24 19:34:40 [  318.706861] RIP: 0010:[]  
[] recent_net_exit+0x3c/0x40 [xt_recent]
2011-10-24 19:34:40 [  318.706960] RSP: 0018:880236d67d90  EFLAGS: 00010283
2011-10-24 19:34:40 [  318.707008] RAX: 88022c0a46e0 RBX: a08ec860 
RCX: 0281
2011-10-24 19:34:40 [  318.707059] RDX: 880235ba5200 RSI: 880236d67dd0 
RDI: 88022a6b8880
2011-10-24 19:34:40 [  318.707124] RBP: 880236d67d90 R08: f000f000 
R09: 
2011-10-24 19:34:40 [  318.707189] R10: 88022a6c4000 R11: ffc8ffc8 
R12: 88022a6b8880
2011-10-24 19:34:40 [  318.707253] R13: 880236d67dd0 R14: 880001e18dc0 
R15: 880236d67fd8
2011-10-24 19:34:40 [  318.707319] FS:  () 
GS:880001f8() knlGS:
2011-10-24 19:34:40 [  318.707400] CS:  0010 DS:  ES:  CR0: 
8005003b
2011-10-24 19:34:40 [  318.707463] CR2: 7f0c32bf61e0 CR3: 000232f69000 
CR4: 06e0
2011-10-24 19:34:40 [  318.707528] DR0:  DR1:  
DR2: 
2011-10-24 19:34:40 [  318.707593] DR3:  DR6: 0ff0 
DR7: 0400
2011-10-24 19:34:40 [  318.707659] Process netns (pid: 21, threadinfo 
880236d66000, task 880236d5c4d0)
2011-10-24 19:34:40 [  318.707738] Stack:
2011-10-24 19:34:40 [  318.707793]  880236d67dc0 814ac4a6 
880236d67da0 880236d67dd0
2011-10-24 19:34:40 [  318.707970] <0> a08ec860 814ac780 
880236d67e00 814ac88b
2011-10-24 19:34:40 [  318.708234] <0> 88022a6b88a8 88022a6b88a8 
88022a6b8898 88022a6b8898
2011-10-24 19:34:40 [  318.708547] Call Trace:
2011-10-24 19:34:40 [  318.708613]  [] ops_exit_list+0x36/0x70
2011-10-24 19:34:40 [  318.708677]  [] ? cleanup_net+0x0/0x1c0
2011-10-24 19:34:40 [  318.708741]  [] cleanup_net+0x10b/0x1c0
2011-10-24 19:34:40 [  318.708808]  [] 
run_workqueue+0xc5/0x1a0
2011-10-24 19:34:40 [  318.708872]  [] 
worker_thread+0xa3/0x110
2011-10-24 19:34:40 [  318.708936]  [] ? 
autoremove_wake_function+0x0/0x40
2011-10-24 19:34:40 [  318.709002]  [] ? 
worker_thread+0x0/0x110
2011-10-24 19:34:40 [  318.709066]  [] kthread+0x96/0xa0
2011-10-24 19:34:40 [  318.709131]  [] 
kernel_thread_helper+0x4/0x10
2011-10-24 19:34:40 [  318.709195]  [] ? kthread+0x0/0xa0
2011-10-24 19:34:40 [  318.709257]  [] ? 
kernel_thread_helper+0x0/0x10
2011-10-24 19:34:40 [  318.709320] Code: 97 48 08 00 00 85 c0 74 1e 3b 02 77 1a 
48 98 48 8b 44 c2 10 48 3b 00 75 12 48 c7 c6 52 c6 8e a0 e8 8a b3 8c e0 c9 c3 
0f 0b eb fe <0f> 0b eb fe 55 48 89 e5 53 48 83 ec 08 0f 1f 44 00 00 8b 05 74 
2011-10-24 19:34:40 [  318.711821] RIP  [] 
recent_net_exit+0x3c/0x40 [xt_recent]
2011-10-24 19:34:40 [  318.711924]  RSP 
2011-10-24 19:34:40 [  318.711984] ---[ end trace 20014711382a5389 ]---

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.un

[Lxc-users] lxc-stop crashes the host

2011-10-24 Thread Ulli Horlacher

vms1 is an Ubuntu 10.04 based host system (4 * Xeon 64bit) with:

root@vms1:/lxc# uname -a
Linux vms1 2.6.38-11-server #50~lucid1-Ubuntu SMP Tue Sep 13 22:10:53 UTC 2011 
x86_64 GNU/Linux

root@vms1:/lxc# lxc-version 
lxc version: 0.7.5


I can start (Ubuntu 10.04) containers without problems:

root@vms1:/lxc# lxc-start -f fex.cfg -n fex -d -o fex.log

root@vms1:/lxc# lxc-info -n fex
state:   RUNNING
pid:  4073


But when I try to stop this container with:

root@vms1:/lxc# lxc-stop -n fex

the host (vms1) crashes with a kernel traceback.

After reboot of vms1 no crash traces are found in /var/log/

I have attached vms1 to a console server, where I can make screenshots:

http://fex.rus.uni-stuttgart.de/tmp/vms1-crash.png

It's a pity, but this console server (HP IP console) cannot log ASCII
based, it is GUI only. I can make only screenshots and cannot scroll back,
so the beginning of the kernel crash message is missing.

Any tips for debugging or even problem solving?
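
One option mentioned elsewhere in this thread is netconsole, which sends the
kernel messages over UDP to another machine and avoids the screenshot
problem. A minimal sketch (all addresses are placeholders):

modprobe netconsole netconsole=6665@129.69.1.40/eth0,6666@129.69.1.99/00:11:22:33:44:55

On the receiving host, something like "nc -u -l 6666" (exact netcat flags
vary) or a syslog daemon configured for remote input can capture the output.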


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Live Migration of LXC

2011-10-24 Thread Ulli Horlacher
On Mon 2011-10-24 (12:03), Greg Kurz wrote:

> C/R and live migration is a complicated matter for LXC containers.

I have assumed nothing else...


> No status for the moment... I guess people who really want migration
> should participate

Not every LXC (admin) user is also a kernel hacker. I am fluent in Perl
programming, but not in C.



> at least to show kernel maintainers there's a demand for it.

How can we do this? Send mass e-mails (spam) to the kernel maintainers? :-)



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Live Migration of LXC

2011-10-24 Thread Ulli Horlacher
 
On Sat 2010-03-20 (21:30), Daniel Lezcano wrote:

> There will be available a kernel patchset in a few weeks making possible 
> to do live migration with the lxc userspace tools (not libvirt), with 
> some restrictions.

We are now quite a few months later ... :-)
What is the status on this?


> So within 2/3 months, we should have lxc with 2 checkpoint/restart 
> solutions embedded in it.

And here, too?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?! (partially solved)

2011-10-21 Thread Ulli Horlacher
On Thu 2011-10-20 (09:00), Papp Tamas wrote:

> Well, I don't see anything like this. Actually I use 0.7.5. Try to upgrade.

After upgrading to lxc 0.7.5 the problem is still there:
I cannot start any container and there is no (log) output at all. For
every lxc-start command I get a new veth interface and the lxc-start
process is not killable (uninterruptible, waiting for I/O).

At this point I gave up and tried the Windows problem-solving method:
rebooting (the host server).

After reboot, I can start and stop containers without any problems.
Everything works fine, as it should.

I am not happy with this state: I do not know what went wrong and I have
no solution if this problem reappears, besides rebooting, which will
terminate all other container VMs, too. This is a NO-GO for a production
environment!

I have now installed linux-image-server-lts-backport-natty (Linux 2.6.38)
and hope (*) this fixes the bug.


(*) Hope and faith belong to the church and not to a computing centre.



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-0.7.5.tar.gz does not install man-pages

2011-10-21 Thread Ulli Horlacher
On Fri 2011-10-21 (11:18), Ulli Horlacher wrote:

> Does someone have the man-pages for 0.7.5?

Found it myself. With:

docbook-to-man lxc-cgroup.sgml >lxc-cgroup.1
docbook-to-man lxc-checkpoint.sgml >lxc-checkpoint.1
docbook-to-man lxc.conf.sgml >lxc.conf.5
docbook-to-man lxc-console.sgml >lxc-console.1
docbook-to-man lxc-create.sgml >lxc-create.1
docbook-to-man lxc-destroy.sgml >lxc-destroy.1
docbook-to-man lxc-execute.sgml >lxc-execute.1
docbook-to-man lxc-freeze.sgml >lxc-freeze.1
docbook-to-man lxc-kill.sgml >lxc-kill.1
docbook-to-man lxc-ls.sgml >lxc-ls.1
docbook-to-man lxc-monitor.sgml >lxc-monitor.1
docbook-to-man lxc-ps.sgml >lxc-ps.1
docbook-to-man lxc-restart.sgml >lxc-restart.1
docbook-to-man lxc.sgml >lxc.1
docbook-to-man lxc-start.sgml >lxc-start.1
docbook-to-man lxc-stop.sgml >lxc-stop.1
docbook-to-man lxc-unfreeze.sgml >lxc-unfreeze.1
docbook-to-man lxc-wait.sgml >lxc-wait.1

one can create the missing man-pages within the doc directory.
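
The same list as a short shell loop (assuming all sgml sources are in the
doc directory; lxc.conf goes to section 5):

cd doc
for f in lxc-*.sgml; do docbook-to-man "$f" > "${f%.sgml}.1"; done
docbook-to-man lxc.sgml > lxc.1
docbook-to-man lxc.conf.sgml > lxc.conf.5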

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-0.7.5.tar.gz does not install man-pages

2011-10-21 Thread Ulli Horlacher
On Fri 2011-10-21 (00:06), Ulli Horlacher wrote:
> On Thu 2011-10-20 (23:16), Matteo Bernardini wrote:
> 
> > still guessing, the problem can be your configure options:
> 
> I have only --prefix=/opt/lxc  nothing else
> 
> Are the man-pages anywhere ready for download?

http://lxc.sourceforge.net/man/

"lxc man pages, generated manually from the lxc version 0.7.0."


Does someone have the man-pages for 0.7.5?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-0.7.5.tar.gz does not install man-pages

2011-10-20 Thread Ulli Horlacher
On Thu 2011-10-20 (23:16), Matteo Bernardini wrote:

> still guessing, the problem can be your configure options:

I have only --prefix=/opt/lxc  nothing else

Are the man-pages anywhere ready for download?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-20 Thread Ulli Horlacher
On Thu 2011-10-20 (16:39), Ulli Horlacher wrote:
> On Thu 2011-10-20 (09:18), Serge E. Hallyn wrote:
> 
> > > And everytime I run lxc-start I get a new veth interface:
> > > 
> > > root@vms1:/lxc# ifconfig | grep veth
> > > vethCmnezx Link encap:Ethernet  HWaddr 3e:d6:06:4e:26:ae
> > > vethFGQBYd Link encap:Ethernet  HWaddr fe:0e:3c:f1:15:8c
> > > vethL8qOhT Link encap:Ethernet  HWaddr de:55:6e:db:82:7a
> > > vethMBfmpb Link encap:Ethernet  HWaddr 4a:00:a6:e0:ce:b8
> > > vethMwcqoU Link encap:Ethernet  HWaddr a6:d9:b8:d1:37:77
> > > vethOYkLQf Link encap:Ethernet  HWaddr 7a:3a:bd:cd:d0:51
> > > vethP1BDUb Link encap:Ethernet  HWaddr 52:de:98:d8:5a:71
> > > 
> > > 
> > > Any idea?
> > 
> > Ah, that's an old kernel bug.  Someone (Daniel?) should remember where it
> > got fixed offhand.
> 
> root@vms1:/opt/src# uname -a
> Linux vms1 2.6.35-30-server #60~lucid1-Ubuntu SMP Tue Sep 20 22:28:40 UTC 
> 2011 x86_64 GNU/Linux
> 
> root@vms1:/opt/src# dpkg -l | grep linux-image
> ii  linux-image-2.6.35-30-server   2.6.35-30.60~lucid1
>  Linux kernel image for version 2.6.35 on x86_64
> ii  linux-image-server-lts-backport-maverick   2.6.35.30.38   
>  Linux kernel image on Server Equipment.

Which kernel should I use instead?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-0.7.5.tar.gz does not install man-pages

2011-10-20 Thread Ulli Horlacher
On Thu 2011-10-20 (19:39), Matteo Bernardini wrote:

> > linuxdoc-tools and docbook2man were missing, but after installing them, I
> > still get no lxc man-pages after a newly:
> > 
> > root@vms1:/opt/src/lxc-0.7.5# make install
> > (...)
> > root@vms1:/opt/src/lxc-0.7.5# find /opt/lxc-0.7.5/share/man/
> > /opt/lxc-0.7.5/share/man/
> > /opt/lxc-0.7.5/share/man/man7
> > /opt/lxc-0.7.5/share/man/man5
> > /opt/lxc-0.7.5/share/man/man1
> 
> sorry, have you tried re-running configure?

Deleted /opt/src/lxc-0.7.5, then ran configure & make install again ==>
still no man-pages


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc-0.7.5.tar.gz does not install man-pages

2011-10-20 Thread Ulli Horlacher
On Thu 2011-10-20 (17:39), Matteo Bernardini wrote:
> I got them fine here, maybe you miss/got problems with linuxdoc-tools
> (that contains docbook2man)?

linuxdoc-tools and docbook2man were missing, but after installing them, I
still get no lxc man-pages after a fresh:

root@vms1:/opt/src/lxc-0.7.5# make install
(...)
root@vms1:/opt/src/lxc-0.7.5# find /opt/lxc-0.7.5/share/man/
/opt/lxc-0.7.5/share/man/
/opt/lxc-0.7.5/share/man/man7
/opt/lxc-0.7.5/share/man/man5
/opt/lxc-0.7.5/share/man/man1


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Ciosco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-20 Thread Ulli Horlacher
On Thu 2011-10-20 (09:18), Serge E. Hallyn wrote:

> > And everytime I run lxc-start I get a new veth interface:
> > 
> > root@vms1:/lxc# ifconfig | grep veth
> > vethCmnezx Link encap:Ethernet  HWaddr 3e:d6:06:4e:26:ae
> > vethFGQBYd Link encap:Ethernet  HWaddr fe:0e:3c:f1:15:8c
> > vethL8qOhT Link encap:Ethernet  HWaddr de:55:6e:db:82:7a
> > vethMBfmpb Link encap:Ethernet  HWaddr 4a:00:a6:e0:ce:b8
> > vethMwcqoU Link encap:Ethernet  HWaddr a6:d9:b8:d1:37:77
> > vethOYkLQf Link encap:Ethernet  HWaddr 7a:3a:bd:cd:d0:51
> > vethP1BDUb Link encap:Ethernet  HWaddr 52:de:98:d8:5a:71
> > 
> > 
> > Any idea?
> 
> Ah, that's an old kernel bug.  Someone (Daniel?) should remember where it
> got fixed offhand.

root@vms1:/opt/src# uname -a
Linux vms1 2.6.35-30-server #60~lucid1-Ubuntu SMP Tue Sep 20 22:28:40 UTC 2011 
x86_64 GNU/Linux

root@vms1:/opt/src# dpkg -l | grep linux-image
ii  linux-image-2.6.35-30-server   2.6.35-30.60~lucid1  
   Linux kernel image for version 2.6.35 on x86_64
ii  linux-image-server-lts-backport-maverick   2.6.35.30.38 
   Linux kernel image on Server Equipment.

root@vms1:/opt/src# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 10.04.3 LTS
Release:10.04
Codename:   lucid

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Ciosco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


[Lxc-users] lxc-0.7.5.tar.gz does not install man-pages

2011-10-20 Thread Ulli Horlacher

I have just installed lxc-0.7.5, but I get no man-pages:

root@vms1:/opt/src/lxc-0.7.5# make install
(...)
test -z "/opt/lxc-0.7.5/share/man/man1" || /bin/mkdir -p 
"/opt/lxc-0.7.5/share/man/man1"
test -z "/opt/lxc-0.7.5/share/man/man5" || /bin/mkdir -p 
"/opt/lxc-0.7.5/share/man/man5"
test -z "/opt/lxc-0.7.5/share/man/man7" || /bin/mkdir -p 
"/opt/lxc-0.7.5/share/man/man7"
make[3]: Leaving directory `/opt/src/lxc-0.7.5/doc'
make[2]: Leaving directory `/opt/src/lxc-0.7.5/doc'
make[1]: Leaving directory `/opt/src/lxc-0.7.5/doc'
make[1]: Entering directory `/opt/src/lxc-0.7.5'
make[2]: Entering directory `/opt/src/lxc-0.7.5'
make[2]: Nothing to be done for `install-exec-am'.
test -z "/opt/lxc-0.7.5/share/pkgconfig" || /bin/mkdir -p 
"/opt/lxc-0.7.5/share/pkgconfig"
 /usr/bin/install -c -m 644 lxc.pc '/opt/lxc-0.7.5/share/pkgconfig'
make[2]: Leaving directory `/opt/src/lxc-0.7.5'
make[1]: Leaving directory `/opt/src/lxc-0.7.5'

root@vms1:/opt/src/lxc-0.7.5# find /opt/lxc-0.7.5/share/man
/opt/lxc-0.7.5/share/man
/opt/lxc-0.7.5/share/man/man7
/opt/lxc-0.7.5/share/man/man5
/opt/lxc-0.7.5/share/man/man1

Is there a bug in the Makefile/configure?

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Ciosco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-20 Thread Ulli Horlacher
On Thu 2011-10-20 (09:00), Papp Tamas wrote:
> On 10/20/2011 12:54 AM, Ulli Horlacher wrote:
> 
> > On Wed 2011-10-19 (22:11), Papp Tamas wrote:
> >
> >> What version of lxc package do you use?
> > See my first mail:
> >
> > lxc version: 0.7.4.1
> 
> Well, I don't see anything like this. Actually I use 0.7.5. Try to upgrade.

0.7.5 is out? Ok, I will install it!


> What do you see in system logs?

How stupid of me! I should have checked it first.

lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -d -o /data/lxc/vmtest1.log

/var/log/kern.log :

2011-10-20 15:44:39 [856474.455886] device vethP1BDUb entered promiscuous mode
2011-10-20 15:44:39 [856474.457199] ADDRCONF(NETDEV_UP): vethP1BDUb: link is 
not ready
2011-10-20 15:44:43 [856478.670026] unregister_netdevice: waiting for lo to 
become free. Usage count = 3
2011-10-20 15:44:54 [856488.810020] unregister_netdevice: waiting for lo to 
become free. Usage count = 3
2011-10-20 15:45:04 [856498.950026] unregister_netdevice: waiting for lo to 
become free. Usage count = 3
2011-10-20 15:45:14 [856509.090021] unregister_netdevice: waiting for lo to 
become free. Usage count = 3
2011-10-20 15:45:24 [856519.230023] unregister_netdevice: waiting for lo to 
become free. Usage count = 3
2011-10-20 15:45:34 [856529.370022] unregister_netdevice: waiting for lo to 
become free. Usage count = 3
(...)

And everytime I run lxc-start I get a new veth interface:

root@vms1:/lxc# ifconfig | grep veth
vethCmnezx Link encap:Ethernet  HWaddr 3e:d6:06:4e:26:ae
vethFGQBYd Link encap:Ethernet  HWaddr fe:0e:3c:f1:15:8c
vethL8qOhT Link encap:Ethernet  HWaddr de:55:6e:db:82:7a
vethMBfmpb Link encap:Ethernet  HWaddr 4a:00:a6:e0:ce:b8
vethMwcqoU Link encap:Ethernet  HWaddr a6:d9:b8:d1:37:77
vethOYkLQf Link encap:Ethernet  HWaddr 7a:3a:bd:cd:d0:51
vethP1BDUb Link encap:Ethernet  HWaddr 52:de:98:d8:5a:71


Any idea?

> My guess is this is not an lxc bug or problem, but something about the 
> underlying OS or HW.

Hardware should not be the reason, because I have this bug on two servers
from different vendors (Intel/Dell). Only the software and configuration
are identical.


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Ciosco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-20 Thread Ulli Horlacher
On Wed 2011-10-19 (19:24), Ulli Horlacher wrote:
> Besides my problem with "cannot stop/kill lxc-start" (see other mail), I
> have now an even more severe problem: I cannot start ANY container anymore!

I have a second Ubuntu based LXC server host (which itself is an ESX VM)
with the same configuration, where I was able to start a container, but only
once: after lxc-stop a new lxc-start fails:

root@zoo:/lxc# lxc-start -f /lxc/vmtest1.cfg -n vmtest1 -d -o /lxc/vmtest1.log
root@zoo:/lxc# lxc-info -n vmtest1
'vmtest1' is RUNNING

root@zoo:/lxc# ping vmtest1
PING vmtest1.rus.uni-stuttgart.de (129.69.1.42) 56(84) bytes of data.
64 bytes from vmtest1.rus.uni-stuttgart.de (129.69.1.42): icmp_seq=1 ttl=64 
time=17.6 ms
64 bytes from vmtest1.rus.uni-stuttgart.de (129.69.1.42): icmp_seq=2 ttl=64 
time=0.152 ms

root@zoo:/lxc# lxc-stop -n vmtest1
root@zoo:/lxc# lxc-start -f /lxc/vmtest1.cfg -n vmtest1 -d -o /lxc/vmtest1.log 
root@zoo:/lxc# lxc-info -n vmtest1 
'vmtest1' is STOPPED

root@zoo:/lxc# cat vmtest1.log
  lxc-start 1319104844.997 ERRORlxc_start - inherited fd 3 on 
pipe:[393452]
  lxc-start 1319105525.397 ERRORlxc_start - inherited fd 3 on 
pipe:[393452]


A reboot of server zoo does not help :-(


WTF!?!




-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Ciosco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-20 Thread Ulli Horlacher
On Wed 2011-10-19 (19:24), Ulli Horlacher wrote:

> But nothing happens, there is only a lxc-start process dangling around:
> 
> root@vms1:/lxc# psg vmtest1
> USER   PID  PPID %CPUVSZ COMMAND
> root 31571 1  0.0  20872 lxc-start -f /data/lxc/vmtest1.cfg -n 
> vmtest1 -d -o /data/lxc/vmtest1.log

I forgot to query the process state. 
Here it is (meanwhile there are more hanging processes):

root@vms1:~# ps -eo user,pid,ppid,s,pcpu,vsz,args|grep lxc
root  2171 1 D  0.0  20872 lxc-start -f /data/lxc/vmtest1.cfg -n 
vmtest1 -l DEBUG -d -o /data/lxc/vmtest1.log
root  2573 1 D  0.0  20872 lxc-start -f /data/lxc/vmtest1.cfg -n 
vmtest1 -l DEBUG -o /data/lxc/vmtest1.log
root 30375 1 D  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log
root 30812 1 D  0.0  20872 lxc-start -f /data/lxc/vmtest8.cfg -n 
vmtest8 -d -o /data/lxc/vmtest8.log
root 31571 1 D  0.0  20872 lxc-start -f /data/lxc/vmtest1.cfg -n 
vmtest1 -d -o /data/lxc/vmtest1.log

D ==> uninterruptible sleep

This is the reason why I cannot kill these processes.
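
To see where such a D-state process is stuck, the kernel wait channel can
help (PID taken from the listing above; /proc/<pid>/stack is only available
on kernels built with stack tracing):

ps -o pid,stat,wchan:30,args -p 31571
cat /proc/31571/stack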

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Ciosco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-19 Thread Ulli Horlacher
On Wed 2011-10-19 (22:11), Papp Tamas wrote:

> What version of lxc package do you use?

See my first mail:

lxc version: 0.7.4.1


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Ciosco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-19 Thread Ulli Horlacher
On Wed 2011-10-19 (21:24), Papp Tamas wrote:
> On 10/19/2011 09:18 PM, Ulli Horlacher wrote:
> 
> > root@vms1:/lxc# ps axf | grep vmtest1
> > 31571 ?Ds 0:00 lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -d 
> > -o /data/lxc/vmtest1.log
> >   2171 ?Ds 0:00 lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 
> > -l DEBUG -d -o /data/lxc/vmtest1.log
> 
> Do not run two instances at the same time.

As I wrote in my other mail:

I cannot stop these lxc-start processes any more!
Neither with lxc-stop nor with kill -9 !

There is something terribly wrong!

And some hours ago everything went fine! I made some kind of mistake - but
which one?  I have not upgraded the lxc tools or the kernel, nor did I reboot
the host server.



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Ciosco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-19 Thread Ulli Horlacher
On Wed 2011-10-19 (13:49), Brian K. White wrote:


> I haven't scrutinized your info in detail but one quick question, did 
> you have vsftpd running in the containers

No. I have no ftp-service running at all.


> If you change the name of the container it will create a new cgroup 
> based on the new name. That would allow you to start it again without 
> rebooting the host. Not exactly elegant.

No, I cannot start any container, even new ones (newly created).


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
The demand for IT networking professionals continues to grow, and the
demand for specialized networking skills is growing even more rapidly.
Take a complimentary Learning@Cisco Self-Assessment and learn 
about Cisco certifications, training, and career opportunities. 
http://p.sf.net/sfu/cisco-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] cannot start any more any container?!

2011-10-19 Thread Ulli Horlacher
On Wed 2011-10-19 (19:37), Papp Tamas wrote:

> > Besides my problem with "cannot stop/kill lxc-start" (see other mail), I
> > have now an even more severe problem: I cannot start ANY container anymore!
> >
> > I boot the container with:
> >
> > root@vms1:/lxc# lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -d -o 
> > /data/lxc/vmtest1.log
> 
> What is in log if you don't use -d but -l DEBUG?

  lxc-start 1319050532.778 DEBUGlxc_conf - allocated pty '/dev/pts/17' 
(4/5)
  lxc-start 1319050532.778 DEBUGlxc_conf - allocated pty '/dev/pts/18' 
(6/7)
  lxc-start 1319050532.778 DEBUGlxc_conf - allocated pty '/dev/pts/19' 
(8/9)
  lxc-start 1319050532.778 DEBUGlxc_conf - allocated pty '/dev/pts/20' 
(10/11)
  lxc-start 1319050532.778 INFO lxc_conf - tty's configured
  lxc-start 1319050532.778 DEBUGlxc_console - using '/dev/null' as 
console
  lxc-start 1319050532.778 DEBUGlxc_start - sigchild handler set
  lxc-start 1319050532.778 INFO lxc_start - 'vmtest1' is initialized
  lxc-start 1319050532.784 DEBUGlxc_conf - instanciated veth 
'vethPjfwg9/vethYBPlDf', index is '19'

That's all.


> > But nothing happens, there is only a lxc-start process dangling around:
> >
> > root@vms1:/lxc# psg vmtest1
> > USER   PID  PPID %CPUVSZ COMMAND
> > root 31571 1  0.0  20872 lxc-start -f /data/lxc/vmtest1.cfg -n 
> > vmtest1 -d -o /data/lxc/vmtest1.log
> 
> ps axf ?

root@vms1:/lxc# ps axf | grep vmtest1
31571 ?Ds 0:00 lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -d -o 
/data/lxc/vmtest1.log
 2171 ?Ds 0:00 lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -l 
DEBUG -d -o /data/lxc/vmtest1.log


> > The logfile is empty:
> >
> > root@vms1:/lxc# l vmtest1.log
> > -RW-   0 2011-10-19 19:09 vmtest1.log
> 
> What about strace -ff -s 1 ?

root@vms1:/lxc# strace -o /tmp/lxc.log -ff -s 1 lxc-start -f 
/data/lxc/vmtest1.cfg -n vmtest1 -l DEBUG -o /data/lxc/vmtest1.log &

Uhh...? It does not write /tmp/lxc.log!

Ok, then with:

root@vms1:/lxc# strace -ff -s 1 lxc-start -f /data/lxc/vmtest1.cfg -n 
vmtest1 -l DEBUG -o /data/lxc/vmtest1.log 2>/tmp/lxc.log &

The contents of /tmp/lxc.log do not make much sense to me. It grows
rapidly; after a few seconds it had reached 177 MB, so I terminated the strace.

execve("/opt/lxc-0.7.4.1/bin/lxc-start", ["lxc-start", "-f", 
"/data/lxc/vmtest1.cfg", "-n", "vmtest1", "-l", "DEBUG", "-o", 
"/data/lxc/vmtest1.log"], [/* 31 vars */]) = 0
access("/etc/ld.so.nohwcap", F_OK)  = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7fc3ed14b000
access("/etc/ld.so.preload", R_OK)  = -1 ENOENT (No such file or directory)
open("/opt/lxc-0.7.4.1/lib/tls/x86_64/liblxc.so.0", O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat("/opt/lxc-0.7.4.1/lib/tls/x86_64", 0x7fff7b417ba0) = -1 ENOENT (No such 
file or directory)
open("/opt/lxc-0.7.4.1/lib/tls/liblxc.so.0", O_RDONLY) = -1 ENOENT (No such 
file or directory)
stat("/opt/lxc-0.7.4.1/lib/tls", 0x7fff7b417ba0) = -1 ENOENT (No such file or 
directory)
open("/opt/lxc-0.7.4.1/lib/x86_64/liblxc.so.0", O_RDONLY) = -1 ENOENT (No such 
file or directory)
stat("/opt/lxc-0.7.4.1/lib/x86_64", 0x7fff7b417ba0) = -1 ENOENT (No such file 
or directory)
open("/opt/lxc-0.7.4.1/lib/liblxc.so.0", O_RDONLY) = 3
(...)
open("/dev/tty", O_RDWR|O_CREAT|O_APPEND|O_CLOEXEC, 0600) = 14
getuid()= 0
write(3, "  lxc-start 1319051254.870 DEBUGlxc_console - using 
'/dev/tty' as console\n", 82) = 82
ioctl(14, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig -icanon -echo ...}) 
= 0
ioctl(14, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig -icanon -echo ...}) 
= 0
ioctl(14, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig -icanon -echo ...}) 
= 0
ioctl(14, SNDCTL_TMR_CONTINUE or TCSETSF, {B38400 opost -isig -icanon -echo 
...}) = ? ERESTARTSYS (To be restarted)
--- SIGTTOU (Stopped (tty output)) @ 0 (0) ---
--- SIGTTOU (Stopped (tty output)) @ 0 (0) ---
ioctl(14, SNDCTL_TMR_CONTINUE or TCSETSF, {B38400 opost -isig -icanon -echo 
...}) = ? ERESTARTSYS (To be restarted)
--- SIGTTOU (Stopped (tty output)) @ 0 (0) ---
--- SIGTTOU (Stopped (tty output)) @ 0 (0) ---

The last 3 lines are then repeated endlessly.

And the load goes up:

root@vms1:/lxc# uptime 
 21:13:28 up 9 days,  3:23,  2 users,  load average: 6.01, 5.95, 5.22

The host machine is still responsive; I cannot see any CPU-hogging
process in top.
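
A side note: processes stuck in uninterruptible sleep ("D" state) count
towards the load average even though they use no CPU, which may explain a
load of 6 without any visibly busy process. They can be listed with something
like:

  ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'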



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/


[Lxc-users] cannot start any more any container?!

2011-10-19 Thread Ulli Horlacher
Besides my problem with "cannot stop/kill lxc-start" (see other mail), I
have now an even more severe problem: I cannot start ANY container anymore!

I am sure I have overlooked something, but I cannot see what. I am really
desperate now, because this is happening in my production environment!

Server host is:

root@vms1:/lxc# lsb_release -a; uname -a; lxc-version 
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 10.04.3 LTS
Release:10.04
Codename:   lucid
Linux vms1 2.6.35-30-server #60~lucid1-Ubuntu SMP Tue Sep 20 22:28:40 UTC 2011 
x86_64 GNU/Linux
lxc version: 0.7.4.1

(linux-image-server-lts-backport-maverick)

All my lxc files reside in /lxc :

root@vms1:/lxc# l vmtest1*
dRWX   - 2011-05-17 19:47 vmtest1
-RWT   1,127 2011-10-19 18:54 vmtest1.cfg
-RW- 476 2011-10-19 18:54 vmtest1.fstab

I boot the container with:

root@vms1:/lxc# lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 -d -o 
/data/lxc/vmtest1.log


But nothing happens, there is only a lxc-start process dangling around:

root@vms1:/lxc# psg vmtest1
USER   PID  PPID %CPUVSZ COMMAND
root 31571 1  0.0  20872 lxc-start -f /data/lxc/vmtest1.cfg -n vmtest1 
-d -o /data/lxc/vmtest1.log

The logfile is empty:

root@vms1:/lxc# l vmtest1.log
-RW-   0 2011-10-19 19:09 vmtest1.log


And no corresponding /cgroup/vmtest1 entry:

root@vms1:/lxc# l /cgroup/
dRWX   - 2011-10-10 17:50 /cgroup/2004
dRWX   - 2011-10-10 17:50 /cgroup/2017
dRWX   - 2011-10-10 17:50 /cgroup/libvirt
-RW-   0 2011-10-10 17:50 /cgroup/cgroup.event_control
-RW-   0 2011-10-10 17:50 /cgroup/cgroup.procs
-RW-   0 2011-10-10 17:50 /cgroup/cpu.rt_period_us
-RW-   0 2011-10-10 17:50 /cgroup/cpu.rt_runtime_us
-RW-   0 2011-10-10 17:50 /cgroup/cpu.shares
-RW-   0 2011-10-10 17:50 /cgroup/cpuacct.stat
-RW-   0 2011-10-10 17:50 /cgroup/cpuacct.usage
-RW-   0 2011-10-10 17:50 /cgroup/cpuacct.usage_percpu
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.cpu_exclusive
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.cpus
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.mem_exclusive
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.mem_hardwall
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_migrate
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_pressure
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_pressure_enabled
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_spread_page
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.memory_spread_slab
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.mems
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.sched_load_balance
-RW-   0 2011-10-10 17:50 /cgroup/cpuset.sched_relax_domain_level
-RW-   0 2011-10-10 17:50 /cgroup/devices.allow
-RW-   0 2011-10-10 17:50 /cgroup/devices.deny
-RW-   0 2011-10-10 17:50 /cgroup/devices.list
-RW-   0 2011-10-10 17:50 /cgroup/memory.failcnt
-RW-   0 2011-10-10 17:50 /cgroup/memory.force_empty
-RW-   0 2011-10-10 17:50 /cgroup/memory.limit_in_bytes
-RW-   0 2011-10-10 17:50 /cgroup/memory.max_usage_in_bytes
-RW-   0 2011-10-10 17:50 /cgroup/memory.memsw.failcnt
-RW-   0 2011-10-10 17:50 /cgroup/memory.memsw.limit_in_bytes
-RW-   0 2011-10-10 17:50 /cgroup/memory.memsw.max_usage_in_bytes
-RW-   0 2011-10-10 17:50 /cgroup/memory.memsw.usage_in_bytes
-RW-   0 2011-10-10 17:50 /cgroup/memory.move_charge_at_immigrate
-RW-   0 2011-10-10 17:50 /cgroup/memory.oom_control
-RW-   0 2011-10-10 17:50 /cgroup/memory.soft_limit_in_bytes
-RW-   0 2011-10-10 17:50 /cgroup/memory.stat
-RW-   0 2011-10-10 17:50 /cgroup/memory.swappiness
-RW-   0 2011-10-10 17:50 /cgroup/memory.usage_in_bytes
-RW-   0 2011-10-10 17:50 /cgroup/memory.use_hierarchy
-RW-   0 2011-10-10 17:50 /cgroup/net_cls.classid
-RW-   0 2011-10-10 17:50 /cgroup/notify_on_release
-RW-   0 2011-10-10 17:50 /cgroup/release_agent
-RW-   0 2011-10-10 17:50 /cgroup/tasks

At last the container config file:

lxc.utsname = vmtest1
lxc.tty = 4
lxc.pts = 1024
lxc.network.type = veth
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.flags = up
lxc.network.mtu = 1500
lxc.network.ipv4 = 129.69.1.42/24
lxc.rootfs = /lxc/vmtest1
lxc.mount = /lxc/vmtest1.fstab
# which CPUs
lxc.cgroup.cpuset.cpus = 1,2,3
lxc.cgroup.cpu.shares = 1024
# http://www.mjmwired.net/kernel/Documentation/cgroups/memory.txt
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.memory.memsw.limit_in_bytes = 512M
lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.

[Lxc-users] cannot stop/kill lxc-start

2011-10-19 Thread Ulli Horlacher

I have some lxc-start processes which I cannot stop or kill:


root@vms2:/lxc# psg bunny
USER   PID  PPID %CPUVSZ COMMAND
root 27033 1  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log
root 26620 1  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log

root@vms2:/lxc# lxc-stop -n bunny -o /dev/tty
root@vms2:/lxc# echo $?
0

root@vms2:/lxc# psg bunny
USER   PID  PPID %CPUVSZ COMMAND
root 27033 1  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log
root 26620 1  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log

root@vms2:/lxc# kill 27033 26620
root@vms2:/lxc# psg bunny
USER   PID  PPID %CPUVSZ COMMAND
root 27033 1  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log
root 26620 1  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log

root@vms2:/lxc# kill -9 27033 26620
root@vms2:/lxc# psg bunny
USER   PID  PPID %CPUVSZ COMMAND
root 27033 1  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log
root 26620 1  0.0  20872 lxc-start -f /data/lxc/bunny.cfg -n bunny -d 
-o /data/lxc/bunny.log


root@vms2:/lxc# uname -a; lxc-version
Linux vms2 2.6.35-30-server #59~lucid1-Ubuntu SMP Thu Sep 1 19:39:17 UTC 2011 
x86_64 GNU/Linux
lxc version: 0.7.4.1


I do not want to reboot the server host, because there are other
containers running in production.

How can I terminate these dangling lxc-start processes?
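
The "Ds" in the ps output above means the processes are in uninterruptible
sleep; even SIGKILL is only delivered once the blocking kernel call returns,
which is why kill -9 has no effect. To see what they are waiting on,
something like this should work (PIDs taken from the listing above):

  ps -o pid,stat,wchan:32,cmd -p 27033,26620
  cat /proc/27033/stack   # kernel stack, if supported by this kernel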

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2d-oct
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Graceful shutdowns: current best practices?

2011-10-18 Thread Ulli Horlacher
On Tue 2011-10-18 (15:22), Derek Simkowiak wrote:
> What is the best method for gracefully shutting down LXC containers 
> in a production environment?

I use "lxc -s container" which itself executes a "shutdown -h now" via
cmdd, see: http://fex.rus.uni-stuttgart.de/lxc.html


> lxc-attach -n CONTAINER shutdown -h now
> 
>  Is there any drawback to doing that, instead? 

root@vms2:~# lxc-attach -n bunny -- shutdown -h now
lxc-attach: Does this kernel version support 'attach' ?
lxc-attach: failed to enter the namespace

root@vms2:~# uname -a
Linux vms2 2.6.35-30-server #59~lucid1-Ubuntu SMP Thu Sep 1 19:39:17 UTC 2011 
x86_64 GNU/Linux


> 3. An "official" command name for graceful shutdowns from the host.  I 
> propose lxc-shutdown.  (There is an unofficial OpenSuse package from 
> rdannert that has a "lxc-shutdown-all" command, but I have not seen the 
> name "lxc-shutdown" used anywhere.)
> 
> 4. Which signal?  SIGINT?  SIGPWR?  Both?

That only works for classic init-based systems, not for Upstart-based ones
like Ubuntu!
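
If you want to try the signal route anyway, the container's init PID can be
taken from the cgroup on the host. A rough sketch - it assumes the cgroup is
mounted at /cgroup, that CONTAINER stands for the container name, and that
the first entry in the tasks file is the container's init (usually, but not
guaranteed to be, the case):

  pid=$(head -1 /cgroup/CONTAINER/tasks)
  # sysvinit handles SIGPWR via the powerfail entries in inittab;
  # Upstart simply ignores it
  kill -s PWR "$pid"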


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2d-oct
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Ubuntu /etc/init.d/xinetd kills container's xinetd

2011-10-06 Thread Ulli Horlacher
On Thu 2011-10-06 (09:14), Ulli Horlacher wrote:

> > Then attach the patch to the bug making sure that it's flagged as a 
> > patch. This should ensure someone will look at it, sadly not for Oneiric 
> > (11.10) but hopefully for Precise (12.04).
> > 
> > Launchpad lets you mark a bug as affecting multiple packages, so I'd 
> > suggest you add a "task" to any other package showing the same bug 
> 
> ok, done.

Result:

From: Robie Basak <868...@bugs.launchpad.net>
To: frams...@rus.uni-stuttgart.de
Subject: [Bug 868538] Re:  /etc/init.d/xinetd kills LXC container's 
xinetd
Date: Thu, 06 Oct 2011 09:41:03 -

Setting Importance to Low as this bug applies only to an unusual
configuration and there is a workaround available.


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Ubuntu /etc/init.d/xinetd kills container's xinetd

2011-10-06 Thread Ulli Horlacher
On Wed 2011-10-05 (21:48), Stéphane Graber wrote:

> > The problem is: when I stop xinetd on the host with command
> > "/etc/init.d/xinetd stop"
> > this stops all LXC container xinetd processes, too!

> Can you file a bug here: http://launchpad.net/ubuntu/+source/xinetd/+filebug

I already did that yesterday.


> Then attach the patch to the bug making sure that it's flagged as a 
> patch. This should ensure someone will look at it, sadly not for Oneiric 
> (11.10) but hopefully for Precise (12.04).
> 
> Launchpad lets you mark a bug as affecting multiple packages, so I'd 
> suggest you add a "task" to any other package showing the same bug 

ok, done.


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


[Lxc-users] Ubuntu /etc/init.d/xinetd kills container's xinetd

2011-10-05 Thread Ulli Horlacher

I have an Ubuntu LXC host with several containers running internet
services via xinetd.

Sometimes the container services died for no apparent reason and without any
logfile entry.  At first I thought LXC was not as stable as I had hoped, but
now I have found the bug - it is inside /etc/init.d/xinetd !

The problem is: when I stop xinetd on the host with command
"/etc/init.d/xinetd stop" 
this stops all LXC container xinetd processes, too!

/etc/init.d/xinetd contains bad code which does not respect the xinetd
pidfile. See "man man start-stop-daemon":

  Note: unless --pidfile is specified, start-stop-daemon behaves similar
  to killall(1).  start-stop-daemon will scan the process table looking
  for any processes which match the process name (...)

The following patch prevents this unwanted behaviour:

--- /tmp/xinetd 2011-10-05 18:08:13.0 +0200
+++ xinetd  2011-10-05 18:23:19.0 +0200
@@ -17,7 +17,7 @@
 DAEMON=/usr/sbin/$NAME
 PIDFILE=/var/run/$NAME.pid
 
-test -x "$DAEMON" || exit 0
+test -x $DAEMON || exit 0
 
 test -e /etc/default/$NAME && . /etc/default/$NAME
 case "$INETD_COMPAT" in
@@ -47,18 +47,20 @@
 start)
 checkportmap
 log_daemon_msg "Starting internet superserver" "$NAME"
-start-stop-daemon --start --quiet --background --exec "$DAEMON" -- \
--pidfile "$PIDFILE" $XINETD_OPTS
+start-stop-daemon --start --pidfile $PIDFILE --quiet --background \
+  --exec $DAEMON -- -pidfile $PIDFILE $XINETD_OPTS
 log_end_msg $?
 ;;
 stop)
 log_daemon_msg "Stopping internet superserver" "$NAME"
-start-stop-daemon --stop --signal 3 --quiet --oknodo --exec "$DAEMON"
+start-stop-daemon --stop --pidfile $PIDFILE --signal 3 --quiet \
+  --oknodo --exec $DAEMON
 log_end_msg $?
 ;;
 reload)
 log_daemon_msg "Reloading internet superserver configuration" "$NAME"
-start-stop-daemon --stop --signal 1 --quiet --oknodo --exec "$DAEMON"
+start-stop-daemon --stop --pidfile $PIDFILE --signal 1 --quiet \
+  --oknodo --exec $DAEMON
 log_end_msg $?
 ;;
 restart|force-reload)
@@ -66,7 +68,7 @@
 $0 start
 ;;
 status)
-   status_of_proc -p "$PIDFILE" "$DAEMON" xinetd && exit 0 || exit $?
+   status_of_proc -p $PIDFILE $DAEMON xinetd && exit 0 || exit $?
;;
 *)
 echo "Usage: /etc/init.d/xinetd 
{start|stop|reload|force-reload|restart|status}"


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] New LXC Creation Script: lxc-ubuntu-x

2011-10-05 Thread Ulli Horlacher

On Thu 2011-09-29 (18:05), Derek Simkowiak wrote:
> Hello,
>  I have just published a new Open Source LXC container creation 
> script, called lxc-ubuntu-x.  It implements all the latest "best 
> practices" I found on the web, and introduces some new features.  I am 
> using this script in a production environment, and I invite you to check 
> it out:
> 
> http://derek.simkowiak.net/lxc-ubuntu-x/
> 
>  It currently generates Ubuntu or Debian containers.
> 
>  I created this because the scripts and tutorials I found on the web 
> all had shortcomings of one form or another.  For example, many blogs 
> recommend mounting filesystems within the container's init (which does 
> not allow for a shared read-only mount, because root can simply remount 
> it).  So, this script uses an external fstab file.  Also:
> 
> - It creates a random MAC address with a high vendor address, to 
> workaround Launchpad bug #58404
> - It generates new (unique) SSH host keys and SSL certificates for each 
> new container
> - It applies all necessary dev, mtab, and init script fixes, including 
> booting containers with Upstart
> - It is fully non-interactive; it allows for automatic generation of 
> containers. (Getting this to work was surprisingly difficult!)
> - It restricts container "capabilities" as much as possible by default
> - It creates a default user, sets his password, installs any SSH 
> "authorized_keys" file you want, and adds him to the sudo admin group.

Besides the last step, I have all of this in my solution, which I posted to
the list several months ago:

http://fex.rus.uni-stuttgart.de/lxc.html

Plus: I can execute any command inside a container without ssh.



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] stopping a container

2011-09-05 Thread Ulli Horlacher
On Mon 2011-09-05 (09:24), Papp Tamas wrote:

> On 09/05/2011 08:38 AM, Jäkel, Guido wrote:

>> Another (planned) way is to use lxc-execute, but this is still not
>> working. Ulli Hornbacher therefore wrote it's own workaround: A little
>> daemon executes all command pushed in by a command running at the host --
>> disregarding to all aspects of security.

Only root of the host has write permission to the lxc-cmdd fifo. If you
want more security you have to use VMS :-)
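
The principle behind such a fifo channel is simple. The following is only a
minimal sketch of the idea, not the real lxc-cmdd; it assumes the container
rootfs lives under /lxc/NAME on the host (NAME is a placeholder for the
container name):

  # inside the container, started at boot time:
  mkfifo -m 600 /lxc/cmd 2>/dev/null
  while read -r cmd < /lxc/cmd; do
      sh -c "$cmd"
  done

  # on the host, to run a command inside the container:
  echo "shutdown -h now" > /lxc/NAME/lxc/cmd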


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free "Love Thy Logs" t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] stopping a container

2011-09-04 Thread Ulli Horlacher
On Fri 2011-09-02 (18:33), Matteo Bernardini wrote:

> personally, I have no problem in ssh'ing in the container and halt it. :)

This needs sshd running inside the container and correct routing.

I use a small lxc-cmdd which let me do: lxc -s vm_name

Indeed, this calls lxc -x which can execute any command inside the
container. In this case a halt command.

See:  http://fex.rus.uni-stuttgart.de/lxc.html


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Special Offer -- Download ArcSight Logger for FREE!
Finally, a world-class log management solution at an even better 
price-free! And you'll get a free "Love Thy Logs" t-shirt when you
download Logger. Secure your free ArcSight Logger TODAY!
http://p.sf.net/sfu/arcsisghtdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] [RFC] best way to add creation of lvm containers

2011-08-24 Thread Ulli Horlacher
On Fri 2011-07-01 (12:31), Serge E. Hallyn wrote:

> so lxc-clone will create a snapshot-based clone of an lvm-backed
> container in about a second.

My "lxc" script (*) can do this in 2 seconds, without bothering LVM:

root@vms2:/lxc# lxc
usage: lxc option
options: -l  list containers
 -p  list all container processes

usage: lxc [-v] -C container [gateway/net]
options: -v  verbose mode
 -C  create new container clone

usage: lxc [-v] option container
options: -v  verbose mode
 -b  boot container
 -c  connect container console
 -e  edit container configuration
 -x  execute command in container
 -s  shutdown container
 -p  list container processes
 -l  container process list tree

root@vms2:/lxc# lxc -l
container  disk (MB)RAM (MB)   start-PIDstatus
fex57341 1132589   running
ubuntu   553   0   0   stopped
vmtest8  515   0   0   stopped

root@vms2:/lxc# time lxc -C bunny 129.69.8.254/24

real0m1.822s
user0m0.080s
sys 0m1.380s

root@vms2:/lxc# lxc -b bunny
root@vms2:/lxc# lxc -l  
container  disk (MB)RAM (MB)   start-PIDstatus
bunny553   5   12784   running
fex57350 1222589   running
ubuntu   553   0   0   stopped
vmtest8  515   0   0   stopped


(*) http://fex.rus.uni-stuttgart.de/lxc.html

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
EMC VNX: the world's simplest storage, starting under $10K
The only unified storage solution that offers unified management 
Up to 160% more powerful than alternatives and 25% more efficient. 
Guaranteed. http://p.sf.net/sfu/emc-vnx-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] security question

2011-08-19 Thread Ulli Horlacher
On Fri 2011-08-19 (15:38), Dong-In David Kang wrote:

> We've found out that inside of an LXC instance, root can insert/remove 
> modules of the host.
> Is it normal?
> If it is doable, an LXC image may corrupt the host system, which is not good 
> in terms of security.

Put:

lxc.cap.drop = sys_module

to your LXC container config file.
And by the way:

lxc.cap.drop = sys_admin

is also a good idea, to prevent the container root from modifying mount
options, for example setting the container filesystem to read-only, which can
affect ALL containers!
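
Whether a capability is really gone can be checked from inside the container,
for example by looking at the bounding set of PID 1 (CapBnd is a hex bit
mask; sys_module is bit 16, sys_admin bit 21):

  grep CapBnd /proc/1/status
  # decode the mask, if the libcap tools are installed:
  capsh --decode=$(awk '/CapBnd/ {print $2}' /proc/1/status)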


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Get a FREE DOWNLOAD! and learn more about uberSVN rich system, 
user administration capabilities and model configuration. Take 
the hassle out of deploying and managing Subversion and the 
tools developers use with it. http://p.sf.net/sfu/wandisco-d2d-2
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] LXC vs ESX

2011-06-04 Thread Ulli Horlacher
On Sat 2011-06-04 (11:38), Gordon Henderson wrote:

> However I guess it's just for university types - those with the benefits 
> of Gb upload speeds... The poor people without that benefit - and the 
> majority will have sub 1Mb/sec upload speeds 

Many home users in Germany have upload speeds of 20 Mb/s. As far as I
know, the standard connection for home users in South Korea is 100 Mb/s.

Besides this, all German universities and most big companies have 1 Gb/s
and above (e.g. my university has 40 Gb/s).

So it is good to have software which supports such fast links.

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Simplify data backup and recovery for your virtual environment with vRanger.
Installation's a snap, and flexible recovery options mean your data is safe,
secure and there when you need it. Discover what all the cheering's about.
Get your free trial download today. 
http://p.sf.net/sfu/quest-dev2dev2 
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] LXC vs ESX

2011-06-04 Thread Ulli Horlacher

On Mon 2011-05-23 (13:22), Ulli Horlacher wrote:
> A small network application benchmark between LXC and VMware ESX:
> 
> 
> ESX:
> 
> framstag@diaspora:~: fexsend  -i unifex /tmp/2GB.tmp .
> Server/User: http://fex.uni-stuttgart.de/frams...@rus.uni-stuttgart.de
> /tmp/2GB.tmp : 2048 MB in 87 s (24105 kB/s)
> 
> 
> LXC:
> 
> framstag@diaspora:~: fexsend  -i flupp /tmp/2GB.tmp .
> Server/User: http://flupp/frams...@rus.uni-stuttgart.de
> /tmp/2GB.tmp : 2048 MB in 24 s (87381 kB/s)

I have now coupled both:

The F*EX service http://fex.uni-stuttgart.de/index.html runs on Ubuntu in
LXC on ESX. The throughput is, as expected, the same as with Ubuntu on ESX
alone.

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Simplify data backup and recovery for your virtual environment with vRanger.
Installation's a snap, and flexible recovery options mean your data is safe,
secure and there when you need it. Discover what all the cheering's about.
Get your free trial download today. 
http://p.sf.net/sfu/quest-dev2dev2 
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument

2011-06-01 Thread Ulli Horlacher
On Wed 2011-06-01 (14:40), Daniel Lezcano wrote:

> > root@vms2:~# lxc-checkconfig
> > Kernel config /proc/config.gz not found, looking in other places...
> > Found kernel config file /boot/config-2.6.32-32-server
> > --- Namespaces ---
> > Namespaces: enabled
> > Utsname namespace: enabled
> > Ipc namespace: enabled
> > Pid namespace: enabled
> > User namespace: enabled
> > Network namespace: missing
> > ^^
> > A!
> >
> > Can I enable it at runtime or is it a compile time feature?
> 
> It is a compile feature :(

Bad...


> https://lists.ubuntu.com/archives/kernel-team/2011-March/015173.html
> 
> "Well, there is an alternative for those folks that _are_ dependent on 
> NET_NS:
> 
> sudo apt-get install linux-image-server-lts-backport-maverick"

With this workaround my LXC containers are working again! Thanks!

Nevertheless this IS an (Ubuntu) bug!
Both packages, lxc and linux-image, belong to the same Ubuntu (LTS!) release
and should work together! I will file a bug report at launchpad.net
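
(For the record, besides lxc-checkconfig one can also check the kernel config
directly, e.g.:

  grep CONFIG_NET_NS /boot/config-$(uname -r)

which should print "CONFIG_NET_NS=y" if network namespaces are compiled in.)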

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Simplify data backup and recovery for your virtual environment with vRanger. 
Installation's a snap, and flexible recovery options mean your data is safe,
secure and there when you need it. Data protection magic?
Nope - It's vRanger. Get your free trial download today. 
http://p.sf.net/sfu/quest-sfdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument

2011-06-01 Thread Ulli Horlacher
On Wed 2011-06-01 (11:17), Daniel Lezcano wrote:
> On 06/01/2011 10:45 AM, Ulli Horlacher wrote:
> 
> [ ... ]
> 
> > 2011-06-01 10:34:53 [ 5228.816214] device vetheBqcj5 entered promiscuous 
> > mode
> > 2011-06-01 10:34:53 [ 5228.817240] ADDRCONF(NETDEV_UP): vetheBqcj5: link is 
> > not ready
> >
> > This is strange, because I have not configured vetheBqcj5.
> 
> It is configured by lxc automatically. No worries.

Ahh.. ok :-)


> Oh ! As far as I remember the ubuntu kernel team disabled the network 
> namespace in the kernel.
> 
> Can you check that with lxc-checkconfig ?

root@vms2:~# lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.32-32-server
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: missing
^^
A!

Can I enable it at runtime or is it a compile time feature?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Simplify data backup and recovery for your virtual environment with vRanger. 
Installation's a snap, and flexible recovery options mean your data is safe,
secure and there when you need it. Data protection magic?
Nope - It's vRanger. Get your free trial download today. 
http://p.sf.net/sfu/quest-sfdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument

2011-06-01 Thread Ulli Horlacher

On Wed 2011-06-01 (10:30), Daniel Lezcano wrote:
> On 06/01/2011 10:25 AM, Ulli Horlacher wrote:
> 
> >
> > On Wed 2011-06-01 (10:18), Daniel Lezcano wrote:
> >
> >>> root@vms2:/lxc# lxc-start -f bunny.cfg -n bunny -d -o /dev/tty
> >>> lxc-start 1306913218.901 ERRORlxc_namespace - failed to 
> >>> clone(0x6c02): Invalid argument
> >>
> >> Can you show the content of the /cgroup root directory please ?
> >
> 
> Any message in /var/log/messages ?

2011-06-01 10:34:53 [ 5228.816214] device vetheBqcj5 entered promiscuous mode
2011-06-01 10:34:53 [ 5228.817240] ADDRCONF(NETDEV_UP): vetheBqcj5: link is not 
ready

This is strange, because I have not configured vetheBqcj5.

I use:

root@vms2:/var/log# grep network /lxc/bunny.cfg
lxc.network.type = veth
lxc.network.link = br8
lxc.network.name = eth0

With:

root@vms2:/var/log# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
up ifconfig eth0 up

auto br0
iface br0 inet static
address 129.69.1.68
netmask 255.255.255.0
gateway 129.69.1.254
bridge_ports eth0
bridge_stp off
bridge_maxwait 5
post-up /usr/sbin/brctl setfd br0 0

# VLAN8
auto eth2
iface eth2 inet manual
up ifconfig eth2 up

auto vlan8
iface vlan8 inet manual
vlan_raw_device eth2
up ifconfig vlan8 up

auto br8
iface br8 inet manual
bridge_ports vlan8
bridge_maxwait 5
bridge_stp off
post-up /usr/sbin/brctl setfd br8 0


root@vms2:/var/log# ifconfig
br0   Link encap:Ethernet  HWaddr 00:23:ae:6c:4f:cd  
  inet addr:129.69.1.68  Bcast:129.69.1.255  Mask:255.255.255.0
  inet6 addr: fe80::223:aeff:fe6c:4fcd/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:140360 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4288 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:10991449 (10.9 MB)  TX bytes:631122 (631.1 KB)

br8   Link encap:Ethernet  HWaddr 00:e0:52:b7:37:fe  
  inet6 addr: fe80::2e0:52ff:feb7:37fe/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:54832 errors:0 dropped:0 overruns:0 frame:0
  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:2654293 (2.6 MB)  TX bytes:468 (468.0 B)

eth0  Link encap:Ethernet  HWaddr 00:23:ae:6c:4f:cd  
  inet6 addr: fe80::223:aeff:fe6c:4fcd/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:154863 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4300 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:30994037 (30.9 MB)  TX bytes:632223 (632.2 KB)
  Memory:fe9e-fea0 

eth2  Link encap:Ethernet  HWaddr 00:e0:52:b7:37:fe  
  inet6 addr: fe80::2e0:52ff:feb7:37fe/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:55625 errors:0 dropped:0 overruns:0 frame:0
  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:3513414 (3.5 MB)  TX bytes:936 (936.0 B)
  Interrupt:18 Base address:0x8f00 

loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:152 errors:0 dropped:0 overruns:0 frame:0
  TX packets:152 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:12394 (12.3 KB)  TX bytes:12394 (12.3 KB)

veth80Eh90 Link encap:Ethernet  HWaddr 3a:ce:cf:67:68:55  
  UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

veth9NBAU3 Link encap:Ethernet  HWaddr 2a:00:8d:c1:1f:98  
  UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vethO3dr66 Link encap:Ethernet  HWaddr 62:05:7b:53:4d:07  
  UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vethORKGHz Link encap:Ethernet  HWaddr 0e:b6:91:af:d2:9d  
  UP BROADCAST PROMISC 

Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument

2011-06-01 Thread Ulli Horlacher

On Wed 2011-06-01 (10:18), Daniel Lezcano wrote:

> > root@vms2:/lxc# lxc-start -f bunny.cfg -n bunny -d -o /dev/tty
> >lxc-start 1306913218.901 ERRORlxc_namespace - failed to 
> > clone(0x6c02): Invalid argument
> 
> Can you show the content of the /cgroup root directory please ?

root@vms2:/cgroup# ls -l
total 0
drwxr-xr-x 2 root root 0 2011-06-01 09:08 2141
drwxr-xr-x 2 root root 0 2011-06-01 09:19 3526
drwxr-xr-x 2 root root 0 2011-06-01 09:20 4330
drwxr-xr-x 2 root root 0 2011-06-01 09:20 4719
drwxr-xr-x 2 root root 0 2011-06-01 09:20 5493
-r--r--r-- 1 root root 0 2011-06-01 09:07 cgroup.procs
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpu.rt_period_us
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpu.rt_runtime_us
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpu.shares
-r--r--r-- 1 root root 0 2011-06-01 09:07 cpuacct.stat
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuacct.usage
-r--r--r-- 1 root root 0 2011-06-01 09:07 cpuacct.usage_percpu
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.cpu_exclusive
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.cpus
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.mem_exclusive
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.mem_hardwall
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.memory_migrate
-r--r--r-- 1 root root 0 2011-06-01 09:07 cpuset.memory_pressure
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.memory_pressure_enabled
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.memory_spread_page
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.memory_spread_slab
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.mems
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.sched_load_balance
-rw-r--r-- 1 root root 0 2011-06-01 09:07 cpuset.sched_relax_domain_level
--w--- 1 root root 0 2011-06-01 09:07 devices.allow
--w--- 1 root root 0 2011-06-01 09:07 devices.deny
-r--r--r-- 1 root root 0 2011-06-01 09:07 devices.list
drwxr-xr-x 4 root root 0 2011-06-01 09:08 libvirt
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.failcnt
--w--- 1 root root 0 2011-06-01 09:07 memory.force_empty
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.limit_in_bytes
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.max_usage_in_bytes
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.memsw.failcnt
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.memsw.limit_in_bytes
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.memsw.max_usage_in_bytes
-r--r--r-- 1 root root 0 2011-06-01 09:07 memory.memsw.usage_in_bytes
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.soft_limit_in_bytes
-r--r--r-- 1 root root 0 2011-06-01 09:07 memory.stat
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.swappiness
-r--r--r-- 1 root root 0 2011-06-01 09:07 memory.usage_in_bytes
-rw-r--r-- 1 root root 0 2011-06-01 09:07 memory.use_hierarchy
-rw-r--r-- 1 root root 0 2011-06-01 09:07 net_cls.classid
-rw-r--r-- 1 root root 0 2011-06-01 09:07 notify_on_release
-rw-r--r-- 1 root root 0 2011-06-01 09:07 release_agent
-rw-r--r-- 1 root root 0 2011-06-01 09:07 tasks

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Simplify data backup and recovery for your virtual environment with vRanger. 
Installation's a snap, and flexible recovery options mean your data is safe,
secure and there when you need it. Data protection magic?
Nope - It's vRanger. Get your free trial download today. 
http://p.sf.net/sfu/quest-sfdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


[Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument

2011-06-01 Thread Ulli Horlacher

After a minor kernel update lxc-start does not work any more:

root@vms2:/lxc# lxc-start -f bunny.cfg -n bunny -d -o /dev/tty
  lxc-start 1306913218.901 ERRORlxc_namespace - failed to 
clone(0x6c02): Invalid argument
  lxc-start 1306913218.901 ERRORlxc_start - Invalid argument - failed 
to fork into a new namespace
  lxc-start 1306913218.901 ERRORlxc_start - failed to spawn 'bunny'
  lxc-start 1306913218.901 ERRORlxc_cgroup - No such file or directory 
- failed to remove cgroup '/cgroup/bunny'

root@vms2:/lxc# uname -a; lxc-version
Linux vms2 2.6.32-32-server #62-Ubuntu SMP Wed Apr 20 22:07:43 UTC 2011 x86_64 
GNU/Linux
lxc version: 0.7.4.1

root@vms2:/lxc# mount | grep cgroup
none on /cgroup type cgroup (rw)



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
Simplify data backup and recovery for your virtual environment with vRanger. 
Installation's a snap, and flexible recovery options mean your data is safe,
secure and there when you need it. Data protection magic?
Nope - It's vRanger. Get your free trial download today. 
http://p.sf.net/sfu/quest-sfdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Howto detect the containers host

2011-05-26 Thread Ulli Horlacher
On Thu 2011-05-26 (09:34), Jäkel, Guido wrote:

> something related to the "Howto detect we're a LXC Container" is the
> question: "Howto detect from inside a container the name (or something
> equivalent) of the machine we're hosted on?"

My lxc meta-script creates /lxc/hostname inside the container at startup:

root@vms2:/lxc# lxc -b vmtest8
root@vms2:/lxc# lxc -x vmtest8 "uname -a; cat /lxc/hostname"
Linux vmtest8 2.6.32-31-server #61-Ubuntu SMP Fri Apr 8 19:44:42 UTC 2011 
x86_64 GNU/Linux
vms2
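
The mechanism is trivial to replicate in any wrapper script. Roughly - only a
sketch, assuming the container rootfs lives under /lxc/NAME as in my setup
(NAME is a placeholder for the container name):

  # on the host, before starting the container:
  mkdir -p /lxc/NAME/lxc
  hostname > /lxc/NAME/lxc/hostname
  lxc-start -n NAME -f /lxc/NAME.cfg -d -o /lxc/NAME.log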

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
vRanger cuts backup time in half-while increasing security.
With the market-leading solution for virtual backup and recovery, 
you get blazing-fast, flexible, and affordable data protection.
Download your free trial now. 
http://p.sf.net/sfu/quest-d2dcopy1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Howto detect we are in LXC contener

2011-05-25 Thread Ulli Horlacher
On Thu 2011-05-26 (01:51), David Touzeau wrote:

> But i did not find any information inside the LXC contener in order to
> detect We are really in an LXC contener.

My trick is to mount cgroup into the container at /lxc/cgroup:

root@vms2:/lxc# grep cgroup flupp.fstab
/cgroup/flupp   /lxc/flupp/lxc/cgroup   none bind,ro 0 0

root@vms2:/lxc# mount | grep cgroup
none on /cgroup type cgroup (rw)

root@vms2:/lxc# lxc-console -n flupp
Type  to exit the console
root@flupp:~# mount | grep cgroup
none on /lxc/cgroup type cgroup 
(ro,relatime,net_cls,freezer,devices,memory,cpuacct,cpu,ns,cpuset)
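
With that bind mount in place, detecting the container from within a script is
a one-liner - a minimal sketch, assuming the mount point /lxc/cgroup as above:

  if grep -q ' /lxc/cgroup cgroup ' /proc/mounts; then
      echo "running inside an LXC container"
  else
      echo "not inside an LXC container (or the bind mount is missing)"
  fi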

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/

--
vRanger cuts backup time in half-while increasing security.
With the market-leading solution for virtual backup and recovery, 
you get blazing-fast, flexible, and affordable data protection.
Download your free trial now. 
http://p.sf.net/sfu/quest-d2dcopy1
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


  1   2   >