Re: [Lxc-users] seeing a network pause when starting and stopping LXCs - how do I stop this ?

2011-12-12 Thread Michael H. Warfield
On Mon, 2011-12-12 at 08:43 +0100, Ulli Horlacher wrote: 
> On Sun 2011-12-11 (19:48), Derek Simkowiak wrote:
> 
> >  The problem is not related to the setfd option.  It is caused by 
> > the bridge acquiring a new MAC address.  Libvirt already has a fix for 
> > this, and there is a patch in the works for the LXC tools.

> I am wondering why I do not have this problem: I really often start new
> containers and I do not have this patch, but I see no network freeze at all.

Look to your MAC addresses.  If the MAC address of the new container is
greater than the MAC address of the bridge, you will never see this
problem.  If it's less than the bridge MAC address, the MAC address will
change and, if you have any active network traffic, you WILL see this
problem.

The problem is old and related to a design decision wrt the bridges in
the Linux kernel.  I don't really agree with where the decision went but
I understand the problem and their concerns (IIRC).  The problem is the
bridge NEEDS a MAC address but has no "intrinsic" address.  So, they
started out by assigning the MAC address of the first device assigned to
the bridge.  That worked but had a problem.  The bridge is an
independent autonomous object within the kernel and must be independent
of the connected devices.  Sooo, what happens to the MAC address if you
delete that device from the bridge?  It can't stay the same.  Now, my
memory is very fuzzy after about a decade (and Google searches are NOT
helping my research) and anyone is welcome to correct me if I recall
incorrectly but...  The arbitrary decision was made to always use the
lowest MAC address on the bridge.  The trouble is that they did that
when they added devices.  The problem really only occurs when you delete
devices and, even then, only when the MAC of the bridge matches the MAC
of the device being deleted and there is no other device with that MAC
address (multihomed SPARCs) on that bridge.  I would have chosen
differently but this was not my shot to call and I didn't have any
strong arguments to even insert into the discussion.  They chose to
always use the lowest MAC address.  The choice was arbitrary but a
choice had to be made.  No choice was a non-starter and all choices had
some downside to them.

There are actually TWO solutions to this problem.

The first one people have already glommed onto.  You have to set the
MAC addresses of your containers to be greater than the MAC address of
the bridge.  Problems there are that 1) we don't have our own vendor
code, 2) we have to make sure we're higher than ANY vendor code, and 3)
the "locally administered bit" is NOT the highest-order bit in the MAC
address (that really would have made it too easy to fix, but that's
an even more ancient bad choice).  The good point is that we CAN auto
assign anything we WANT as long as that locally administered bit is set
and we can then set the vendor code as high as we want.  THAT works.
Use FF:FF:F2:: and you are gold.  Play with the remaining bits all you
want.
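
A minimal sketch of that, assuming an LXC 0.7-era config and an
illustrative container name (note the kernel refuses multicast
addresses, i.e. an odd first octet, on an interface, so a prefix like
fe:ff:f2 keeps the locally administered bit set while staying
assignable):

    # /var/lib/lxc/web1/config (container name and low octets illustrative)
    lxc.network.type = veth
    lxc.network.link = br0
    # locally administered bit set; sorts above every real vendor code
    lxc.network.hwaddr = fe:ff:f2:12:34:56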

There is another.  When creating the bridge, assign it a MAC address
that's very LOW.  You can do that with ifconfig or ip.  It doesn't HAVE
to have a MAC address that matches ANY interface attached to it.  That's
a misconception.  It merely has to HAVE a MAC address.  The problem with
this solution is that "technically" that MAC address is then "locally
administered" so the locally administered bit SHOULD be set (00:02::)
but then anything of the vendor codes from 00:00:00:: through 00:00:ff::
would be less than that (00:01:: is the multicast bit).  Grrr...  But
it's only 256 old vendors.  Using 00:00:00:: for the vendor code works
and is sooo unlikely to conflict with Xerox Corporation (vendor code
00:00:00) or their Ethernet cards (I know I'll get HATE MAIL from
purists on THAT comment) that it's really NOT worth worrying about.
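
A sketch of that second approach, assuming the bridge is built at boot
before anything attaches to it (device names and the address are
illustrative):

    # pin a fixed, very low MAC on the bridge at creation time
    brctl addbr br0
    ip link set dev br0 address 00:00:00:00:00:01
    brctl addif br0 eth0
    ip link set dev br0 up

Since the bridge's own MAC is then the lowest on the bridge, adding and
removing container interfaces never changes it.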

You do have to be forewarned that tinkering with MAC addresses at
runtime can be even MORE disastrous when dealing with IPv6 and autoconf
addresses.  Assigning a fixed MAC address to a bridge should only be
done at boot time and not changed after (one of the arguments against
that earlier decision).  Which points out another PITA associated with
that early decision regarding MAC addresses.  It makes a MESS out of
IPv6!

Regards,
Mike
-- 
Michael H. Warfield (AI4NB) | (770) 985-6132 |  m...@wittsend.com
   /\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
   NIC whois: MHW9  | An optimist believes we live in the best of all
 PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!




Re: [Lxc-users] Container size minimalisation

2011-12-12 Thread Zhu Yanhai
Hi,

2011/12/13 Derek Simkowiak :
>> I'm trying to compose a system, where lxc containers behave like virtual
>> hosts for a web server.
>
>     It is possible to use shared, read-only mounts between the host and
> containers.  However, you will need to carefully consider your security
> requirements and maintenance procedures.
>
>     If you are using shared mounts, then upgrading software on one container
> will update files in the other containers.  That could be a problem, because
> software updates usually include a post-install script that is executed at
> the end of an upgrade (like a postinst file in the .deb package).  These
> scripts may restart a service, update a configuration file in /etc/, or even
> prompt the user for some input.
>
>     So what will happen when you update the LAMP files on your shared mount,
> but your LXC containers don't do a corresponding server restart, or config
> migration?  Things will probably break at some point.
>
>     Also, there are several security risks to consider.  A shared /home/
> directory would also (implicitly) share everyone's .ssh/authorized_keys
> files (which will grant OpenSSH access to all your containers).  You would
> also need to be sure that all SSL and SSH host certs are independently
> managed.  Using a single certificate for many hosts is not secure.  Apache
> and OpenSSH keep their certs under /etc/, but Tomcat does not iirc.
>
>     Also, UIDs and GIDs are shared on the filesystem, so a root user in any
> container would be able to alter any file in any other container (unless
> it's a read-only mount from an external fstab file, and the "sys_admin"
> capability is dropped in your lxc.conf).  What's worse, if you have
> different /etc/passwd or /etc/group files in the containers, then group id
> "121" might be the group "admin" in one container, but something else
> entirely in another container.  The shared filesystem only stores the
> integer group ID, not the actual group membership or resulting sudo
> permissions.
>
>     Because of these complications, I have decided to give each LXC
> container its own, full filesystem.  Unfortunately that "wastes" a few
> hundred megs of disk space for each container, because the files are mostly
> redundant in /usr/, /var/, etc.  However, disk space is very cheap, and the
> value of having a standalone container is more than worth it to me.
>
>
>
>> Has anyone any experience with this technique?
>
>     I include a sample configuration for a shared filesystem with my
> container creation script.  It is disabled by default, but you can read
> through the configuration to get an idea.  You can download it from here:
>
> http://derek.simkowiak.net/lxc-ubuntu-x/
>
>
> Thanks,
> Derek Simkowiak
>
>
> On 12/12/2011 09:47 AM, István Király - LaKing wrote:
>
> Hi folks.
>
> I'm trying to compose a system, where lxc containers behave like virtual
> hosts for a web server.
>
> As next step I would like to minimize container size. My question is: what is
> the best, most elegant and fail-proof technique for that?
>
> At this moment I'm thinking of a "master container" and "slave containers"
> where the /usr folder for example in the slave containers is a mount from
> the master container. That gives a significant size drop already, from 400
> to 40 megabytes.
>
> I would like to keep the containers really minimal. 4 megabytes should be
> small enough.
>
> Let's say only some important files in /etc 
>
> Has anyone any experience with this technique?
>
> Thank you for sharing.
>
> greetings,
> István Király
>
>
> lak...@d250.hu
>
> D250 Laboratories
> www.D250.hu

It's not only an issue of disk space, it's about page cache cost.
If the underlying files are shared, the OS sees them as the same inodes,
so it allocates only one page-cache copy for each inode. However, if we
copy the files into each container, every file gets its own inode and
the kernel caches a separate copy of each one.
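
A quick way to see the effect (container paths are hypothetical): files
reached through a shared bind mount report the same inode number, so
the kernel caches them once, while copied files each get their own:

    # identical inode numbers => one page-cache copy for both containers
    stat -c %i /var/lib/lxc/master/rootfs/usr/bin/perl \
               /var/lib/lxc/slave/rootfs/usr/bin/perl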

Re: [Lxc-users] cannot start any more any container?! (partially solved)

2011-12-12 Thread Jérôme Petazzoni
Hi,

(I'm sorry if this message is not properly attached to its original 
thread; I found it on a mailing list archive while investigating a very 
similar bug, and joined the mailing list afterwards!)

We are experiencing symptoms very similar to those described by Ulli 
Horlacher:
- after a while, containers won't start anymore
- lxc-start then remains stuck in "uninterruptible sleep" state and is 
unkillable
- the only way we found so far to solve the problem is to reboot the machine

Daniel Lezcano said:

> The problem you are describing is not related to LXC but to the network
> namespace where a dangling reference in the kernel with ipv6 locks the
> network devices. When the kernel hits this bug, any process creating a
> network device or deleting one will be stuck in an uninterruptible state.
>
> If you are able to start a container with an ipv6 address
> (lxc.network.ipv6=xxx), stop it, and start it again 10 seconds later
> then that means the bug is solved in the kernel.

We do not have any reference to ipv6 in the container configuration files.
Does this mean that we should be immune to the bug?
Or is the ipv6 trick just a way to reproduce the bug with 100% accuracy?


> The key point is what Serge said, if you have this message in your console:
>
> "kernel: unregister_netdevice: waiting for ... to become free"
>
> then this is a kernel bug.

dmesg does not show that message, unfortunately.

> If you still have this problem with 2.6.38, please let me know,

... And we are running 2.6.38.

I think that the problem never appeared with lxc 0.7.4; it looks like it 
started to occur with 0.7.5 (but this is only happening randomly, so we 
can't be 100% sure).

Any advice or idea will be more than welcome!

Best regards,



[Lxc-users] lxc on MPC platform

2011-12-12 Thread Paulo Rodrigues
Hi,

I'm new to lxc and I have some doubts about it.

Is lxc platform dependent?
Could I use lxc on the MPC837x processor?

Will the checkpoint/resume commands work on this platform?

I need to build a High Availability system with an MPC837x processor and 
I'm planning to use lxc with the checkpoint / resume commands. Could 
this be implemented?

Thanks in advance.
Paulo



Re: [Lxc-users] Container size minimalisation

2011-12-12 Thread Derek Simkowiak
> I'm trying to compose a system, where lxc containers behave like 
virtual hosts for a web server.


It is possible to use shared, read-only mounts between the host and 
containers.  However, you will need to carefully consider your security 
requirements and maintenance procedures.


If you are using shared mounts, then upgrading software on one 
container will update files in the other containers.  That could be a 
problem, because software updates usually include a post-install script 
that is executed at the end of an upgrade (like a postinst file in the 
.deb package).  These scripts may restart a service, update a 
configuration file in /etc/, or even prompt the user for some input.


So what will happen when you update the LAMP files on your shared 
mount, but your LXC containers don't do a corresponding server restart, 
or config migration?  Things will probably break at some point.


Also, there are several security risks to consider.  A shared 
/home/ directory would also (implicitly) share everyone's 
.ssh/authorized_keys files (which will grant OpenSSH access to all your 
containers).  You would also need to be sure that all SSL and SSH host 
certs are independently managed.  Using a single certificate for many 
hosts is not secure.  Apache and OpenSSH keep their certs under /etc/, 
but Tomcat does not iirc.


Also, UIDs and GIDs are shared on the filesystem, so a root user in 
any container would be able to alter any file in any other container 
(unless it's a read-only mount from an external fstab file, and the 
"sys_admin" capability is dropped in your lxc.conf).  What's worse, if 
you have different /etc/passwd or /etc/group files in the containers, 
then group id "121" might be the group "admin" in one container, but 
something else entirely in another container.  The shared filesystem 
only stores the integer group ID, not the actual group membership or 
resulting sudo permissions.
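
If you do go the shared-mount route, here is a hedged sketch of the
mitigations mentioned above, using LXC 0.7-era config keys (container
name and paths are illustrative):

    # in the container's config: mounts come from an external fstab,
    # and sys_admin is dropped so the container cannot remount them
    lxc.mount = /var/lib/lxc/web1/fstab
    lxc.cap.drop = sys_admin

    # /var/lib/lxc/web1/fstab: share the tree read-only
    # (some kernels ignore "ro" on the initial bind mount and need a
    # separate remount,ro pass to enforce it)
    /srv/shared/usr  /var/lib/lxc/web1/rootfs/usr  none  ro,bind  0 0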


Because of these complications, I have decided to give each LXC 
container its own, full filesystem.  Unfortunately that "wastes" a few 
hundred megs of disk space for each container, because the files are 
mostly redundant in /usr/, /var/, etc.  However, disk space is very 
cheap, and the value of having a standalone container is more than worth 
it to me.



> Has anyone any experience with this technique?

I include a sample configuration for a shared filesystem with my 
container creation script.  It is disabled by default, but you can read 
through the configuration to get an idea.  You can download it from here:


http://derek.simkowiak.net/lxc-ubuntu-x/


Thanks,
Derek Simkowiak

On 12/12/2011 09:47 AM, István Király - LaKing wrote:

Hi folks.

I'm trying to compose a system, where lxc containers behave like virtual hosts 
for a web server.

As next step I would like to minimize container size. My question is: what is the 
best, most elegant and fail-proof technique for that?

At this moment I'm thinking of a "master container" and "slave containers" 
where the /usr folder for example in the slave containers is a mount from the master container. 
That gives a significant size drop already, from 400 to 40 megabytes.

I would like to keep the containers really minimal. 4 megabytes should be small 
enough.

Let's say only some important files in /etc 

Has anyone any experience with this technique?
  
Thank you for sharing.


greetings,
István Király


lak...@d250.hu

D250 Laboratories
www.D250.hu



Re: [Lxc-users] lxc-destroy does not destroy cgroup

2011-12-12 Thread Serge Hallyn
Quoting Gordon Henderson (gor...@drogon.net):
> On Thu, 8 Dec 2011, Arie Skliarouk wrote:
> 
> > When I tried to restart the vserver, it did not come up. Long story short,
> > I found that lxc-destroy did not destroy the cgroup of the same name as the
> > server. The cgroup remains visible in the /sys/fs/cgroup/cpu/master
> > directory. The tasks file is empty though.
> 
> And just now, I've had the same thing happen - a container failed to 
> start and it left its body in /cgroup - with an empty tasks file.

The patch I sent out on Friday should help handle that more gracefully -
it moves the cgroup out of the way so a new container can start.  You'll
need to clean the old one up by hand if you care to, though lxc could
easily provide a tool to clean it up (and move and analyze any tasks left
running in the cgroup, though I suspect in most cases there are none).
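
The by-hand cleanup is just removing the now-empty cgroup directory,
for example (cpu hierarchy path as in the original report, "master"
being the container name there):

    # rmdir only succeeds once the cgroup's tasks file is empty
    rmdir /sys/fs/cgroup/cpu/master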

-serge



Re: [Lxc-users] lxc-destroy does not destroy cgroup

2011-12-12 Thread Gordon Henderson
On Thu, 8 Dec 2011, Arie Skliarouk wrote:

> When I tried to restart the vserver, it did not come up. Long story short,
> I found that lxc-destroy did not destroy the cgroup of the same name as the
> server. The cgroup remains visible in the /sys/fs/cgroup/cpu/master
> directory. The tasks file is empty though.

And just now, I've had the same thing happen - a container failed to 
start and it left its body in /cgroup - with an empty tasks file.

This is the latest & greatest - kernel 3.1.4, lxc 0.7.5, Debian squeeze 
(kernel & lxc compiled from source)

It may well have been my own fault - trying to start a container whose 
disk image was NFS mounted and I got the error:

mount.nfs: an incorrect mount option was specified

and lxc-start hung, so I may be doing something bogus anyway, however...

(like e.g. trying to bind-mount /proc, /sys, /dev/pts, etc. into an nfs 
mounted directory?)

Gordon



Re: [Lxc-users] Container size minimalisation

2011-12-12 Thread Gordon Henderson

On Mon, 12 Dec 2011, István Király - LaKing wrote:


Hi folks.

I'm trying to compose a system, where lxc containers behave like virtual 
hosts for a web server.


As next step I would like to minimize container size. My question is, 
what is the best, most elegant and fail-proof technique for that?


At this moment I'm thinking of a "master container" and "slave 
containers" where the /usr folder for example in the slave containers is 
a mount from the master container. That gives a significant size drop 
already, from 400 to 40 megabytes.


I would like to keep the containers really minimal. 4 megabytes should be 
small enough.


Let's say only some important files in /etc 

Has anyone any experience with this technique?

Thank you for sharing.


I use something similar for my hosted PBXs (asterisk).

Each PBX container mounts a common set of directories from the host for 
the asterisk installation - e.g. the asterisk binaries, libraries, modules 
and sounds, plus a common set for apache. This is all in the fstab 
referenced from each container's config file, so they're all bind-mounted 
at container start time (read-only, too).
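
A sketch of that layout under the same assumptions (container name and
paths are illustrative), with the fstab referenced from the container's
config:

    # /var/lib/lxc/pbx1/config
    lxc.mount = /var/lib/lxc/pbx1/fstab

    # /var/lib/lxc/pbx1/fstab: one common asterisk install, read-only
    /usr/lib/asterisk    /var/lib/lxc/pbx1/rootfs/usr/lib/asterisk    none  ro,bind  0 0
    /usr/share/asterisk  /var/lib/lxc/pbx1/rootfs/usr/share/asterisk  none  ro,bind  0 0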


One thing to beware of - you can't share everything like this - e.g. /usr 
- which I initially thought I could - well, maybe I could, but doing 
things like apt-get update in one container would potentially update files 
in /usr in all containers which might not be the best thing. You could 
probably do it with care though - I simply can't be bothered.


I have to say: I'm not really that bothered about disk space - it's not a 
big deal. I do it that way as it makes it easier to update asterisk over 
all the containers. My "LAMP" type containers don't do any of this.


Gordon


[Lxc-users] Container size minimalisation

2011-12-12 Thread István Király - LaKing
Hi folks.

I'm trying to compose a system, where lxc containers behave like virtual hosts 
for a web server.

As next step I would like to minimize container size. My question is: what is the 
best, most elegant and fail-proof technique for that?

At this moment I'm thinking of a "master container" and "slave containers" 
where the /usr folder for example in the slave containers is a mount from the 
master container. That gives a significant size drop already, from 400 to 40 
megabytes.

I would like to keep the containers really minimal. 4 megabytes should be small 
enough. 

Let's say only some important files in /etc 

Has anyone any experience with this technique?
 
Thank you for sharing.

greetings,
István Király


lak...@d250.hu

D250 Laboratories
www.D250.hu



Re: [Lxc-users] How to start the network services so as to get the IP address using lxc-execute???

2011-12-12 Thread Greg Kurz
On Mon, 2011-12-12 at 11:54 +0530, nishant mungse wrote:
> Hi Geordy,
> 

Hi Nishant,

I removed Cc: to containers@ as your troubles are about using the lxc
userspace tool: lxc-users@ is THE place for seeking help.

> This script gives the IP address of a running system, but what I want is
> to get the IP address of the containers that are not started using
> lxc-start; lxc-start will call /sbin/init to init the whole system, but
> I want to use lxc-execute, which will not init the system. 
> 

I still don't understand what you intend to do... All I can say is that
using lxc-execute to partially start a container (that's what you're
doing when you run lxc-execute /etc/init.d/networking) is total
nonsense. Sorry.

> 
> Hey Greg, you said that it is possible to get the IP address without
> starting the containers; how can we do this?
> 

Your containers don't get their IP addresses in a vacuum... Either the
addresses are statically configured in some distro-specific file, or
they are assigned by an external service (DHCP most of the time).
In the first case, you can probably find the address by parsing the
appropriate file from your container's filesystem
(/etc/sysconfig/network-scripts/ifcfg-eth0 for example on Red Hat). In
the second case, it depends on the DHCP server setup... please see that
with your sysadmin.
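
For the static case, a hedged one-liner, assuming a Red Hat style guest
and an illustrative rootfs path:

    # read the configured address straight from the stopped container's rootfs
    grep '^IPADDR=' /var/lib/lxc/web1/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0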

> And one more question: how do I start the network services using
> lxc-execute?
> 

As I said before: nonsense. 

> 
> Please help me ASAP.
> 
> 

That's what several people on the list are trying to do... If you really
need help, stop asking about how to misuse lxc-execute and give some
hints about your network setup... Do your containers use static
addresses ? Do they rely on a DHCP server ? Are you sysadmin for the
DHCP server ?

Unless you provide more context, I'm afraid nobody will be able to help
you...

Cheers.

-- 
Gregory Kurz gk...@fr.ibm.com
Software Engineer @ IBM/Meiosys  http://www.ibm.com
Tel +33 (0)534 638 479   Fax +33 (0)561 400 420

"Anarchy is about taking complete responsibility for yourself."
Alan Moore.




Re: [Lxc-users] seeing a network pause when starting and stopping LXCs - how do I stop this?

2011-12-12 Thread Daniel Lezcano
On 12/12/2011 04:48 AM, Derek Simkowiak wrote:
> When there is only one bridge on the system or the bridges are not
> connected together, this option is pointless and we can set the delay to
> '0'. That enables the port instantaneously, hence the
> container can access the network immediately after the start.
>
>
>  As previously posted, this is not what causes the network "freeze" 
> with LXC.
>
>  The problem is not related to the setfd option.  It is caused by 
> the bridge acquiring a new MAC address.  Libvirt already has a fix for 
> this, and there is a patch in the works for the LXC tools.
>
>  See my post about this four days ago at this URL, which includes a 
> link to the patch and a link to a possible workaround:
>
> http://osdir.com/ml/lxc-chroot-linux-containers/2011-12/msg00029.html

Yes, I was aware of that. I was just explaining why setting the forward
delay to zero was useful.
I have queued the patch to set a higher MAC address.

Thanks
  -- Daniel

> Thanks,
> Derek
>
> On 12/11/2011 02:21 PM, Daniel Lezcano wrote:
>> On 12/08/2011 09:25 AM, Ulli Horlacher wrote:
>>> On Thu 2011-12-08 (07:39), Daniel Lezcano wrote:
>>>> On 12/08/2011 12:38 AM, Joseph Heck wrote:
>>>>
>>>>> I've been seeing a pause in the whole networking stack when starting
>>>>> and stopping LXC - it seems to be somewhat intermittent, but happens
>>>>> reasonably consistently the first time I start up the LXC.
>>>>>
>>>>> I'm using ubuntu 11.10, which is using LXC 0.7.5
>>>>>
>>>>> I'm starting the container with lxc-start -d -n $CONTAINERNAME
>>>> That could be the bridge configuration. Did you do 'brctl setfd br0 0' ?
>>> I have this in my /etc/network/interfaces (Ubuntu 10.04):
>>>
>>> auto br0
>>>  iface br0 inet static
>>>  address 129.69.1.227
>>>  netmask 255.255.255.0
>>>  gateway 129.69.1.254
>>>  bridge_ports eth0
>>>  bridge_stp off
>>>  bridge_maxwait 5
>>>  post-up /usr/sbin/brctl setfd br0 0
>>>
>>>
>>> I have never noticed a network freeze and I really often start/stop LXC
>>> containers. Does this "brctl setfd br0 0" prevent the freeze? I do not
>>> remember why I have added it :-}
>> The setfd delay is used when there are several bridges set up on the
>> system, to detect whether packets are looping across the bridges and to
>> learn the spanning tree topology. AFAIR, a packet is transmitted on the
>> new port and the bridge waits for the forward delay to see if the packet
>> goes out of the bridge and comes back from another port. During this
>> delay, the port is not enabled.
>>
>> When there is only one bridge on the system or the bridges are not
>> connected together, this option is pointless and we can set the delay to
>> '0'. That enables the port instantaneously, hence the
>> container can access the network immediately after the start.

