>-Original Message-
>From: wang yao [mailto:yaowang2...@gmail.com]
>Sent: Monday, November 18, 2013 5:09 AM
>To: Jäkel, Guido
>Cc: lxc-users@lists.sourceforge.net
>Subject: Re: [Lxc-users] Bonding inside LXC container
>
>Hi Jake,
>
>First of all, thank you for yo
Dear Yao,
as I understand, you want to bond two physical interfaces of the host hardware
and use the bond inside a container.
eth0--[phys]--eth0--+--bond0
eth1--[phys]--eth1--/
Because no other -- neither the host nor another container -- may use one of
the NICs in addition, I w
me at an appropriate low value?
* Is the Host connected to a Switched Network? What did you observe here with
respect to the used MACs / IPs?
Greetings
Guido
>-Original Message-
>From: Andreas Laut [mailto:andreas.l...@spark5.de]
>Sent: Friday, October 11, 2013 10:41 AM
Dear Andreas,
please substantiate your term "start a lxc with multiple IPs" and the line "If
we are using only one IP for LXC, all is fine": What kind of network setup do
you use? Is it e.g. a bridge on the lxc host and veths on the containers?
A guess might be that you have a MAC address cla
Dear Kaj,
You've stepped into a non-trivial trap. It will work either if your mount path
inside the container isn't 'mnt' or if you use lxc.pivotdir to set it to
something other than its default 'mnt'. To get rid of this problem, I'm using an
argument like '-s lxc.pivotdir=$CONTAINER' in my star
Dear Andreas,
Although this should be possible, from an abstract viewpoint it's better to
mount the NFS source on the host and propagate it via a bind mount to the
container, particularly if you want to use this NFS source in more than one
container on this host.
Independent from that I wan
>Would injecting tcp rst really be necessary? In my test, doing "ip link del"
>on the host side of the interface ALWAYS succeeds, no matter
>what state the guest container's interface is in.
>
>Serge, do you have the particular commit ids for "lxc.network.script.down"
>support? Backporting that w
>Quoting Jäkel, Guido (g.jae...@dnb.de):
>> Hi,
>>
>> I want to contribute an observation while playing around with my "empty
>> plain vanilla" container template: The test cycle is to start it,
>>open an ssh terminal session to it, leave it idle and re
Hi,
I want to contribute an observation while playing around with my "empty plain
vanilla" container template: The test cycle is to start it, open an ssh
terminal session to it, leave it idle and regularly shut down the container.
Now, if the containers eth0 is brought down by the shutdown, afte
Hi Serge,
>> to assist in avoiding such problems I would propose to introduce macro
>> expansion (of the own tags but also by incorporating the
>environment variables) into the configuration argument parser and to provide
>some useful basics like the container name. Then one may
>use e.g.
>>
>>
Dear Serge,
to assist in avoiding such problems I would propose to introduce macro expansion
(of the configuration's own tags, but also by incorporating environment
variables) into the configuration argument parser, and to provide some useful
basics like the container name. Then one may use e.g.
lxc.h
>yes and it does this. The point is that lxcbr0 is not tied to any
>physical nic. So the first container you start, however high the
>macaddr is, lxcbr0 takes its mac. If the next container gets a
>lower macaddr, lxcbr0's macaddr drops.
This lxcbr0 is special to Ubuntu, right? And if not to a p
Dear Hans,
>Setting it to the MAC of the outgoing NIC: is that safe, or can it cause any
>problems?
It is even mentioned on the page you cited:
http://backreference.org/2010/07/28/linux-bridge-mac-addresses-and-dynamic-ports/
>The MACs of the veth's are automatically set by lxc so what do you mea
Dear Hans,
this is a FAQ here but -- as you already found -- not fundamentally caused by
LXC. The software bridge will always choose the lowest MAC of the attached
devices, or hold an explicitly assigned one (from the set of currently attached
devices) as long as possible. In your case you may either set
> Ok, who wants to be co-administrator of the mailing list ?
Tamas and Mike
Dear David,
this will require persisting the current "power state" of a container by some
kind of marker. A tricky way is to mark some container-related file, e.g. to
(mis)use the sticky bit of the container's lxc configuration file, or to put
some marker file into the container's rootfs.
T
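The sticky-bit trick can be sketched in shell. This is a hypothetical sketch only: a temp file stands in for the real target, which would be something like the container's lxc configuration file (path is an assumption).

```shell
# Hypothetical sketch of the sticky-bit "power state" marker idea.
# A temp file stands in for e.g. /var/lib/lxc/NAME/config (assumed path).
cfg=$(mktemp)

chmod +t "$cfg"              # set the marker: container is (or was) running
if [ -k "$cfg" ]; then       # -k tests the sticky bit
    state=running
else
    state=stopped
fi
echo "$state"

chmod -t "$cfg"              # clear the marker on a clean shutdown
rm -f "$cfg"
```

The charm of the trick is that the marker survives a host crash, since it lives in the filesystem metadata of a file that exists anyway.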
>TBH, I prefer the icon on the right, with boxes inside the monitor.
+1
Or what about something with a container -- like
http://serverservice.sytes.net/wp-content/uploads/2012/06/lxc11.png
... and if you don't like to deal with changing spanning trees or
broadcast/multicast storms, I strongly recommend letting only *one* device do
any routing for all - for the lxc host and for all other machines in the
network. Of course, this one is the (core) router.
Guido
>-Original Message-
>From
Dear Mike,
Don't put an IP on the second (or further) bridges. Think of this bridge's
configuration slot as an additional virtual interface card connecting your
host's IP stack with this network. That said, you will not be surprised that you
got two network interface devices and two default ro
by rklogd)
[...]
Sincerely
Guido
>-Original Message-
>From: Miroslav Lednicky [mailto:miroslav.ledni...@fnusa.cz]
>Sent: Thursday, January 24, 2013 11:11 AM
>To: Jäkel, Guido
>Cc: 'lxc-users@lists.sourceforge.net'
>Subject: Re: [Lxc-users] Syslog
>
>
Dear Benoit,
>Serge Hallyn suggested that 7b35f3d should fix my problem.
I noticed that.
>Thanks for the tip. a careful analysis of netstat does not lead to think I
>have remaining container connections.
I'm not using physical interfaces but instead the default (veth and a number
of unkow
Dear Miroslav,
please ensure that the syslog daemon within all containers doesn't log the
kernel logfile source. If you "drain" this source with more than one syslog
process, the log messages will spread over the different syslog files.
If you state what concrete syslog daemon you'll use, I may hav
Dear Benoit,
Does the container bring down the interfaces on shutdown? Please check (e.g.
with netstat) on the host whether there are pending connections after shutdown
of the containers.
regards
Guido
>-Original Message-
>From: Benoit Lourdelet [mailto:blour...@juniper.net]
>Sent: Tuesday, Ja
>On the other hand, I *do* also feel that any services on the containers
>ought to be robust to unavailability, so that startup order should not
>matter.
Dear Serge,
yes - it's Xmas time, bells are ringing and all is warm and bright. ;)
Unfortunately, it matters to the greater part of software.
Hi all,
here are my 5ct on auto start and start order: Because I'm using a farm of LXC
hosts that my containers may be spread over, I also need to persist the
"preferred host" of a container. This is currently stored in a separate
configuration file. Because this information should be easy accessi
>(1) I'm not sure you can do nfs-mount inside an lxc container
Yes, you can, for the simplest solution.
But you can also mount it on the host and propagate it (or any subtree, e.g.
for a concrete container) via a bind mount to the container. If you have a lot
of containers, this will reduce th
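A sketch of that host-side NFS plus bind-mount layout; the server name, export, and paths are placeholders, not taken from the thread:

```
# On the host, mounted once for all containers (assumed names):
#   mount -t nfs fileserver:/export /srv/nfs
# In each container's config, bind only the subtree that container needs:
lxc.mount.entry = /srv/nfs/data srv/data none bind 0 0
```

This way only the host speaks NFS; the containers see an ordinary local directory.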
Dear Dan,
As a workaround you may use the following perl script written by Ullrich
Horlacher. It also demonstrates the basic idea of where to get a container's
uptime from. Here he uses a well-known file, but I think one may also use the
information related to the container's init process.
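A minimal shell sketch of the init-process idea (this is not Horlacher's script): take the start time of the container's init process from /proc and subtract it from the host uptime. Obtaining the init PID via lxc-info is an assumption; the /proc arithmetic is standard.

```shell
# Sketch: uptime of a container, derived from its init process' start time.
# The PID would come from something like: lxc-info -n NAME -p  (assumption)
container_uptime() {
    init_pid=$1
    # field 22 of /proc/<pid>/stat is 'starttime' in clock ticks since boot
    start_ticks=$(awk '{print $22}' "/proc/$init_pid/stat")
    hertz=$(getconf CLK_TCK)                       # clock ticks per second
    host_up=$(awk '{print int($1)}' /proc/uptime)  # host uptime in seconds
    echo $(( host_up - start_ticks / hertz ))
}

container_uptime 1    # for PID 1 this is (roughly) the host uptime itself
```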
>So what happens with the container's when the Host OS gets an upgrade that
>includes a new kernel? Are the containers stil
>reachable, runable, etc? I guess what I'm asking is what happens?
Dear Brian,
a new kernel will not be used until you reboot the host. From then on, after
an os upg
>perhaps just using tempnam suffices.
Or the process id? To use something unique, but still related ...
Dear Chris,
>I think many of us have been caught out by this feature.
No need to get this number rising, right? ;)
>I now set all my config files to use /mnt/.lxc/NAME as the lxc.pivotdir entry
>for a container named NAME.
Do you choose the NAME suffix because in addition there's a possib
Dear developers,
I want to propose changing the default value of the temporary lxc pivot
directory from 'mnt' to '.lxc-mnt' or something "unusual" like that:
Right now, it takes me about an hour to trace down why I can successfully bind
mount some resource from the host to the container to
>>> I have a set up where there are multiple short lived containers (sharing the
>>> same IP address) in a host.
>
>>Why? Don't do that.
>
>I agree...what is your goal?
As others said, this is very "free-spirited" and typically only used in a high
availability cluster setup or other failover scen
>I know this is digression but I wondered if you could expand on this?
>
>Perhaps if I explained our use case and tell me if I'm doing the right thing?
>
>1. We create a new container
>2. We want to bootstrap it with a puppet script (apt-get install puppet &&
>puppet apply script.pp)
>
>We
>>Executable name:
>>I would prefer several almost identical actions to be implemented in one
>>program with options instead of several almost identical programs. So I
>>say lxc-shutdown -r than lxc-reboot. But I have no problem with
>>lxc-shutdown doing -r based on argv0 as well as getopts. Everyo
>Executable name:
>I would prefer several almost identical actions to be implemented in one
>program with options instead of several almost identical programs. So I
>say lxc-shutdown -r than lxc-reboot. But I have no problem with
>lxc-shutdown doing -r based on argv0 as well as getopts. Everyone c
Dear Fajar,
I just googled http://www.makelinux.net/man/7/P/power-status-changed .
There it's written:
This event is not handled in the default Upstart configuration.
For control-alt-delete, the corresponding sentence states:
In the default Upstart configuration handling of
>After some experiments, upstart ignores SIGPWR, but still listens to
>SIGINT, and killing the process from the host works. So modifying the
>container's control-alt-delete.conf to run "shutdown -h" instead of
>"shutdown -r" can let the host tell the guest to shutdown cleanly.
Dear Fajar,
becaus
>Can the host send a signal to the init's container? If yes, sysvinit
>responds to SIGINT. Does upstart behave the same (e.g. process
>control-alt-delete.conf when the signal is received)? It's set to
>reboot by default, but perhaps there's some other signal than we can
>use for shutdown?
SysVIni
Dear Arun,
You may also use a DHCP environment to set up the containers' network IP,
routing, DNS servers etc. This approach will ease any changes of the network
infrastructure and will help you make your templates more generic. For that,
you have to assign a fixed MAC address to the contain
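For illustration, a hedged config fragment in the key style of the LXC versions discussed on this list; the bridge name and MAC value are assumptions:

```
lxc.network.type   = veth
lxc.network.link   = br0
lxc.network.flags  = up
# fixed, locally administered MAC so the DHCP server can pin a static lease
lxc.network.hwaddr = 02:00:00:aa:bb:01
```

With the MAC pinned, a host reservation on the DHCP server then hands the container the same IP, routes and DNS servers on every start.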
Dear Patrick,
As I understand it, /dev/null isn't writable in your container. That's
definitely a wrong configuration.
Please check that there is a real device node for /dev/null (and the others) in
your container and that you have it (and the others) in the lxc device access
control list (lxc.cgroup.device
Dear Michael,
>I always hate replying to my own posts but I have stumbled onto some
>interesting clarification as I've continued to play with this...
>
>Below in-line.
> [...]
Again a well-done investigation. For everyone who doesn't have the time to
carefully read these threads, I want to sum the
Dear Nishant,
why do you not use DHCP with static configurations for these hosts?
If you really don't want to rely on DHCP, you may use something like
lxc-start ... -n $CONTAINERNAME -s
lxc.network.ipv4=193.163.195.${CONTAINERNAME#container}
for the containerN startup.
Guido
>-Origin
>Hi all,
>
>I am really very happy about the goal to get a virtualization solution
>mainline, however, there are quite a few things I really hate
>about LXC right now, and this is one:
Dear Christian,
because I'm using Gentoo too, I'll try to support you via direct mail
communication.
Guido
---
>> 4. Which signal? SIGINT? SIGPWR? Both?
>
>Does only work for init based systems, not for upstart, like Ubuntu!
Dear Derek,
Sending a SIGINT to init will invoke the ctrlaltdel entry of /etc/inittab.
A SIGPWR will (in absence of /etc/powerfail) call the powerfail entry. In a
common s
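For reference, the corresponding /etc/inittab entries on a sysvinit system might look like this; the exact shutdown commands and entry ids are assumptions:

```
# run when init receives SIGINT (ctrl-alt-del)
ca:12345:ctrlaltdel:/sbin/shutdown -r now
# run when init receives SIGPWR and no other power handling intervenes
pf::powerfail:/sbin/shutdown -h now
```

So from the host, `kill -PWR <container-init-pid>` would reach the powerfail entry, and `kill -INT` the ctrlaltdel entry.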
>Looks like the cheap and easy to get OUI is 36 bits long, leaving only
>12 bits for the user.
>
>Is 4096 possible unique MAC's enough?
I appreciate the development to let LXC assign a "usable" random MAC with an
adequate prefix in the default case, because this will fit most users
and
Dear Jun,
don't assign anything at the IP level to eth0, just bring it up. But assign the
IP ...202 to the bridge.
You may imagine this as if the bridge (as a layer 2 device) connects at this
level to eth0. But your host's layer 3 IP stack has to be attached to the
bridge in the same way as the n
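As an illustration of that layout; device names and the address are placeholders (run as root):

```
brctl addbr br0
brctl addif br0 eth0
ip link set eth0 up                   # eth0 stays a bare layer 2 port, no IP
ip addr add 192.0.2.202/24 dev br0    # the host's IP lives on the bridge
ip link set br0 up
```

The containers' veth devices then get enslaved to br0 the same way eth0 is, and the host is just one more station on that bridge.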
> I think there is about 80% overlap between the two projects but
>enough differences to be interesting. I'll take a closer look at your
>script looking for ideas I may have missed, and I invite you to do the same.
@Derek: well-spoken.
@Daniel & Serge: Is there already something like a Wiki
>Problem solved.
>/dev/rtc is only used to read the time.
>To write the date and time the ioctl function settimeofday is used. To
>prevent this you have to drop the capability sys_time
Dear sfrazt,
Good job! Could you figure out whether there are "unwanted" side effects if one
drops the sys_time c
Dear Daniel,
What about adding little hints to such error messages; something like
"Too many open files - failed to inotify_init. You may have to increase
the value of fs.inotify.max_user_instances"
And maybe these possible traps should be pointed out in the man page.
Another trap is
>Hi
>
>Going back through the list, I couldn't find whether this has been resolved.
>
>I had a similar problem today with a little over 40 containers:
>
># lxc-start -n gary
>lxc-start: Too many open files - failed to inotify_init
>lxc-start: failed to add utmp handler to mainloop
>lxc-start: mainl
Hi all,
after having a private discussion with Serge E. Hallyn and then inspired by
the posting of Matto Fransen on the thread "read only rootfs" I was able to
realize an entry from my wish list, which may be useful for others, too:
To have a (read-only) access limited to "it's"
>> is lxc-start thread-safe, i.e. may I start up different containers in
>> parallel? Do I have to apply an individual value for 'lxc.rootfs.mount',
>> e.g. by use of the process id or 'mktemp'? Or something else, more?
>
>Ah, you're mixing apples and oranges here. Starting up two containers
>in parall
Hi all,
is lxc-start thread-safe, i.e. may I start up different containers in parallel?
Do I have to apply an individual value for 'lxc.rootfs.mount', e.g. by use of
the process id or 'mktemp'? Or something else, more?
thanks
Guido
Is there a way to assign veth name(visible from the host) to be the same
each time the container boots ?
At the moment it is a random value like vethFFzyq2
>>>
>>>Yes there is:
>>
>> It's in the man page, but it's not written in bold letters ;)
>>
>man 5 lxc.conf
>I wonder why it is not
>>Is there a way to assign veth name(visible from the host) to be the same
>>each time the container boots ?
>>At the moment it is a random value like vethFFzyq2
>
>Yes there is:
It's in the man page, but it's not written in bold letters ;)
Dear Aurélien
>Restarting LXC containers after a panic, power-fail or anything else is not
>the concern of basic LXC; it is related to
>your host init script or your HA stuff (the guest could have been restarted
>somewhere else) or things like Ganeti, Openstack...
I fully agree. But by the lack of i
Ulli>My lxc meta-script creates /lxc/hostname inside the container at startup:
As a workaround my meta-script does something similar to be able to restart the
appropriate containers in case of a panic, powerfail or similar on the
supporting host. But IMHO it's in the concern of basic lxc and not
Hi all,
something related to the "Howto detect we're a LXC Container" thread is the
question: "Howto detect from inside a container the name (or something
equivalent) of the machine we're hosted on?" This might be of interest for
administration-level scripts on setups like the one I'm going to use: It'
Hi all,
I'm going to use LXC as a lightweight instrument to partition a modern,
well-equipped blade server center into discrete, handy units to run a set of
business applications in. Because it's a "friendly environment", I don't have
to focus on jail security. But I want to use the features of th
>Any hints?
Dear Arkaitz,
take a look at the switch and the spanning tree settings for the port. On
Cisco, for instance, there will be a notable connection lag on topology changes
if a link isn't configured to use a certain "fast" option.
Guido
---
Dear Gus,
> > brctl show
> bridge name bridge id STP enabled interfaces
> br0 8000.00183704c188 no eth0
> vethNFweOZ
> vethU0zyYA
Why STP is
Hi all,
I want to agree with Stuart's statement:
> Perhaps we should be asking first, should an ncurses control panel be part of
> LXC or a separate project?
In my opinion, and in comparison to similar projects, there should be a clear
separation between any higher-level (GUI) tool and the basic le
>But I do suggest don't use the same thing xen or vmware or openvz or
>hyper-v etc uses, wherever there is any known consistent usage.
Dear Brian,
I completely agree with your argument. But this simply leads to the conclusion
that someone(tm) has to start efforts to register a MAC range for LXC, as it
>Hi,
>i have tried to find an rfc about this but have failed, instead, the
>only (serious/credible) documentation i could find was
>http://wiki.xen.org/xenwiki/XenNetworking#head-d5446face7e308f577e5aee1c72cf9d156903722
> ,
>so i updated the script accordingly, here is the updated patch.
>again,
D
Dear John,
> - generate random mac address for the guest so it gets always the same
> lease from a dhcp server
You suggest doing this by
macaddr=$(echo -n 00; hexdump -n 5 -v -e '/1 ":%02X"' /dev/urandom)
I think this is a "little bit too random". The German Wikipedia says at
http
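A hedged variant of John's one-liner that keeps the randomness but sets the first octet to 02, a locally administered unicast address, so the result cannot collide with any vendor-registered OUI:

```shell
# 02:xx:... = locally administered + unicast; safe for made-up MACs
macaddr=$(printf '02%s' "$(hexdump -n 5 -v -e '/1 ":%02X"' /dev/urandom)")
echo "$macaddr"
```

This trades 8 of the 48 random bits for the guarantee that the address lives in the locally administered space.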
Hi all,
I found that the *host's* devpts filesystem is allowed to be remounted
read-only by the shutdown scripts of the client in a container
Before shutdown of the client:
host # grep devpts /proc/mounts
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=00
>Hi,
>
>i was facing a similar problem with ipv6 with a 2.6.36 kernel.
What's the similarity?
>Bug was corrected in the 2.6.36-rc4.
>But, maybe it's not the same?
>
>What's the kernel version?
2.6.37-gentoo
Hi all.
I just started this week to explore LXC (0.7.3) on Gentoo as a host. Solving
the first puzzles with a complex network setup, and with inspirations from the
lxc-gentoo script, yesterday I got my first two Gentoo containers to boot
properly to the login.
But at the first attempt of a grac