On 04/27/2017 12:49 AM, Ganesh Sathyanarayanan wrote:
Hi All,
This is similar to a post by someone named John back in Aug 2010. He was
trying to run Xorg in an LXC container, which required access to /dev/mem.
I am trying to run a custom/proprietary application that needs the same
(access to /dev/mem).
I have a
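For a privileged container, the usual way to expose /dev/mem is to whitelist
it in the devices cgroup and bind-mount the host node into the container's
/dev. A rough sketch using the legacy config keys from this era (/dev/mem is
character device 1:1; adjust as needed):

lxc.cgroup.devices.allow = c 1:1 rwm
lxc.mount.entry = /dev/mem dev/mem none bind,optional,create=file 0 0

The application also needs CAP_SYS_RAWIO to actually open /dev/mem, so
sys_rawio must not appear in any lxc.cap.drop line.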
On 04/05/2017 06:45 AM, Serge E. Hallyn wrote:
Am I correct in assuming LXC *does* provide a means to enable RT
The kernel has hardcoded checks (which are not namespaced) that
if you are not (global) root, you cannot set or change the rt
policy. I suspect there is a way that could be safely relax
On 03/31/2017 10:16 AM, Peter Steele wrote:
As you can see, the sched_setscheduler() call fails with an EPERM
error. This same app runs fine on the host.
Ultimately I expect this app to fail when run under my container since
I have not given the container any real time bandwidth. I had hoped
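For what it's worth, util-linux's chrt is a quick way to reproduce this
without the custom app, since it requests a real-time policy much like
sched_setscheduler() does. Run inside the container, it should fail with the
same EPERM until RT bandwidth is granted:

# chrt -r 10 sleep 1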
On 03/28/2017 07:55 AM, Serge E. Hallyn wrote:
Is this using a user namespace or not?
I am not using a user namespace. This is intended to be a privileged
container with everything running as root. Although I am planning on
using a custom CentOS template I've created, I can reproduce the prob
We have a need to create real time threads in some of our processes
and I've been unable to configure an LXC container to support this.
One reference I came across was to set a container's real time
bandwidth via the lxc.cgroup.cpu.rt_runtime_us parameter in its config
file:
lxc.utsname = t
0.6
under CentOS 7.2. The container is being created using a custom CentOS
7.2 image.
Thanks for the help.
Peter Steele
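For reference, the two pieces involved look roughly like this, assuming
cgroupfs is mounted under /sys/fs/cgroup and the containers sit under an
"lxc" parent cgroup (paths and numbers are only illustrative). The parent
cgroup needs a non-zero RT budget before any child can claim one:

# echo 100000 > /sys/fs/cgroup/cpu/lxc/cpu.rt_runtime_us

and the container then takes its share in its config:

lxc.cgroup.cpu.rt_runtime_us = 50000

Child cgroups default to an RT runtime of 0, and the children's budgets
cannot exceed the parent's, which is why RT threads fail with EPERM until a
budget is explicitly handed down.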
On 01/12/2016 07:03 PM, Fajar A. Nugraha wrote:
On Tue, Jan 12, 2016 at 9:29 PM, Peter Steele wrote:
On 01/12/2016 05:59 AM, Fajar A. Nugraha wrote:
On Tue, Jan 12, 2016 at 8:40 PM, Peter Steele wrote:
I should have added that I have no issue running our software on a single
EC2 instance
The best alternative is probably libvirt lxc. That's what we use. We had
started trying lxc/lxd under CentOS 7 and hit various issues. The
project has been tabled for now and we're sticking with libvirt lxc...
On 04/20/2016 10:05 AM, Saint Michael wrote:
That is what I am afraid of.
On Wed, A
As I've explained in this mailing list before, I create my own custom
CentOS template that has some history, being initially used as a
template for KVM based virtual machines, then OpenVZ based containers,
then libvirt-lxc containers, and now finally we're tackling LXC. One
issue I've noted is
On 01/12/2016 04:41 PM, Mike Wright wrote:
Please keep it on list. I'd like to see the solution unfold. I've had
a bit of trouble following various Flockport write-ups and every
additional piece of info helps me better understand the vagaries of
advanced networking.
I'm going dark for a bit t
On 01/12/2016 01:34 PM, brian mullan wrote:
All I did was install/configure PeerVPN on say server1 and server2 and
make sure they
connected.
While logged into each of your servers you should then be able to ping
10.x.x.x IP address of the other PeerVPN member server(s) ... assuming
you are us
On 01/12/2016 06:35 AM, brian mullan wrote:
Peter
On AWS unless you are using VPC I don't think you can use secondary
addresses because AWS won't route any of that traffic. Also with your
addresses routing would be affected by the split-horizon problem with
the same network on 2 sides.
You
On 01/12/2016 05:59 AM, Fajar A. Nugraha wrote:
On Tue, Jan 12, 2016 at 8:40 PM, Peter Steele wrote:
I should have added that I have no issue running our software on a single
EC2 instance with containers running on that instance. We can assign
multiple IPs to the instance itself, as well as to
ly to talk to AWS directly.
Thanks.
Peter
On 01/11/2016 06:55 PM, Fajar A. Nugraha wrote:
On Tue, Jan 12, 2016 at 6:31 AM, Peter Steele wrote:
From what I've read, I understand that Amazon has implemented some
special/restricted behavior for the networking stack of EC2 instances. The
I first brought this issue up several weeks ago and have just got back
to the work where I originally ran into this problem. The scenario is
simple enough:
- Create two EC2 instances running CentOS 7.1
- Configure these instances to use bridged networking
- Create a LXC container running under
On 12/10/2015 06:13 AM, Peter Steele wrote:
On 12/09/2015 06:43 PM, Serge Hallyn wrote:
Ok, systemd does behave differently if it shouldn't be able
to create devices. If you add
lxc.cap.drop = mknod sys_rawio
to your configs does that help?
This did not help. I took it a step fu
On 12/09/2015 06:43 PM, Serge Hallyn wrote:
Ok, systemd does behave differently if it shouldn't be able
to create devices. If you add
lxc.cap.drop = mknod sys_rawio
to your configs does that help?
This did not help. I took it a step further and did an install with the
lxc capabilities c
On 12/09/2015 01:56 PM, Peter Steele wrote:
On 12/09/2015 11:46 AM, Peter Steele wrote:
On 12/09/2015 10:18 AM, Serge Hallyn wrote:
I suppose just looking at the 'capsh --print' output difference for the
bounding set between the custom containers spawned by lxc and
libvirt-lxc
On 12/09/2015 11:46 AM, Peter Steele wrote:
On 12/09/2015 10:18 AM, Serge Hallyn wrote:
I suppose just looking at the 'capsh --print' output difference for the
bounding set between the custom containers spawned by lxc and
libvirt-lxc could
be enlightening.
Here's the dif
On 12/09/2015 10:18 AM, Serge Hallyn wrote:
This is the kind of thing I'd expect when using cgmanager or lxcfs,
but not with straight lxc+cgfs. Can you show what /sys/fs/cgroup tree
and /proc/1/cgroup looks like in a working container?
As requested:
# ll /sys/fs/cgroup   (top level only)
total 0
On 12/09/2015 09:43 AM, Serge Hallyn wrote:
And "the systemd errors" is the ssh-keygen ones only? Or is there
more?
Various services are being impacted, for example, I saw these errors in
a run yesterday:
Dec 7 13:52:00 pws-vm-00 systemd: Failed at step CGROUP spawning
/usr/bin/kmod: No suc
On 12/08/2015 08:36 PM, Serge Hallyn wrote:
What do you mean by "when the server comes up"? If you bring up the
server, let it set for 5 mins, then start them, they still fail?
What I meant here was that when my server boots, it launches our
management software, which in turn launches the cont
On 12/08/2015 02:21 PM, Peter Steele wrote:
In this case of course the containers are using the stock downloaded
CentOS 7 image instead of my custom image. I was unable to reproduce
the systemd error through multiple start/stop tests of my
containers. They always started up without any
On 12/08/2015 11:10 AM, Peter Steele wrote:
On 12/08/2015 08:00 AM, Serge Hallyn wrote:
Ok, can you change the launch command in the scripts to
lxc-start -n $containername -L /tmp/$containername.cout -l trace -o
/tmp/$containername.dout -- /sbin/init log_target=console
log_level=debug
The
On 12/08/2015 08:00 AM, Serge Hallyn wrote:
Ok, can you change the launch command in the scripts to
lxc-start -n $containername -L /tmp/$containername.cout -l trace -o
/tmp/$containername.dout -- /sbin/init log_target=console log_level=debug
The console output will go into the .cout file and l
On 12/07/2015 07:49 AM, Serge Hallyn wrote:
Quoting Peter Steele (pwste...@gmail.com):
I'm actually not (yet) running lxcfs. My understanding was that it
isn't absolutely required but it does offer several benefits. I'd
planned to tackle lxcfs after getting things running withou
On 12/04/2015 01:38 PM, Serge Hallyn wrote:
My guess is that the no such file or directory is talking about a
cgroup dir. what does /proc/1/cgroup in the container show? Make sure
to run the latest lxcfs on the host, as that's needed because
systemd moves itself to name=systemd:/init.scope cgrou
On 12/03/2015 08:42 PM, Fajar A. Nugraha wrote:
lxc.autodev = 1
That is not common.conf (though I'm not sure whether it matters)
I included this early on when I was encountering the funky udev issue.
It didn't help, but I kept it in place, admittedly for no good reason.
lxc.kmsg = 0
Neither is
I'm seeing these messages on some of my containers during their initial
start-up:
systemd: Failed at step CGROUP spawning /usr/sbin/sshd-keygen: No such
file or directory
systemd: sshd-keygen.service: main process exited, code=exited,
status=219/CGROUP
systemd: Failed to start OpenSSH Server
On 12/03/2015 11:27 AM, Neil Greenwood wrote:
On 3 December 2015 17:10:29 GMT+00:00, Peter Steele wrote:
I can't really use the downloaded template for our rootfs, as I
explained earlier. We already have a process that generates a custom
centos tar ball with the specific set of packages
On 12/03/2015 07:25 AM, Fajar A. Nugraha wrote:
On Thu, Dec 3, 2015 at 9:27 PM, Peter Steele wrote:
On 12/02/2015 08:47 PM, Fajar A. Nugraha wrote:
centos template -> download lots of packages (i.e. RPM) one by one
using yum, and then install i
On 12/02/2015 08:47 PM, Fajar A. Nugraha wrote:
On Thu, Dec 3, 2015 at 1:14 AM, Peter Steele wrote:
On 12/02/2015 07:23 AM, Fajar A. Nugraha wrote:
On Wed, Dec 2, 2015 at 9:49 PM, Peter Steele wrote:
On 1
On 12/02/2015 03:01 PM, Stéphane Graber wrote:
On Wed, Dec 02, 2015 at 02:56:51PM -0800, Peter Steele wrote:
From the searches I've done this seems to be a known issue but I'm
not clear what the solution is. I'm using LXC 1.1.5 under CentOS 7.1
and created a container using
From the searches I've done this seems to be a known issue but I'm not
clear what the solution is. I'm using LXC 1.1.5 under CentOS 7.1 and
created a container using
# lxc-create -t centos -n test1
This completed without issues. I followed this up with a start command:
# lxc-start -n test1
lx
On 12/02/2015 10:42 AM, Peter Steele wrote:
On 12/02/2015 10:29 AM, Thomas Moschny wrote:
2015-12-02 19:14 GMT+01:00 Peter Steele :
I am using the version 1.0.7 RPMs that are available on EPEL. I
assume there
are no RPMs available for 1.1? We tend to use binary versions of the
third
party
On 12/02/2015 11:39 AM, Saint Michael wrote:
I didn't explain myself well.
You need an Ubuntu 14.04 server with nothing else running but LXC.
100% of the real work gets done via CentOS containers. It works
perfectly and it is rock solid.
The only thing on top is the latest available kernel
3.19.0-33
On 12/02/2015 10:38 AM, Saint Michael wrote:
In my unauthorized opinion, Ubuntu has a much more solid LXC than the Red
Hat derivatives. That is why I run my apps in Fedora containers and my
LXC servers in Ubuntu. The Fedora management does not quite understand
that LXC is the only possible game, not
On 12/02/2015 10:29 AM, Thomas Moschny wrote:
2015-12-02 19:14 GMT+01:00 Peter Steele :
I am using the version 1.0.7 RPMs that are available on EPEL. I assume there
are no RPMs available for 1.1? We tend to use binary versions of the third
party packages we've included in our system but I
On 12/02/2015 07:23 AM, Fajar A. Nugraha wrote:
On Wed, Dec 2, 2015 at 9:49 PM, Peter Steele wrote:
On 12/01/2015 08:25 PM, Fajar A. Nugraha wrote:
Is there a reason why you can't install a centos7 container using
the download template? It
On 12/02/2015 08:54 AM, Peter Steele wrote:
On 12/02/2015 08:09 AM, Saint Michael wrote:
I could not find on Google any mention of Red Hat killing LXC on
Libvirt. Care to elaborate?
Here's the first reference I came across a few months ago:
https://access.redhat.com/articles/1365153. Th
On 12/02/2015 08:09 AM, Saint Michael wrote:
I could not find on Google any mention of Red Hat killing LXC on
Libvirt. Care to elaborate?
Here's the first reference I came across a few months ago:
https://access.redhat.com/articles/1365153. There's no date indicated
here so I really don't kn
On 12/01/2015 08:25 PM, Fajar A. Nugraha wrote:
Is there a reason why you can't install a centos7 container using the
download template? It would've been MUCH easier, and some of the
things you asked wouldn't even be an issue.
Well, there's a bit of history involved. Originally we were building
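For anyone following along, the download template Fajar mentions is invoked
like this (the container name is just an example):

# lxc-create -t download -n test1 -- -d centos -r 7 -a amd64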
On 11/30/2015 06:38 PM, Serge Hallyn wrote:
Hi Peter,
my guess is that udev is starting because the container has
the capabilities to start. If you look at stock containers
created using the lxc templates, they tend to include files
like /usr/share/lxc/config/common.conf, which has
lxc.cap.drop
This message is a bit long and I apologize for that, although the bulk
is cut-and-paste output. I'm migrating our container project from
libvirt-lxc under CentOS 7.1 to LXC and I'm seeing some errors in
/var/log/messages that I don't see in libvirt-lxc. The LXC containers I
am creating are base
the dropped
packets virtually disappeared. Unfortunately, the original arp table
problem I reported in this thread reappeared, even though we're now
using the mainline 4.2 kernel. Apparently they fixed the bug for bonds
but the newer team feature is still susceptible.
Peter
On 09/11/20
name again for a new veth.
greetings
Guido
On 23.09.2015 03:24, Peter Steele wrote:
On 09/22/2015 08:08 AM, Guido Jäkel wrote:
* Do you use lxc.network.veth.pair to name the hosts side of the veth?
Yes. I rename the veth interfaces to match the names of the containers.
* Was the Container
On 09/22/2015 08:08 AM, Guido Jäkel wrote:
* Do you use lxc.network.veth.pair to name the hosts side of the veth?
Yes. I rename the veth interfaces to match the names of the containers.
* Was the Container up and running "just before" and you (re)start it within
less than 5min?
Yes. When th
On 09/21/2015 03:27 PM, Fajar A. Nugraha wrote:
I remembered something similar a while ago, in ubuntu precise host and
containers, with both lxc 1.0.x and lxc-1.x from ppa. At that time a
container's interface would mysteriously disappear, including its
host side veth pair. Only on one contain
On 09/21/2015 01:20 PM, Peter Steele wrote:
On 09/21/2015 08:32 AM, Serge Hallyn wrote:
In these cases does /sys/class/net/eth0 exist?
I'll try to reproduce the condition and check this...
I just checked this. This directory does not exist. There is only an
entry f
On 09/21/2015 08:32 AM, Serge Hallyn wrote:
In these cases does /sys/class/net/eth0 exist?
I'll try to reproduce the condition and check this...
We sometimes hit an error where eth0 in a container does not come up,
leaving the container with only the "lo" device. The system messages in
the container list the error
Sep 17 15:58:50 vm-00 network: Bringing up interface eth0: ERROR :
[/etc/sysconfig/network-scripts/ifup-eth] Device eth0
On 09/14/2015 11:02 PM, Fajar A. Nugraha wrote:
Assuming your problem is caused by bridging the veth interface,
there's an alternate networking setup with proxyarp + route that might
work. It doesn't use bridge, and only works for privileged containers.
I'll investigate how this could be set up
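For the curious, that proxyarp + route setup usually boils down to something
like the following on the host, with no bridge at all (interface names and
the container address below are only examples): the host answers ARP for the
container's IP on the outside interface and routes it to the host side of
the veth pair.

# echo 1 > /proc/sys/net/ipv4/ip_forward
# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
# echo 1 > /proc/sys/net/ipv4/conf/vethVM01/proxy_arp
# ip route add 172.16.0.10/32 dev vethVM01

Inside the container, eth0 carries 172.16.0.10 with the host's LAN address
as its gateway.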
On 09/15/2015 01:29 AM, Andrey Repin wrote:
2b. If you absolutely want to communicate with containers from host via
network, you will need a similarly set up interface on the host.
This is a little complicated without a helper script, but still doable:
We do need to be able to communicate with
On 09/13/2015 06:19 AM, Fajar A. Nugraha wrote:
Had you used Ubuntu you could probably say something like "kernel 4.2
should be released as linux-generic-lts-wily for ubuntu 14.04 in about
two months, and we can switch to distro-supported package then"
Had you used Oracle Linux with support subscr
On 09/11/2015 12:08 PM, Andrey Repin wrote:
So, have you tried getting rid of the bridge in first place?
The problem isn't the bridge per se, it's the bond mode. If I use
active-backup the veth->bridge->bond path from container to container
works as expected. Bond modes using load balancing o
On 09/10/2015 11:14 PM, Guido Jäkel wrote:
* Is LXC even needed to reproduce the issue, or just a bridge on a bond
and some other devices?
I have not been able to reproduce the problem except between containers
running on different hosts. Behavior is the same for lxc and libvirt-lxc.
* Di
On 09/10/2015 07:57 PM, Fajar A. Nugraha wrote:
If I read the bug report correctly, it's not moved to lxc. Rather, KVM
is not required to reproduce it, using lxc is enough to trigger the
bug. Using KVM will of course still trigger the bug as well.
Sorry, I didn't mean the bug was moved to lxc,
addressed in recent kernels.
Peter
I've configured a standard CentOS bridge/bond, the exact same setup that
I use for creating VMs. VMs on different hosts communicate through the
bridge without issues. Containers that use the identical bridge however
cannot reliably connect to containers on different hosts. We've
determined that
he business case.
greetings
Guido
On 07.09.2015 20:49, Peter Steele wrote:
We're having issues with networking connections in our containers when the host
is configured with bonded interfaces. When we configure these same servers to
run with VMs, everything works fine, but when we
We're having issues with networking connections in our containers when
the host is configured with bonded interfaces. When we configure these
same servers to run with VMs, everything works fine, but when we swap
out the VMs for equivalently configured containers, we get all kinds of
network con
On 09/07/2015 07:56 AM, Serge Hallyn wrote:
You shouldn't need to do anything other than make sure that
sys_nice isn't in any lxc.cap.drop line.
You can use 'capsh --print' to verify that you have the cap.
Is this the config you passed to lxc-create, or the full final
configuration?
This is the
On 09/05/2015 10:35 AM, Peter Steele wrote:
I have a privileged container that runs ctdb and needs to have real
time scheduling enabled. The error reported by ctdb is:
Sep 05 10:27:05 pws-01-vm-05 systemd[1]: Starting CTDB...
Sep 05 10:27:06 pws-01-vm-05 ctdbd[1598]: CTDB starting on node
Sep
On 09/06/2015 09:52 AM, Guido Jäkel wrote:
Dear Peter,
don't use a MAC prefix that is lower than that of the upstream device of the
bridge the containers are attached to: the Linux software bridge will use the
lowest MAC of its attached devices as the MAC for outgoing packets.
Therefore, you
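One way to check whether this is biting you is to compare the MACs involved;
if a container's host-side veth sorts numerically below the bond's MAC, the
bridge will adopt the veth's address (interface names here are only examples):

# ip link show br0 | grep ether
# ip link show bond0 | grep ether
# ip link show vethVM01 | grep ether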
On 09/05/2015 05:45 PM, Stéphane Graber wrote:
lxc.cgroup.cpuset.cpus = 0-3
So, if I had enough CPUs, I could set one container to have, e.g.,
lxc.cgroup.cpuset.cpus = 0-3
and others to have
lxc.cgroup.cpuset.cpus = 4-5
lxc.cgroup.cpuset.cpus = 6-7
and so on? And if I didn't then, every is j
Our application needs to limit the number of cores a container can use.
With libvirt-lxc I use the command "virsh setvcpus" to set the number of
cores a container can use. With this command you only have to specify
the number of cores assigned to the container, not a specific core
number. I can
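With LXC the closest equivalent takes explicit core numbers rather than a
count, either statically in the container config or at runtime with
lxc-cgroup (container name and core ranges are only examples):

lxc.cgroup.cpuset.cpus = 0-3

# lxc-cgroup -n vm-01 cpuset.cpus 4-5
# lxc-cgroup -n vm-01 cpuset.cpus

The second lxc-cgroup call, with no value, simply reads back the current
setting.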
I have a privileged container that runs ctdb and needs to have real time
scheduling enabled. The error reported by ctdb is:
Sep 05 10:27:05 pws-01-vm-05 systemd[1]: Starting CTDB...
Sep 05 10:27:06 pws-01-vm-05 ctdbd[1598]: CTDB starting on node
Sep 05 10:27:06 pws-01-vm-05 ctdbd[1599]: Starting
On 09/01/2015 09:15 AM, Serge Hallyn wrote:
In that case that's exactly what templates were meant to do. So while
I'd still like to see lxc-device updated to support persistence, you
could do what you want by
1. creating a lxc.hook.autodev hook which creates the device you want
using mknod,
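A rough sketch of that approach, with the path, script name and device
numbers as placeholders only: the hook runs after LXC has populated the
container's /dev, which is visible to it as ${LXC_ROOTFS_MOUNT}/dev.

lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/test/autodev.sh
lxc.cgroup.devices.allow = b 8:3 rwm

and /var/lib/lxc/test/autodev.sh (mark it executable):

#!/bin/sh
# create the block device node inside the container's /dev
mknod -m 660 ${LXC_ROOTFS_MOUNT}/dev/sda3 b 8 3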
On 09/01/2015 08:36 AM, Andrey Repin wrote:
If your conf file is missing lxc.network.hwaddr, it was either
removed, or a container was not created using a standard template. You
can add it manually with any suitable value.
You are correct, I am not using a standard template. The command I am
u
On 09/01/2015 07:25 AM, Serge Hallyn wrote:
FWIW there is a lxc-device command that will do the mknod for you,
but it won't be persistent (iirc). Support for making that
persistent would be welcome. I think that would come in three small
pieces:
. have src/lxc/lxc_device optionally save the co
On 09/01/2015 02:06 AM, Andrey Repin wrote:
Greetings, Peter Steele!
lxc.network.hwaddr = 00:xx:xx:xx:xx:xx
Do NOT do this.
If you want completely random private MACs, start with 02:...
Ref: http://www.iana.org/assignments/ethernet-numbers/ethernet-numbers.xhtml
in my default.conf as
On 08/31/2015 10:03 PM, Serge Hallyn wrote:
Right - if you use lxc-create to create the config file, and your
initial lxc.conf (i.e. /etc/lxc/default.conf or whatever you pass
as CONF to lxc-create -f CONF) contains something like
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
then your container confi
On 08/31/2015 08:41 PM, Fajar A. Nugraha wrote:
Nope. lxc.cgroup allows you to create and access the block device, but
you still need to create the device node yourself.
Fair enough. Then I guess I'll use mknod...
This is likely a newbie question but here goes...
I have some privileged containers that need access to certain block
devices available on their host. For example, I'd like /dev/sda3 to be
accessible from my container test. The major/minor values for this
device are:
# ll /dev/sda3
brw-rw
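The quick, non-persistent version of this is a cgroup whitelist plus a mknod
inside the container; b 8:3 below is the usual major:minor for sda3 but
should match whatever the ll on the host actually reports:

lxc.cgroup.devices.allow = b 8:3 rwm

then, in the running container:

# mknod -m 660 /dev/sda3 b 8 3

This assumes the mknod capability has not been dropped via lxc.cap.drop.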
On 08/31/2015 06:32 AM, Serge Hallyn wrote:
Should show up in a line like
lxc.network.hwaddr = 00:16:3e:83:d1:8f
in the created container config.
On 08/30/2015 03:10 PM, Michael H. Warfield wrote:
I played with this a while back and found that you are severely
limited in the name length. Using the container name for that is,
sooner or later, going to overflow that limit and possibly generate
an error on container startup. I think it's
On 08/30/2015 04:37 PM, Andrey Repin wrote:
Please start from the beginning.
What network topology do you want for your containers, and what specific
features do you need from the veth interface that other networking modes
do not offer?
I'm using host bridging with lxc.network.type = veth. Everything is
I want to pick my own naming convention for the veth interfaces created
by LXC rather than using the auto-generated names. I tried adding the entry
lxc.network.veth.pair = veth0
in one of my privileged containers but it still gets a random name of
'veth6HIE0A' instead of eth0. Am I mistaken as
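For reference, a config sketch that names the host-side veth (bridge and
names below are only examples): the lxc.network.veth.pair line has to sit in
the same lxc.network block, after lxc.network.type, and the name must stay
within the kernel's 15-character interface-name limit.

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.veth.pair = veth-vm01

Note this names the host side of the pair; the interface inside the
container is still eth0.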
Is there a way to determine the auto-generated MAC address that
lxc-create assigns to a container, apart from starting it and inspecting
the live container?
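If the container was created with an xx:xx-style hwaddr template, the
generated value is written into the container's own config file, so it can
be read without starting anything (default lxcpath assumed):

# grep hwaddr /var/lib/lxc/test1/config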
On 08/30/2015 11:10 AM, Peter Steele wrote:
Clearly there is user error here on my part and I am not correctly
specifying how to configure LXC containers to use host bridging under
CentOS. I'll have to do some more digging.
I figured it out. I've been using commands similar to
On 08/29/2015 03:26 PM, Fajar A. Nugraha wrote:
It should be "tcpdump -n -i bond0 host 172.16.0.1" and "tcpdump -n -i
veth5BJDXU host 172.16.0.1"
Okay, I ran this test, plus a few others. This specific test generated
no icmp traffic on either bond0 or the veth interface. After starting
th
On 08/29/2015 07:29 AM, Mark Constable wrote:
On 29/08/15 23:54, Peter Steele wrote:
For example, I see references to the file /etc/network/interfaces. Is
this an
LXC thing or is this a standard file in Ubuntu networking?
It's a standard pre-systemd debian/ubuntu network config
On 08/29/2015 01:09 AM, Neil Greenwood wrote:
Hi Peter,
On 28 August 2015 23:11:51 BST, Peter Steele wrote:
Do you have an ifcfg-br0 in your LXC configuration? If the VMs can see each
other, I think most of the settings are correct apart from the bridge not being
connected to the host's
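For completeness, the CentOS side of that setup is a pair of ifcfg files
along these lines (device names and addresses are placeholders); the
physical NIC or bond is enslaved to the bridge and the host's IP moves onto
br0:

/etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.1

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0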
On 08/28/2015 02:08 PM, Serge Hallyn wrote:
Can you show the host and container network details and container
xml for your libvirt-lxc setup? If machines A and B are on the
same LAN, with containers on A, are you saying that B can ping
the containers on A?
Yes, in our libvirt-LXC setup, contai
We're currently using the CentOS libvirt-LXC tool set for creating and
managing containers under CentOS 7.1. This tool set is being deprecated
though so we plan to change our containers to run under the
linuxcontainers.org framework instead. For simplicity I'll refer to this
as simply LXC inste