Re: [lxc-users] Experience with large number of LXC/LXD containers

2017-04-05 Thread Tomasz Chmielewski

On 2017-03-13 06:28, Benoit GEORGELIN - Association Web4all wrote:

Hi lxc-users ,

I would like to know if you have any experience with a large number of
LXC/LXD containers, in terms of performance, stability and limitations.

I'm wondering, for example, whether having 100 containers behaves the same
as having 1,000 or 10,000 with the same configuration (so as to leave
container usage out of the discussion).


I'm running LXD on several servers and I'm generally satisfied with it - 
performance and stability are fine. These servers mostly run fewer than 50 
containers each, though.


I also have an LXD server which runs 100+ containers, starts/stops/deletes 
dozens of containers daily and is used for automation. Approximately once 
every 1-2 months, an "lxc stop" / "lxc restart" command will fail, which is 
a bit of a stability concern for us.


The cause is unclear. In the LXD log for the container, the only thing 
logged is:



lxc 20170301115514.738 WARN lxc_commands - 
commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive 
response: Connection reset by peer.



When it starts to happen, it affects all containers - "lxc stop" / "lxc 
restart" will hang for any of the running containers. Interestingly, the 
container does get stopped by "lxc stop"; the command just never returns. 
In the "lxc restart" case, it will likewise stop the container (the command 
will not return and will not start the container again).


The only thing that fixes this is a server restart.

There is also no clear way to reproduce it reliably (other than running the 
server for a long time and starting/stopping a large number of containers 
over that period...).
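
(For anyone trying to reproduce this, a minimal churn loop of the kind
described above - the image alias, names and counts here are arbitrary
placeholders, not the exact workload we run:)

for i in $(seq 1 50); do
    lxc launch ubuntu:16.04 "churn-$i"
    lxc stop "churn-$i"
    lxc delete "churn-$i"
done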


I think it's some kernel issue, but unfortunately I was not able to 
debug this any further.




Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Creating a custom LXC container

2017-04-05 Thread Spike
Are you using lxc or lxd? In case it helps, I made a whole bunch of custom
containers by following this simple process (which came from
https://stgraber.org/2016/03/30/lxd-2-0-image-management-512/):

- download image (xenial in my case from ubuntu: )
- lxc exec c1 /bin/bash
- make all the changes I want
- lxc stop c1
- lxc publish c1 (gets published to local: repository)
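
(A minimal sketch of those steps as commands - the container names, the
image alias and the ubuntu:xenial source below are just the ones I happened
to pick:)

lxc launch ubuntu:xenial c1
lxc exec c1 -- /bin/bash                   # make all the changes you want, then exit
lxc stop c1
lxc publish c1 --alias my-custom-image     # ends up in the local: image store
lxc launch my-custom-image c2              # new containers from the custom image

(If you're on plain lxc rather than lxd, the template linked as [1] below
also accepts options at creation time, e.g.
lxc-create -n c1 -t ubuntu -- -r xenial
so you may not need to modify the template itself.)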

best,

Spike

On Wed, Apr 5, 2017 at 3:05 PM Nicholas Chambers <
nchamb...@lightspeedsystems.com> wrote:

> Hello! I'm working on a code evaluation bot, and want to make a custom
> container for it to work in or out of. Would I just need to modify [1],
> and it will generate the container for me?
>
>
>  [1] https://github.com/lxc/lxc/blob/master/templates/lxc-ubuntu.in
>
> --
> Nicholas Chambers
> Technical Support Specialist
> nchamb...@lightspeedsystems.com
> 1.800.444.9267
> www.lightspeedsystems.com
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Creating a custom LXC container

2017-04-05 Thread Nicholas Chambers
Hello! I'm working on a code evaluation bot, and want to make a custom 
container for it to work in or out of. Would I just need to modify [1], 
and it will generate the container for me?



[1] https://github.com/lxc/lxc/blob/master/templates/lxc-ubuntu.in

--
Nicholas Chambers
Technical Support Specialist
nchamb...@lightspeedsystems.com
1.800.444.9267
www.lightspeedsystems.com

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Newer upstream releases - Stable for production?

2017-04-05 Thread Stéphane Graber
Yes, it would be.

I also disagree that it's what most people would want.

The majority of the feedback we've been getting from production users so
far is that they're very happy having an extremely stable version of LXD
that they don't need to think about and that gets frequent bugfixes and
security fixes.

For everyone else, you just need to run:

apt install -t xenial-backports lxd lxd-client
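
(If you want to check which pocket your installed lxd came from, or what
version xenial-backports would give you, apt can tell you - the exact
output will of course vary per system:)

apt-cache policy lxd lxd-client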

On Wed, Apr 05, 2017 at 11:45:32PM +0200, Jakob Gillich wrote:
> Would it be against distribution policy to upgrade the lxd package in
> xenial? I feel like most users do not want 2.0, but that's what they get by
> default.
> 
> On Wed, Apr 5, 2017 at 1:49 AM, Stéphane Graber  wrote:
> > Hi,
> > 
> > So it really depends on how tolerant you may be of accidental downtime
> > and of needing to occasionally adapt scripts as new features are added.
> > 
> > LXD 2.0.x only gets bugfixes and security updates and so an upgrade will
> > never break anything that uses the LXD commands or the API.
> > 
> > 
> > For the newer feature releases, we don't break the REST API, only add
> > bits to it, but occasionally those bits mean that some extra
> > configuration steps may be needed, as was the case with the network API
> > in 2.3 or the storage API in 2.9.
> > 
> > Upgrading to such releases will automatically attempt to migrate your
> > setup so that it keeps working and doesn't suffer any downtime. But it's
> > certainly not completely bug-free and we do occasionally hit issues
> > there.
> > 
> > 
> > If you do want the new features, I'd recommend that you at least stay on
> > Ubuntu 16.04 LTS, then do this:
> > 
> > apt install -t xenial-backports lxd lxd-client
> > 
> > This will install lxd and lxd-client from "xenial-backports", which is a
> > special pocket of the main Ubuntu archive. This is far preferable to
> > using the LXD PPA.
> > 
> > The LXD stable PPA is automatically generated whenever a new upstream
> > release has hit the current Ubuntu development release and has passed
> > automatic testing, which is to say that when an update hits, it would
> > have seen very little field testing.
> > 
> > xenial-backports is different in that the packages in there are the same
> > as the PPA, but I only push them through once I feel confident there
> > aren't any upgrade issues that we should address.
> > 
> > 
> > One recent example of that was the storage API. PPA users would have
> > gotten LXD 2.9, 2.9.1, 2.9.2, 2.10, 2.10.1 and 2.11 in quick succession
> > as we were sorting out some upgrade issues with the storage API.
> > 
> > Users of xenial-backports were on LXD 2.8 up until yesterday, when I
> > pushed LXD 2.12 to it, as we are now feeling confident that all upgrade
> > issues that were reported have been satisfactorily resolved.
> > 
> > 
> > One last note: LXD doesn't support downgrading its database, which means
> > that if you upgrade from 2.0.x to some 2.x release, there is no going
> > back. You can't downgrade back to 2.0.x afterwards. You can move LXD
> > containers from a new release to a server running an older release as a
> > way to do a two-stage downgrade, but you may need to alter their
> > configurations a bit for this to succeed (remove any option key that
> > came from a newer release).
> > 
> > Stéphane
> > 
> > On Tue, Apr 04, 2017 at 02:55:32PM +0200, Gabriel Marais wrote:
> > >  Hi Guys
> > > 
> > >  I would like to take advantage of some of the new(er) features
> > >  available in releases higher than 2.0.x
> > > 
> > >  Would it be advisable to upgrade to 2.12 to be used in a production
> > >  environment?
> > > 
> > > 
> > > 
> > >  --
> > > 
> > > 
> > > 
> > > 
> > >  Regards
> > > 
> > >  Gabriel Marais
> > > 
> > >  Office: +27 861 466 546 x 7001
> > >  Mobile: +27 83 663 
> > >  Mail: gabriel.j.mar...@gmail.com
> > > 
> > >  Unit 11, Ground Floor, Berkley Office Park
> > >  Cnr Bauhinia & Witch Hazel Str,
> > >  Highveld, Centurion, South-Africa
> > >  0157
> > > 
> > >  PO Box 15846, Lyttelton, South Africa, 0140
> > >  ___
> > >  lxc-users mailing list
> > >  lxc-users@lists.linuxcontainers.org
> > >  http://lists.linuxcontainers.org/listinfo/lxc-users
> > 
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Newer upstream releases - Stable for production?

2017-04-05 Thread Jakob Gillich
Would it be against distribution policy to upgrade the lxd package in 
xenial? I feel like most users do not want 2.0, but that's what they 
get by default.


On Wed, Apr 5, 2017 at 1:49 AM, Stéphane Graber  
wrote:

Hi,

So it really depends on how tolerant you may be of accidental downtime
and of needing to occasionally adapt scripts as new features are added.

LXD 2.0.x only gets bugfixes and security updates and so an upgrade will
never break anything that uses the LXD commands or the API.


For the newer feature releases, we don't break the REST API, only add
bits to it, but occasionally those bits mean that some extra
configuration steps may be needed, as was the case with the network API
in 2.3 or the storage API in 2.9.

Upgrading to such releases will automatically attempt to migrate your
setup so that it keeps working and doesn't suffer any downtime. But it's
certainly not completely bug-free and we do occasionally hit issues
there.


If you do want the new features, I'd recommend that you at least stay on
Ubuntu 16.04 LTS, then do this:

apt install -t xenial-backports lxd lxd-client

This will install lxd and lxd-client from "xenial-backports", which is a
special pocket of the main Ubuntu archive. This is far preferable to
using the LXD PPA.

The LXD stable PPA is automatically generated whenever a new upstream
release has hit the current Ubuntu development release and has passed
automatic testing, which is to say that when an update hits, it would
have seen very little field testing.

xenial-backports is different in that the packages in there are the same
as the PPA, but I only push them through once I feel confident there
aren't any upgrade issues that we should address.


One recent example of that was the storage API. PPA users would have
gotten LXD 2.9, 2.9.1, 2.9.2, 2.10, 2.10.1 and 2.11 in quick succession
as we were sorting out some upgrade issues with the storage API.

Users of xenial-backports were on LXD 2.8 up until yesterday, when I
pushed LXD 2.12 to it, as we are now feeling confident that all upgrade
issues that were reported have been satisfactorily resolved.


One last note: LXD doesn't support downgrading its database, which means
that if you upgrade from 2.0.x to some 2.x release, there is no going
back. You can't downgrade back to 2.0.x afterwards. You can move LXD
containers from a new release to a server running an older release as a
way to do a two-stage downgrade, but you may need to alter their
configurations a bit for this to succeed (remove any option key that
came from a newer release).

Stéphane

On Tue, Apr 04, 2017 at 02:55:32PM +0200, Gabriel Marais wrote:

 Hi Guys

 I would like to take advantage of some of the new(er) features available
 in releases higher than 2.0.x

 Would it be advisable to upgrade to 2.12 to be used in a production
 environment?



 --




 Regards

 Gabriel Marais

 Office: +27 861 466 546 x 7001
 Mobile: +27 83 663 
 Mail: gabriel.j.mar...@gmail.com

 Unit 11, Ground Floor, Berkley Office Park
 Cnr Bauhinia & Witch Hazel Str,
 Highveld, Centurion, South-Africa
 0157

 PO Box 15846, Lyttelton, South Africa, 0140
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users


--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] right list for LXD?

2017-04-05 Thread Serge E. Hallyn
Quoting gunnar.wagner (gunnar.wag...@netcologne.de):
> hi everybody,
> 
> I want to start using LXD. Is this list the right one for seeking
> advice or is there any specific LXD mailing list?

This is the place.  Welcome.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] right list for LXD?

2017-04-05 Thread gunnar.wagner

hi everybody,

I want to start using LXD. Is this list the right one for seeking advice 
or is there any specific LXD mailing list?


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Enabling real time support in containers

2017-04-05 Thread Serge E. Hallyn
Quoting Peter Steele (pwste...@gmail.com):
> On 03/31/2017 10:16 AM, Peter Steele wrote:
> >As you can see, the sched_setscheduler() call fails with an EPERM
> >error. This same app runs fine on the host.
> >
> >Ultimately I expect this app to fail when run under my container
> >since I have not given the container any real time bandwidth. I
> >had hoped the option
> >
> >lxc.cgroup.cpu.rt_runtime_us = 475000
> >
> >would do the trick but this option is rejected with anything other
> >than "0". So presumably this isn't the correct way to give a
> >container real time bandwidth.
> >
> >I have more experience with the libvirt-lxc framework and I have
> >been able to enable real time support for containers under
> >libvirt. The approach used in this case involves explicitly
> >setting cgroup parameters, specifically
> >
> >/sys/fs/cgroup/cpu/machine.slice/cpu.rt_runtime_us
> >
> >under the host and
> >
> >/sys/fs/cgroup/cpu/cpu.rt_runtime_us
> >
> >under the container. For example, I might do something like this:
> >
> >echo 50 >/sys/fs/cgroup/cpu/machine.slice/cpu.rt_runtime_us  --> on the host
> >echo 25000 >/sys/fs/cgroup/cpu/cpu.rt_runtime_us  --> on a container
> >
> >These do not work for LXC based containers though.
> >
> 
> The test code I'm running can be simplified to just this simple sequence:
> 
> #include <sched.h>
> #include <stdio.h>
> 
> int main() {
> struct sched_param param;
> param.sched_priority = 50;
> const int myself  =  0; // 0 is the PID of ourself
> if (0 != sched_setscheduler(myself, SCHED_FIFO, &param)) {
> printf("Failure\n");
> return -1;
> }
> 
> printf("Success\n");
> return 0;
> }
> 
> On a container with RT support enabled, this should print "Success".
> 
> Am I correct in assuming LXC *does* provide a means to enable RT
> support? If not, we will need to find another approach to this problem.

The kernel has hardcoded checks (which are not namespaced) that
if you are not (global) root, you cannot set or change the rt
policy.  I suspect there is a way that could be safely relaxed
(i.e. if a container has exclusive use of a cpu), but we'd have
to talk to the scheduling experts about what would make sense.
(see 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/sched/core.c?id=refs/tags/v4.11-rc5#n4164
)

Otherwise, as a workaround (assuming this is the only problem you
hit) you could simply make sure that the RT policy is correct ahead
of time and the priority is high enough that the application is only
lowering it; then the kernel wouldn't stop it.  Certainly that's
more fragile.  Or you could get fancier and use LD_PRELOAD to catch
sched_setscheduler and redirect it over a socket to a tiny
daemon on the host which sets things up for you...  But certainly
it would be best for everyone if this was supported in the kernel the
right way.
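
(As a rough illustration of that first workaround - the priority value and
program name are just examples, and this assumes whoever launches or adjusts
the process is allowed to set the policy, e.g. the host or a privileged
context:)

chrt --fifo 80 ./rt-app               # start already under SCHED_FIFO, prio 80
chrt --fifo -p 80 $(pidof rt-app)     # or adjust an already-running process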

-serge
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Enabling real time support in containers

2017-04-05 Thread Peter Steele

On 03/31/2017 10:16 AM, Peter Steele wrote:
As you can see, the sched_setscheduler() call fails with an EPERM 
error. This same app runs fine on the host.


Ultimately I expect this app to fail when run under my container since 
I have not given the container any real time bandwidth. I had hoped 
the option


lxc.cgroup.cpu.rt_runtime_us = 475000

would do the trick but this option is rejected with anything other 
than "0". So presumably this isn't the correct way to give a container 
real time bandwidth.


I have more experience with the libvirt-lxc framework and I have been 
able to enable real time support for containers under libvirt. The 
approach used in this case involves explicitly setting cgroup 
parameters, specifically


/sys/fs/cgroup/cpu/machine.slice/cpu.rt_runtime_us

under the host and

/sys/fs/cgroup/cpu/cpu.rt_runtime_us

under the container. For example, I might do something like this:

echo 50 >/sys/fs/cgroup/cpu/machine.slice/cpu.rt_runtime_us  --> on the host
echo 25000 >/sys/fs/cgroup/cpu/cpu.rt_runtime_us  --> on a container

These do not work for LXC based containers though.



The test code I'm running can be simplified to just this simple sequence:

#include <sched.h>
#include <stdio.h>

int main() {
    struct sched_param param;
    param.sched_priority = 50;      /* request RT priority 50 */
    const int myself = 0;           /* 0 means the calling process */
    if (0 != sched_setscheduler(myself, SCHED_FIFO, &param)) {
        printf("Failure\n");
        return -1;
    }

    printf("Success\n");
    return 0;
}

On a container with RT support enabled, this should print "Success".
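
(To try it, something along these lines - the file name is only an example:)

gcc -o rt-test rt-test.c
./rt-test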

Am I correct in assuming LXC *does* provide a means to enable RT 
support? If not, we will need to find another approach to this problem.


Peter

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] preferred way to redirect ports to containers with private IPs?

2017-04-05 Thread MonkZ
This depends very much on what you plan to achieve and what your options
are.

I use a mixture of iptables and haproxy/nginx.
Fortunately, LXD remembers MAC and IP addresses, so maintaining manual
entries in iptables is not a problem, and iptables-persistent takes care
of reloading the rules at boot.
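
(For the DNS example from the original mail - host 12.13.14.15:53/udp to
container 10.1.2.3:53/udp - the nat rule would look roughly like this;
the addresses are the ones from that example, and the FORWARD rule is only
needed if your forward policy isn't ACCEPT already:)

iptables -t nat -A PREROUTING -d 12.13.14.15 -p udp --dport 53 \
  -j DNAT --to-destination 10.1.2.3:53
iptables -A FORWARD -d 10.1.2.3 -p udp --dport 53 -j ACCEPT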

For HTTP/HTTPS/IMAPS I use haproxy/nginx as a reverse proxy to serve
multiple containers on one public IPv4 address (SNI to the rescue).

For IPv6 I just have a profile that adds an extra network interface,
attached to a network that has a routed IPv6 prefix.

Regards
MonkZ

On 05.04.2017 11:41, Tomasz Chmielewski wrote:
> Is there any "preferred" way of redirecting ports to containers with
> private IPs, from host's public IP(s)?
> 
> 
> host 12.13.14.15:53/udp (public IP) -> container 10.1.2.3:53/udp
> (private IP)
> 
> 
> I can imagine at least a few approaches:
> 
> 1) in kernel:
> 
> - use iptables to map a port from host's public IP to container's
> private IP
> 
> - use LVS/ipvs/ldirectord to map a port from host's public IP to
> container's private IP
> 
> 
> 2) userspace:
> 
> - use a userspace proxy, like haproxy (won't work for all protocols,
> some information is lost for the container, i.e. origin IP)
> 
> 
> They all, however, need some manual (or scripted) configuration and will
> stay in place even if the container is stopped/removed (unless some more
> configuration/scripting is done, etc.).
> 
> 
> Does LXD have any built-in mechanism to "redirect ports"? Or, what would
> be the preferred way to do it?
> 
> 
> Tomasz Chmielewski
> https://lxadm.com
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] preferred way to redirect ports to containers with private IPs?

2017-04-05 Thread Tomasz Chmielewski
Is there any "preferred" way of redirecting ports to containers with 
private IPs, from host's public IP(s)?



host 12.13.14.15:53/udp (public IP) -> container 10.1.2.3:53/udp 
(private IP)



I can imagine at least a few approaches:

1) in kernel:

- use iptables to map a port from host's public IP to container's 
private IP


- use LVS/ipvs/ldirectord to map a port from host's public IP to 
container's private IP



2) userspace:

- use a userspace proxy, like haproxy (won't work for all protocols, and 
some information, e.g. the origin IP, is lost for the container)



They all, however, need some manual (or scripted) configuration and will 
stay in place even if the container is stopped/removed (unless some more 
configuration/scripting is done, etc.).



Does LXD have any built-in mechanism to "redirect ports"? Or, what would 
be the preferred way to do it?



Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] can't get kvm to work inside lxc

2017-04-05 Thread Marat Khalili

Just making it run is as simple as I wrote before, just 3 commands:


# apt install wget qemu-kvm

# wget 
https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img


# kvm -curses ubuntu-16.04-server-cloudimg-amd64-disk1.img

You should see it booting. I used a script to create a fresh privileged 
LXC container, but there's nothing kvm-specific in that script, just 
some network configuration and user preferences. You said you use LXD, 
but I don't think there's a big difference.

You'll have to solve more problems to actually make it useful:
* give it more space with qemu-img resize;
* access the virtual system: define a password or ssh key with cloud-localds;
* share the network with the VM: -netdev bridge,id=br0 -device
  virtio-net,netdev=br0,mac="$MAC_ADDRESS";
* configure a static IP address: using cloud-localds, rewrite the file in
  /etc/network/interfaces.d and reboot the system;
* share local storage with the VM: -virtfs and mount with 9p;
* control the VM with scripts: -monitor unix:... and socat;
* monitor boot with scripts: -serial unix:... and socat;
* start and stop the VM with the container: systemd;
* ...
(br0 above is a bridge inside the container; you'll need to create it
and to forward /dev/net/tun for this to work. A combined invocation is
sketched below.)
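
(Putting the pieces above together, one possible invocation - the memory
size, MAC address and file names here are placeholders I made up, not
something required. First a seed image so the console login works, then the
VM itself:)

# printf '#cloud-config\npassword: change-me\nchpasswd: { expire: False }\nssh_pwauth: True\n' > user-data

# cloud-localds seed.img user-data

# kvm -m 2048 -curses \
    -drive file=ubuntu-16.04-server-cloudimg-amd64-disk1.img,format=qcow2,if=virtio \
    -drive file=seed.img,format=raw,if=virtio \
    -netdev bridge,id=br0 -device virtio-net,netdev=br0,mac="$MAC_ADDRESS"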


There's a lot of info about all this scattered throughout the internet, 
but no single page; I'll think about writing details up somewhere.


--

With Best Regards,
Marat Khalili

On 04/04/17 20:22, Spike wrote:

Marat,

any chance you could share a little more about the steps you took to 
start kvm manually? that'd be most useful to get things started. If 
you wrote up that experience somewhere a link would be most welcomed too.


thank you,

Spike

On Mon, Apr 3, 2017 at 11:36 PM Marat Khalili wrote:


Hello,

I was able to run kvm in a privileged lxc container without any
modifications of lxc stock config (well, with some related to network
bridge). I gave up on libvirt and start containers with
qemu-system-x86_64 and systemd. You may want to try downloading ubuntu
cloud image from
https://cloud-images.ubuntu.com/releases/16.04/release/
and starting it with kvm -curses to see if it works.

--

With Best Regards,
Marat Khalili

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org

http://lists.linuxcontainers.org/listinfo/lxc-users



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users