Re: [lxc-users] No swap space inside containers

2020-08-08 Thread Kees Bos
On Sat, 2020-08-08 at 12:20 -0300, Luis Felipe Marzagao wrote:

> 
> Any pointers?
> 


https://discuss.linuxcontainers.org/t/invalid-swaptotal-in-proc-meminfo-swaptotal-0/8231/17





Re: [lxc-users] not allowed to change kernel parameters inside container

2019-05-27 Thread Kees Bos
I probably missed it, but which release are you using on the host?

And what's the output of
prlimit -p 1
?
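
For reference: on most kernels the net.core.* buffer limits quoted
below are global rather than per-network-namespace, which would explain
why they can't be changed from inside a container. Assuming you control
the host, a workaround is to set them there so every container sees
them:

# on the host; values taken from the quoted message below
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=33554432
sysctl -w net.core.rmem_default=31457280
sysctl -w net.core.wmem_default=31457280
# to persist, put the same keys in e.g. /etc/sysctl.d/99-netbuf.conf
# (filename is just an example)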

On Mon, May 27, 2019, 1:52 PM Saint Michael  wrote:

> My applications are very complex and involve many programs in the
> traditional sense. They are a nightmare to install.
> My application runs on CentOS, but I prefer to use Ubuntu as the LXC host.
> I found that rsyncing a container over the WAN is the only reliable way
> to deploy.
> The issue that kills me is why I can change some kernel parameters, but
> not, for example:
> net.core.rmem_max = 67108864
> net.core.wmem_max = 33554432
> net.core.rmem_default = 31457280
> net.core.wmem_default = 31457280
> Any idea?
>
>
>
>
>
> On Mon, May 27, 2019 at 2:57 AM Jäkel, Guido  wrote:
>
>> Dear Michael,
>>
>> > For me, the single point of using LXC is to be able to redeploy a
>> > complex app from host to host in a few minutes. I use
>> > one-host->one-container. So what is the issue of giving all power to
>> > the containers?
>>
>> I don't yet understand why you want to use containers, LXC or Docker,
>> at all: you need full low-level access to the host and its hardware,
>> and you don't want any isolation or virtualization aspects at all. If
>> you just want to redeploy a complex setup within minutes, you may just
>> need a prepared backup of your hosts, or a layered setup with a
>> read-only image and a writable layer for the changes.
>>
>> Guido
>>


Re: [lxc-users] why does ssh + lxc hang? (used to work)

2019-02-24 Thread Kees Bos
Did you try '-T, --force-noninteractive' ?
(Disable pseudo-terminal allocation)

i.e.
laptop$ ssh root@host "/snap/bin/lxc exec container -T -- date"
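
Forcing pseudo-terminal allocation on the ssh side instead might also
avoid the hang (an untested guess on my side):

laptop$ ssh -t root@host "/snap/bin/lxc exec container -- date"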



On Sun, 2019-02-24 at 21:33 +0900, Tomasz Chmielewski wrote:
> This works (executed on the host):
> 
> host# lxc exec container -- date
> Sun Feb 24 12:25:21 UTC 2019
> host#
> 
> This, however, hangs and doesn't return (executed from a remote
> system, i.e. your laptop or a different server):
> 
> laptop$ ssh root@host "export PATH=$PATH:/snap/bin ; lxc exec container -- date"
> Sun Feb 24 12:28:04 UTC 2019
> (...command does not return...)
> 
> Or with a direct path to the lxc binary - it also hangs:
> 
> laptop$ ssh root@host "/snap/bin/lxc exec container -- date"
> Sun Feb 24 12:29:54 UTC 2019
> (...command does not return...)
> 
> 
> Of course a simple "date" executed via ssh on the host does not hang:
> 
> laptop$ ssh root@host date
> Sun Feb 24 12:31:33 UTC 2019
> laptop$
> 
> 
> Why do commands executed via ssh and lxc hang? This used to work some
> 1-2 months ago; I'm not sure with which lxd version it regressed.
> 
> 
> Tomasz Chmielewski


Re: [lxc-users] zfs @copy snapshots

2019-01-26 Thread Kees Bos
Thanks a lot.

It looks like these @copy snapshots are leftovers from failed container
snapshots. I'm creating and destroying snapshots via the API (pylxd),
and I've seen @copy snapshots at creation time where I expected a
@snapshot zfs snapshot. I'm now destroying these @copy snapshots
automatically (which succeeds most of the time; every now and then
there's still a @snapshot dependency that blocks the destroy).
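
For anyone hitting the same thing, the cleanup boils down to something
like this (pool and container names are just examples):

# list leftover @copy snapshots
zfs list -H -t snapshot -o name | grep '@copy-'
# destroy one (fill in the full name from the listing above);
# this fails if another dataset still depends on it
zfs destroy lxd/containers/c1@copy-...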



On Thu, 2019-01-24 at 14:35 -0500, Stéphane Graber wrote:
> Hi,
> 
> If ZFS lets you, then yes. But normally those will be there because
> you've created a container as a copy of this one; due to how zfs
> datasets work, that snapshot then has to remain until the container
> which was created from it is deleted.
> 
> Stéphane
> 
> On Wed, Jan 23, 2019 at 11:04 PM Kees Bos wrote:
> > 
> > Hi,
> > 
> > 
> > I see multiple @copy snapshots on some containers (zfs).
> > 
> > From https://github.com/lxc/lxd/issues/5104 it is not clear to me why
> > there are multiple on a container.
> > 
> > Can I safely remove these snapshots (if zfs lets me)?
> > 
> > 
> > Kees
> > 




[lxc-users] zfs @copy snapshots

2019-01-23 Thread Kees Bos
Hi,


I see multiple @copy snapshots on some containers (zfs).

From https://github.com/lxc/lxd/issues/5104 it is not clear to me why
there are multiple on a container.

Can I safely remove these snapshots (if zfs lets me)?


Kees



[lxc-users] snap and global system

2018-12-27 Thread Kees Bos
Hi,

I'm using the bionic apt version of lxd. I tried the snap version, but
ran into issues with our use of lxc.hook.pre-start and
lxc.hook.post-stop.

In these scripts we use a bunch of binaries/libraries that are not
available in the snap chroot. Has anyone else bumped into a similar
problem, and if so, what solution did you choose?
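
One direction that might work, assuming the snap layout: keep the hook
scripts, and everything they call, under a path that is visible inside
the snap mount namespace, and make them self-contained. Roughly:

# container config; the hook paths are hypothetical, but
# /var/snap/lxd/common is typically visible from inside the LXD snap
lxc.hook.pre-start = /var/snap/lxd/common/hooks/pre-start.sh
lxc.hook.post-stop = /var/snap/lxd/common/hooks/post-stop.sh

with the hooks using only static binaries (or bundled copies of the
libraries they need) placed next to them.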


Thanks,

Kees



Re: [lxc-users] bionic image not getting IPv4 address

2018-05-03 Thread Kees Bos
On Thu, 2018-05-03 at 08:09 +0200, Kees Bos wrote:
> On Thu, 2018-05-03 at 12:58 +0900, Tomasz Chmielewski wrote:
> > 
> > Reproducing is easy:
> > 
> > # lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
> > 
> > 
> > Then wait a few secs until it starts - "lxc list" will show it has
> > an IPv6 address (if your bridge was configured to provide IPv6), but
> > not IPv4 (and you can confirm by doing "lxc shell", too):
> > 
> > # lxc list
> > 
> > 
> 
> I can confirm this. Seeing the same issue.

BTW, it's /etc/netplan/10-lxc.yaml.

Not working (current) version:
network:
  ethernets:
eth0: {dhcp4: true}
version: 2


Working version (for me):
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
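
To test a change like this without recreating the container (assuming
netplan is present in the image):

# inside the container, after editing /etc/netplan/10-lxc.yaml
netplan apply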

Re: [lxc-users] bionic image not getting IPv4 address

2018-05-03 Thread Kees Bos
On Thu, 2018-05-03 at 12:58 +0900, Tomasz Chmielewski wrote:
> 
> Reproducing is easy:
> 
> # lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
> 
> 
> Then wait a few secs until it starts - "lxc list" will show it has an
> IPv6 address (if your bridge was configured to provide IPv6), but not
> IPv4 (and you can confirm by doing "lxc shell", too):
> 
> # lxc list
> 
> 

I can confirm this. Seeing the same issue.

Re: [lxc-users] LXD connectors for any web VM management platforms?

2018-02-02 Thread Kees Bos
On do, 2018-02-01 at 23:15 +0100, Michel Jansens wrote:
> Hi,
> 
> I’ve been looking around for a web interface for customer portal /
> container management for lxc.
> I looked a bit at ManageIQ and Foreman, but found no provider for lxd.
> Do you know of any projects that have lxd connectors/providers?
> 
> I know that lxd integrates with OpenStack at Canonical and that
> OpenStack has providers for both applications, but I would prefer to
> avoid it (too complex, and heavy hardware requirements).
> 
> Alternatively, would there be a gateway that offers a known API and
> translates/emulates it to lxd? (oVirt, VMware, Amazon, Azure and
> Google are a few well-supported APIs.)
> 

I'm not sure if you're after this kind of integration, but SaltStack
can provision containers:

https://github.com/saltstack-formulas/lxd-formula



Re: [lxc-users] [lxc-devel] Container startup hook arguments

2017-10-05 Thread Kees Bos
On do, 2017-10-05 at 10:27 +0200, Christian Brauner wrote:
> On Wed, Oct 04, 2017 at 09:35:25AM -0500, Serge Hallyn wrote:
> > 
> > Quoting Kees Bos (cornelis@gmail.com):
> > > 
> > > I'm not using it, but do expect the extra args:
> > > 
> > > while [ "${#@}" -gt 3 ] ; do
> > >    ...
> > >    shift
> > > done
> > > 
> > > It might be that some users will need the last extra argument
> > > (stage:
> > > pre-start|start|post-stop). This is currently not available in
> > > the
> > > environment.
> > > 
> > > I can live without these extra arguments, but will have to update
> > > my
> > > scripts.
> > Ok, but this will hurt then.  I certainly was going to keep the
> > extra args, but they would be shifted now.  We can pass along an
> > environment variable saying something like LXC_SIMPLE_ARGS=1 or
> > something, but your unmodified script won't know to look for
> > that so will do the wrong thing.  Any ideas?
> > 
> > This unfortunately basically means that you are in fact a "user",
> > and that makes this seem like at best 3.0  material then, unless
> > we can find a good solution.
> > 
> > Maybe a configuration key 'lxc.hooks.version=2' ?
> I'm fine with simply keeping the arguments until 3.0 and then removing
> them. I really don't want to add configuration keys that conceptually
> are internal keys but are nonetheless exposed to users. Fwiw, this is
> also why I didn't implement a version key for the 2.1 config file
> format update. This is just going to bite us in the long run when we
> have to deprecate these internal keys. TL;DR, keep the args for now
> and kill them in 3.0.
> 
> Christian

I think that's the least problematic approach :-)

On second thoughts, if we do support both ways, I don't think it should
be a setting in the container config but something system-wide (since
it depends on the installed lxd and not on the container).

Re: [lxc-users] Container startup hook arguments

2017-10-04 Thread Kees Bos
On wo, 2017-10-04 at 09:35 -0500, Serge E. Hallyn wrote:
> Quoting Kees Bos (cornelis@gmail.com):
> > 
> > I'm not using it, but do expect the extra args:
> > 
> > while [ "${#@}" -gt 3 ] ; do
> >    ...
> >    shift
> > done
> > 
> > It might be that some users will need the last extra argument
> > (stage:
> > pre-start|start|post-stop). This is currently not available in the
> > environment.
> > 
> > I can live without these extra arguments, but will have to update
> > my
> > scripts.
> Ok, but this will hurt then.  I certainly was going to keep the
> extra args, but they would be shifted now.  We can pass along an
> environment variable saying something like LXC_SIMPLE_ARGS=1 or
> something, but your unmodified script won't know to look for
> that so will do the wrong thing.  Any ideas?
> 
> This unfortunately basically means that you are in fact a "user",
> and that makes this seem like at best 3.0  material then, unless
> we can find a good solution.
> 
> Maybe a configuration key 'lxc.hooks.version=2' ?

Yep. Or 'lxc.hook.version=2'. However, that would mean you've got to
support it for quite some time. I don't know if that's worth the
effort. It would also mean that you've got to update (lots of?)
container configs. I'm not sure which would be the least painful way
to go.

It would be nice if you defined e.g. LXC_HOOK_STAGE=pre-start in the
environment (which is currently not available). Then it would be
possible to use a single script to handle multiple stages (which I
don't do, but which is currently possible based on the last argument).

This new environment variable could also be used to prepare scripts for
the changing behaviour. Without it, my scripts can be prepared for this
with e.g.:
for lastarg; do true; done
if [ "$lastarg" = 'pre-start' ]; then old-style; etc; fi
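
Spelled out as a complete hook, that forward-compatible pattern might
look like the sketch below (LXC_HOOK_STAGE is the proposed variable
from above and does not exist yet; the per-stage commands are
placeholders):

#!/bin/sh
# prefer the proposed stage variable when the installed lxc exports it
stage="${LXC_HOOK_STAGE:-}"
if [ -z "$stage" ]; then
    # fall back to the historic last positional argument
    for lastarg; do :; done
    stage="$lastarg"
fi
case "$stage" in
    pre-start) echo "pre-start work goes here" ;;
    start)     echo "start work goes here" ;;
    post-stop) echo "post-stop cleanup goes here" ;;
esac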



Re: [lxc-users] Container startup hook arguments

2017-10-04 Thread Kees Bos
I'm not using it, but do expect the extra args:

while [ "${#@}" -gt 3 ] ; do
   ...
   shift
done

It might be that some users will need the last extra argument (stage:
pre-start|start|post-stop). This is currently not available in the
environment.

I can live without these extra arguments, but will have to update my
scripts.


On di, 2017-10-03 at 14:22 -0500, Serge E. Hallyn wrote:
> Quoting Andrey Repin (anrdae...@yandex.ru):
> > 
> > Greetings, Serge Hallyn!
> > 
> > > 
> > > Since the start, lxc container startup hooks have gotten some
> > > redundant information as command line arguments, which is also
> > > available as environment variables.
> > > 
> > > Is anyone making use of that? I'm wondering whether any existing
> > > installations would have broken scripts if we get rid of those.
> > > 
> > > https://github.com/lxc/lxc/issues/1766 is one sensible request to
> > > stop sending these args, and I suspect that CNI binaries will also
> > > not like them.
> > > 
> > > Removing them is probably 3.0 material, as even if no one replies
> > > saying they use them, our community doesn't exactly work like
> > > that... but it sure would be nice to drop them :)
> > Consider me +1 to that.
> > If your script needs to know its environment, it should make use of
> > it. Other than that, the extra arguments unexpectedly passed to the
> > hook are always a source of potential confusion.
> Thanks.  "I've never used it" is also useful info :)

Re: [lxc-users] Mounting squashfs inside a container

2017-05-31 Thread Kees Bos
On di, 2017-05-30 at 15:17 -0700, Ben Warren wrote:
> Hi,
> 
> I’m using an LXC to build up a rootfs for another target, and am
> unable to mount a squashfs image:
> 
> root@cd-build-dev-385:~# mount -t squashfs -r myproject.squashfs mnt
> ioctl: LOOP_SET_STATUS: Operation not permitted
> root@cd-build-dev-385:~#
> 
> If I instead use ‘unsquashfs’, I get into device creation errors:
> 
> root@cd-build-dev-385:~# unsquashfs -x myproject.squashfs 
> Parallel unsquashfs: Using 4 processors
> 13529 inodes (15282 blocks) to write
> 
> [|   ] 21/15282   0%
> create_inode: failed to create character device squashfs-root/dev/console, because Operation not permitted
> create_inode: failed to create character device squashfs-root/dev/null, because Operation not permitted
> create_inode: failed to create character device squashfs-root/dev/ptmx, because Operation not permitted
> create_inode: failed to create character device squashfs-root/dev/urandom, because Operation not permitted
> create_inode: failed to create character device squashfs-root/dev/zero, because Operation not permitted
> 
> 
> I assume the two issues are related, in that creation of device nodes
> within an unprivileged container is prohibited.  In my case I’m less
> concerned about security, and am using containers more for
> encapsulation.
> 
> Is there a configuration override that will allow dynamic device
> creation within a container, or another way of going about this?  I
> know that I can add device nodes externally using ‘lxc device add …’
> and have used it for creating loopback devices, but that’s static.
> 
> Environment:
> host: Ubuntu 14.04
> LXC:
> ben@ben-sc:~$ dpkg -l | grep lx[cd]
> ii  liblxc1     2.0.7-0ubuntu1~14.04.1skyport1  amd64  Linux Containers userspace tools (library)
> ii  lxc-common  2.0.7-0ubuntu1~14.04.1skyport1  amd64  Linux Containers userspace tools (common tools)
> ii  lxcfs       2.0.6-0ubuntu1~14.04.1          amd64  FUSE based filesystem for LXC
> ii  lxd         2.0.9-0ubuntu1~14.04.1          amd64  Container hypervisor based on LXC - daemon
> ii  lxd-client  2.0.9-0ubuntu1~14.04.1          amd64  Container hypervisor based on LXC - client
> 
> Note that I’ve built the LXC libraries from source, but based on the
> current ‘ubuntu-trusty-backports’ .deb packages.
> 
> regards,
> Ben
> 
> 

I think you'll have to either use a privileged container, or use
squashfuse and set up privileges for fuse (if that's still needed).
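
Roughly, one of these (container and image names taken from your
message; adjust to your setup):

# option 1: make the container privileged (assuming LXD manages it)
lxc config set cd-build-dev-385 security.privileged true
lxc restart cd-build-dev-385

# option 2: inside the container, avoid loop devices entirely
squashfuse myproject.squashfs mnt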


Re: [lxc-users] near-live migration

2017-05-31 Thread Kees Bos
Ah, I see.

https://github.com/lxc/lxd/issues/3326

It looks like that's a prerequisite for something like
lxc move c1 s-tollana: --stateless
on a running container.


On di, 2017-05-30 at 12:15 -0400, Ron Kelley wrote:
> While not a direct answer, I filled an enhancement bug recently for
> this 
> exact topic (incremental snapshots to remote server). The enhancement
> was 
> approved, but I don't know when it will be included in the next LXD
> version.
> 
> 
> On May 30, 2017 11:52:34 AM Kees Bos <cornelis@gmail.com> wrote:
> 
> > 
> > Hi,
> > 
> > Right now I'm using the sequence 'stop - move - start' for
> > migration of
> > containers (live migration fails too often).
> > 
> > The 'move' step can take some time. I wonder if it would be easy to
> > implement/do something like:
> >   - prepare the move (e.g. take a snapshot and copy up to the snapshot)
> >   - stop
> >   - copy the rest
> >   - remove snapshot on dst
> >   - remove container from src
> >   - start container on dst
> > 
> > That would minimize downtime without the complexity of a live
> > migration.
> > 
> > What are your thoughts?
> > 
> > Kees

[lxc-users] near-live migration

2017-05-30 Thread Kees Bos
Hi,

Right now I'm using the sequence 'stop - move - start' for migration of
containers (live migration fails too often).

The 'move' step can take some time. I wonder if it would be easy to
implement/do something like:
  - prepare the move (e.g. take a snapshot and copy up to the snapshot)
  - stop
  - copy the rest
  - remove snapshot on dst
  - remove container from src
  - start container on dst

That would minimize downtime without the complexity of a live
migration.
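
A rough manual approximation with plain lxc commands might look like
this (the remote name 'dst' is made up, and the --refresh flag depends
on having a recent enough LXD):

lxc snapshot c1 premove
lxc copy c1/premove dst:c1      # bulk of the data copies while c1 runs
lxc stop c1
lxc copy c1 dst:c1 --refresh    # sync only the changes since the snapshot
lxc delete c1
lxc start dst:c1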

What are your thoughts?

Kees

[lxc-users] Pre and post flight checks for container start/stop

2016-12-22 Thread Kees Bos
Hi,

Is it possible to do some pre-flight checks before starting a
container? E.g. to verify network connectivity before starting the
container, or to check in a central 'database' that the container isn't
already running on a different host, and to register it there? Note
that the pre-flight check should be able to cancel the startup.

And similarly, on stopping a container, to execute some commands, e.g.
to deactivate the container's registration in the central database.
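
From the lxc side this sounds like hook territory: a lxc.hook.pre-start
script that exits non-zero should abort the start. A minimal sketch
(paths, address and commands are made up):

# container config
lxc.hook.pre-start = /usr/local/bin/preflight.sh
lxc.hook.post-stop = /usr/local/bin/deregister.sh

# /usr/local/bin/preflight.sh
#!/bin/sh
# abort startup when the gateway is unreachable or when the central
# database says the container is running elsewhere
ping -c 1 -W 2 10.0.0.1 || exit 1
exit 0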


Cheers,

Kees