Re: [lxc-users] Upcoming migration of linuxcontainers.org mailing-lists

2021-04-08 Thread Stéphane Graber
I did look into this a little while back, from what I remember the
setup they have for lore is:

 - majordomo as the mailing list backend (alternative to mailman)
 - public-inbox as a subscriber to the lists to give you the git layer
and mbox files

So it's actually more complex than what we currently run. Though if
there's interest (someone willing to test and document the setup), we
should still be able to set up the public-inbox part while using Google
Groups as the mailing list handler.
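
If someone does want to poke at it, the public-inbox side would look
roughly like this (an untested sketch; the paths and archive URL are just
placeholders):

  public-inbox-init -V2 lxc-users /srv/public-inbox/lxc-users \
      https://example.org/lxc-users lxc-users@lists.linuxcontainers.org
  # then feed it mail, one message at a time via public-inbox-mda, or by
  # pointing public-inbox-watch at a maildir subscribed to the list
  public-inbox-mda < message.eml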

On Thu, Apr 8, 2021 at 9:14 AM Serge E. Hallyn  wrote:
>
> On Wed, Apr 07, 2021 at 10:59:07PM -0400, Stéphane Graber wrote:
> > Hello,
> >
> > As part of moving some of our services to newer infrastructure,
> > consolidating things where they make sense and in general reducing the
> > amount of time I need to spend maintaining things, the
> > linuxcontainers.org mailing-lists will soon be migrated over to Google
> > Groups.
>
> Not asking you to do this as google groups sounds like the least
> amount of work, but did you consider going to something based
> on the lore.kernel.org software?
>
> I mean meh - we're not using the lists much, but we'd have more
> of a sense of community if we used email more, and lore, by
> letting you download any thread .mbox I think makes that much
> better than dealing with something like mailman.
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] Upcoming migration of linuxcontainers.org mailing-lists

2021-04-07 Thread Stéphane Graber
Hello,

As part of moving some of our services to newer infrastructure,
consolidating things where they make sense and in general reducing the
amount of time I need to spend maintaining things, the
linuxcontainers.org mailing-lists will soon be migrated over to Google
Groups.

linuxcontainers.org has been a Google Workspace domain for a couple of
years now and all e-mails in and out of our mailing-lists were already
routed through Google's SMTP infrastructure, with mailman used as the
mailing-list engine on our end.

As I'm re-deploying all linuxcontainers.org services on new hardware
and upgrading to more recent Linux distributions, I was left with the
following options:
 - Keep lists.linuxcontainers.org running on an older distro to keep mailman2
 - Bite the bullet and go through a full migration to mailman3 (which
for just 3 mailing-lists looks very difficult)
 - Move to an alternative solution

As we're already using Google for all e-mail delivery and the existing
Google Workspace plan I'm using includes the full version of Google
Groups, that seems the easiest approach, completely eliminating any
need for maintenance.

I've already migrated a couple of lists I used to host over to it and
have been able to successfully do it while retaining the entirety of
the mailing-list archive.
I intend to do the same for the linuxcontainers.org lists, effectively
stopping e-mail delivery to the lists, transferring the existing archive
over to Google Groups and then moving all members over.

For existing members, this should be pretty seamless. The main
difference is for new subscribers, where
https://lists.linuxcontainers.org will now send people to Google
Groups for membership management and for access to the archive. To
avoid breaking existing links (including those indexed by Google),
I'll keep a static version of the old mailing-list archives online at
their current URLs so that any link to them will keep working in the
future.


I expect the transition to happen over the next week or so. I'll be
replying to this e-mail once it's all done.

Thanks!

Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxd-p2c

2021-02-22 Thread Stéphane Graber
We automatically build lxd-p2c with every single commit we include and
with every branch we review.
There are a number of people who recently used it successfully too,
though maybe you're hitting some kind of rsync issue.

If you have a Github account, you can get the most recent build
artifact for Linux from
https://github.com/lxc/lxd/actions/runs/589179878

(No idea why Github restricts artifacts to logged in users...)
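
For reference, invocation is roughly the following (from memory, so
double-check --help; the target URL and name are just examples):

  chmod +x lxd-p2c
  ./lxd-p2c https://my-lxd-server:8443 my-container /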

On Mon, Feb 22, 2021 at 1:36 AM Aleksandar Ivanisevic
 wrote:
>
> Hi,
>
> what is the status of lxd-p2c? Is this still maintained? All the precompiled 
> binaries I could find are failing with protocol errors and when I try to 
> build it myself (go get github.com/lxc/lxd/lxd-p2c) it just finishes without 
> errors, but without producing the binary either.
>
> Do I even have a chance to use it to migrate some old centos6 machines, 
> considering the discussion at 
> https://discuss.linuxcontainers.org/t/minimum-requirement-for-lxd-p2c/1687/16 
> especially regarding “kernel too old” messages with a static binary?
>
> regards,
>
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] LXD distrobuilder: what's required for the yaml file

2021-01-17 Thread Stéphane Graber
You'll need to add a new `source` plugin to the distrobuilder codebase
at https://github.com/lxc/distrobuilder. Once that's done, you should
be able to take the standard centos/oracle YAML from
https://github.com/lxc/lxc-ci and use it with the source changed
accordingly.
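
For what it's worth, the part of the YAML you'd end up changing is the
source section, something like this (the downloader name and URL below
are made up, pending the actual plugin):

  source:
    downloader: rocky-http
    url: https://download.example.org/rocky/8/BaseOS/x86_64/os/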

On Tue, Jan 12, 2021 at 3:55 PM Steven G. Spencer
 wrote:
>
> All,
>
> Am volunteering for a project that is doing a 1-for-1 reinvention of Red
> Hat enterprise, similar to what CentOS used to be. The project has no
> build candidate as yet, but I'd like to know what is required for the
> yaml file to create an LXD container using distrobuilder for what will
> be a new OS.
>
> Thanks,
>
> Steven G. Spencer
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxd-agent and Snap package

2020-12-07 Thread Stéphane Graber
That's because we've since added support for virtiofs as an
alternative to 9p, fixing the CentOS issue so long as your qemu
supports virtiofsd.
We obviously ship virtiofsd in the snap package but have no control
over what individual distributions support.

In this case, the agent works when LXD runs from the snap because
virtiofsd is started; with your distro's native package you only get
9p, which then prevents the VM from starting the agent.
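
A quick way to check what each package gives you (paths are the usual
ones, adjust as needed):

  ls /snap/lxd/current/bin/virtiofsd    # shipped inside the snap
  command -v virtiofsd                  # whatever your distro provides, if anything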

On Sat, Nov 28, 2020 at 4:45 PM David Lakatos  wrote:
>
> Hi LXC community,
>
> I read about the situation on multiple forums that CentOS 7 and CentOS 8 LXD 
> VM images currently cannot start the lxd-agent.service due to a possible 
> CentOS kernel bug. It seems that I hit the same issue, but found something 
> really strange in the meantime, which I could not explain. Please help me out 
> if you have any idea why there is a difference between the two scenarios' 
> results:
>
> # id
> uid=0(root) gid=0(root) groups=0(root)
> # uname -a
> Linux dlakatos847-laptop 5.9.10-1-default #1 SMP Mon Nov 23 09:08:45 UTC 2020 
> (b7c3768) x86_64 x86_64 x86_64 GNU/Linux
>
> Scenario A: I use OpenSuSE Tumbleweed OSS repo's LXD package:
>
> # zypper in -y lxd
> ...
> Checking for file conflicts: 
> ...[done]
> (1/2) Installing: lxd-4.8-1.1.x86_64 
> ...[done]
> (2/2) Installing: lxd-bash-completion-4.8-1.1.noarch 
> ...[done]
> # systemctl start lxd.service
> # /usr/bin/lxc version
> Client version: 4.8
> Server version: 4.8
> # /usr/bin/lxd init
> ...
> # /usr/bin/lxc launch images:centos/7/amd64 centos --vm
> Creating centos
> Starting centos
> # /usr/bin/lxc ls
> +--------+---------+-----------------------+-----------------------------------------------+-----------------+-----------+
> |  NAME  |  STATE  |         IPV4          |                     IPV6                      |      TYPE       | SNAPSHOTS |
> +--------+---------+-----------------------+-----------------------------------------------+-----------------+-----------+
> | centos | RUNNING | 10.137.136.143 (eth0) | fd42:3542:bbfa:726c:216:3eff:fe5f:670b (eth0) | VIRTUAL-MACHINE | 0         |
> +--------+---------+-----------------------+-----------------------------------------------+-----------------+-----------+
> # /usr/bin/lxc exec centos bash
> Error: Failed to connect to lxd-agent
>
> I checked, the lxd-agent.service could not start, looks like the same issue 
> as explained by Stéphane:
> https://lists.linuxcontainers.org/pipermail/lxc-users/2020-October/015342.html
>
> Scenario B: I use the Snap package:
>
> # snap install lxd
> lxd 4.8 from Canonical* installed
> # /snap/bin/lxc version
> Client version: 4.8
> Server version: 4.8
> # /snap/bin/lxd init
> ... (use the same bridge lxdbr0) ...
> # /snap/bin/lxc launch images:centos/7/amd64 centos --vm
> Creating centos
> Starting centos
> # /snap/bin/lxc ls
> +--------+---------+----------------------+-----------------------------------------------+-----------------+-----------+
> |  NAME  |  STATE  |         IPV4         |                     IPV6                      |      TYPE       | SNAPSHOTS |
> +--------+---------+----------------------+-----------------------------------------------+-----------------+-----------+
> | centos | RUNNING | 10.137.136.89 (eth0) | fd42:3542:bbfa:726c:216:3eff:fe78:dc48 (eth0) | VIRTUAL-MACHINE | 0         |
> +--------+---------+----------------------+-----------------------------------------------+-----------------+-----------+
> # /snap/bin/lxc exec centos bash
> [root@localhost ~]#
>
> If the lxd-agent.service issue really is a CentOS bug, why does it matter if 
> I use my distro's LXD or the Snap package? Why is it working, when I use the 
> Snap package?
>
> Thank you,
> David Lakatos
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] cluster node with different storage config

2020-10-16 Thread Stéphane Graber
Joining nodes should be empty; they should not have any existing
network or storage pool configured in LXD.
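
For reference, member_config is expected to be a list with one entry per
key; a rough, untested sketch reusing your existing "local" pool name:

  member_config:
  - entity: storage-pool
    name: local
    key: source
    value: rpool/lxd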

On Sun, Oct 4, 2020 at 3:19 AM Aleksandar Ivanisevic
 wrote:
>
> Hi,
>
> I’m trying to join a node to a lxd cluster that has different storage 
> configuration. My cluster uses storage pool ‘local’ in zfs pool ‘ee’ and the 
> new node uses ‘rpool/lxd’. Although the documentation 
> (https://linuxcontainers.org/lxd/docs/master/clustering.html) suggests that 
> should be possible, no matter what I put in the preseed yaml it always either 
> tries to import the zfs pool ‘ee’ or create an already existing pool ‘local'
>
> Does anyone know what is the correct syntax or is it possible at all?
>
> $ lxd init --preseed < /tmp/cluster.yaml
> $ cat /tmp/cluster.yaml
> cluster:
>   enabled: true
>   server_name: ${HOSTNAME%%.*}
>   server_address: ${HOSTNAME%%.*}:8443
>   cluster_address: lxd1:8443
>   cluster_certificate: "-----BEGIN CERTIFICATE-----
> ...
> -----END CERTIFICATE-----
> "
>   cluster_password: "..."
>   member_config:
>
> # no matter what i put after this line it is always the same errors
> # Error: Failed to join cluster: Failed to initialize member: Failed to 
> initialize storage pools and networks: Failed to create storage pool 'local': 
> Storage pool directory "/var/snap/lxd/common/lxd/storage-pools/local" already 
> exists
> # or, if the new member’s local pool is not called ‘local’ then it complains 
> about zpool import ‘ee’ failed.
>
>
>   - entity: storage-pool
> name: cluster
> key: source
> value: "rpool/lxd"
> key: zfs.pool_name
> value: "rpool/lxd"
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Missing lxd-agent on centos/8/cloud

2020-10-16 Thread Stéphane Graber
It's not really missing so much as unable to start because of a kernel issue.

CentOS 8 randomly dropped the required 9p kernel module in an earlier
kernel release, breaking the agent.
We've filed a bug but got no interest in a fast resolution from CentOS
upstream unfortunately...
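
If you want to confirm it from the VM console, something like this should
show whether the kernel ships 9p at all:

  grep -i 9p /boot/config-$(uname -r)
  modinfo 9pnet_virtio 2>/dev/null || echo "9p modules not available"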

Stéphane

On Mon, Oct 12, 2020 at 11:27 AM Sebert, Holger.ext
 wrote:
>
> Hi,
>
> I created a virtual machine from the image `centos/8/cloud` using the 
> following command:
>
> $ lxc launch images:centos/8/cloud centos --vm --profile default 
> --profile vm
>
> My `vm`-profile looks like this:
>
> config:
>   user.user-data: |
> #cloud-config
> ssh_pwauth: yes
>
> users:
>   - name: root
> plain_text_passwd: "root"
> lock_passwd: false
> description: LXD profile for virtual machines
> devices:
>   config:
> source: cloud-init:config
> type: disk
> name: vm
> used_by:
> - /1.0/instances/centos
>
> Installation runs fine, but when I try to execute LXD commands, I get an 
> error message:
>
> $ lxc exec centos -- /bin/bash
> Error: Failed to connect to lxd-agent
>
> It seems like the `lxd-agent` package is missing.
>
> I logged in via `lxc console centos` and searched for a `yum`-package, but 
> did not find
> any:
>
> [root@centos ~]# yum search lxd-agent
> Last metadata expiration check: 0:07:46 ago on Mon 12 Oct 2020 03:16:47 
> PM UTC.
> No matches found.
>
> How do I install the `lxd-agent` and make `lxc` commands from outside the VM 
> work?
>
> Best,
> Holger
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxd-client on 20.04 focal

2020-07-13 Thread Stéphane Graber
If shoving a static binary in your Docker container is an option, it's
pretty simple to build the LXD client alone as a static binary.

Install "golang-go" and "git", then run "go get
github.com/lxc/lxd/lxc". This will take a little while as it downloads
the needed source trees.
Once done, you'll find a "lxc" binary in ~/go/bin/ which you can start
using as a standalone client.
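
Roughly (package names are the Debian/Ubuntu ones; CGO_ENABLED=0 is only
needed if you want it fully static):

  apt install golang-go git
  CGO_ENABLED=0 go get github.com/lxc/lxd/lxc
  ~/go/bin/lxc --version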

On Sun, Jul 12, 2020 at 10:45 PM Fajar A. Nugraha  wrote:
>
> On Mon, Jul 13, 2020 at 7:50 AM Logan V.  wrote:
> >
> > Typically I use lxd-client in jobs that run in docker containers, so the 
> > container has the lxd-client apt package installed. Now it seems that the 
> > lxd and lxd-client are just shims for the snap.
> >
> > Since it seems like installing snaps in docker (ie. environments without 
> > snapd running) is very difficult, I'm curious if any consideration has been 
> > given to how lxd-client can be installed aside from snaps? Are there any 
> > ppas or something I should be using instead?
>
> What are you using lxd client for?
>
> If it's as simple as "creating a container" or "running lxc shell/lxc
> exec", IIRC old versions of lxd client (3.03 should still be available
> on previous distros) can connect to newer lxd server as well
>
> --
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] LXD - Production Hardware Guide

2020-06-06 Thread Stéphane Graber
ZFS very much prefers seeing the individual physical disks.
When dealing directly with full disks, it can do quite a bit to
monitor them and handle issues.
If it's placed behind RAID, whether hardware or software backed, all
that information disappears and it doesn't really know whether to
retry an operation or consider the disk to be bad.

The same is true if you ever end up using Ceph, where you want a 1:1
mapping between physical disks and OSDs. So in general I'd recommend
against hardware RAID at this point.
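
In practice that means handing ZFS the raw disks and letting it handle
redundancy itself, e.g. (disk paths below are placeholders):

  zpool create tank mirror /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2
  # then point LXD at the existing "tank" pool during "lxd init"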

Stéphane

On Fri, Jun 5, 2020 at 4:59 PM Steven Spencer  wrote:
>
> Andrey and List,
>
> Thanks so much for your response, and we understand all of that. We know that 
> if we have 3 containers the server configuration is going to be different 
> than if we have 30 or 100, and that we will have to size RAM, Processors, 
> etc, accordingly. What I think we are more interested in is: If we use ZFS, 
> is there a recommended way to use it? Should we use RAID of any kind? If so, 
> should it be hardware or software RAID? We realize, too, that we will need to 
> size our drives according to how much space we will actually need for the 
> number of containers we will be running. Really it's just about the 
> underlying file system for the containers. It seems like there should be a 
> basic white paper or something with just guidelines on best practices for 
> LXD. That would really help us. We have found the LXD documentation and we 
> have actually used these docs. We've even used ZFS under LXD on our first 
> iteration of this project about 3 years ago. We are now looking to do this 
> again. The first time was mostly a success. Recenly, we had the main LXD 
> server die and for no apparent reason (hardware / software / memory - the 
> logs don't really give us much). Our snapshot server was the savior, but now 
> we need to repeat our earlier process, and if we made mistakes, we would like 
> to fix them in the process.
>
> Thanks again for the response, any further information would be helpful.
>
> Steven G. Spencer
>
>
>
> On Fri, Jun 5, 2020 at 2:20 PM Andrey Repin  wrote:
>>
>> Greetings, Steven Spencer!
>>
>> > Is there a good link to use for specifying hardware requirements for an 
>> > LXD dedicated server?
>>
>> There can't be specific requirements, it all depends on what you want to do,
>> how many containers to run, etc.
>>
>>
>> --
>> With best regards,
>> Andrey Repin
>> Friday, June 5, 2020 22:05:54
>>
>> Sorry for my terrible english...
>>
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] /etc/sysctl.conf, /etc/security/limits.conf for LXD from snap?

2020-05-31 Thread Stéphane Graber
limits.conf, not that much: LXD has logic to try to bump some of those
limits on startup.
sysctl.conf is still very relevant in production though.
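
As an illustration, a couple of the keys from that document (see the link
above for the authoritative list and values):

  fs.inotify.max_user_instances = 1048576
  fs.inotify.max_user_watches = 1048576
  vm.max_map_count = 262144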

On Wed, May 27, 2020 at 11:06 PM Tomasz Chmielewski  wrote:
>
> Are /etc/sysctl.conf and /etc/security/limits.conf changes documented on
> https://github.com/lxc/lxd/blob/master/doc/production-setup.md still
> relevant for LXD installed from snap (on Ubuntu 20.04)?
>
>
> Tomasz
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxc-create -t download

2020-04-26 Thread Stéphane Graber
We're planning to move away from GPG for lxc-download, or at least
move to opt-in GPG, so this kind of issue should go away for everyone
soon.
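
In the meantime, the workaround quoted below as a one-liner:

  DOWNLOAD_KEYSERVER=keyserver.ubuntu.com lxc-create -t download -n u1 -- --dist debian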

On Sun, Apr 26, 2020 at 12:35 AM Saint Michael  wrote:
>
> It does work, thanks. Is it possible to consider this a bug and add it to the 
> next update?
>
>
> On Sat, Apr 25, 2020 at 4:08 PM Stéphane Graber  wrote:
>>
>> Possibly GPG not being happy, try to run: export
>> DOWNLOAD_KEYSERVER=keyserver.ubuntu.com
>> And then run your lxc-create
>>
>> On Fri, Apr 24, 2020 at 4:52 PM Saint Michael  wrote:
>> >
>> > this works in an Ubuntu 18.04 host
>> > lxc-create -t download -n u1 -- --dist ubuntu
>> > but this does not
>> > lxc-create -t download -n u1 -- --dist debian
>> > Setting up the GPG keyring
>> > ERROR: Unable to fetch GPG key from keyserver
>> > lxc-create: u1: lxccontainer.c: create_run_template: 1626 Failed to create 
>> > container from template
>> > lxc-create: u1: tools/lxc_create.c: main: 319 Failed to create container u1
>> >
>> > what am I missing here?
>> > ___
>> > lxc-users mailing list
>> > lxc-users@lists.linuxcontainers.org
>> > http://lists.linuxcontainers.org/listinfo/lxc-users
>>
>>
>>
>> --
>> Stéphane
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxc-create -t download

2020-04-25 Thread Stéphane Graber
Possibly GPG not being happy, try to run: export
DOWNLOAD_KEYSERVER=keyserver.ubuntu.com
And then run your lxc-create

On Fri, Apr 24, 2020 at 4:52 PM Saint Michael  wrote:
>
> this works in an Ubuntu 18.04 host
> lxc-create -t download -n u1 -- --dist ubuntu
> but this does not
> lxc-create -t download -n u1 -- --dist debian
> Setting up the GPG keyring
> ERROR: Unable to fetch GPG key from keyserver
> lxc-create: u1: lxccontainer.c: create_run_template: 1626 Failed to create 
> container from template
> lxc-create: u1: tools/lxc_create.c: main: 319 Failed to create container u1
>
> what am I missing here?
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] How to redo lxd init

2020-04-22 Thread Stéphane Graber
There is this script, which attempts to wipe everything clean:
https://github.com/lxc/lxd/blob/master/scripts/empty-lxd.sh

If that works properly, then you could run "lxd init" again.

An alternative is to remove the snap, reboot and clear the beginning
of any disk/partition that LXD was using directly (if any).
Then install the snap again and run "lxd init".
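
For that second approach, something along these lines (the device name is
an example and this destroys data, so double-check before running):

  snap remove --purge lxd
  reboot
  wipefs -a /dev/sdX    # or: dd if=/dev/zero of=/dev/sdX bs=1M count=16
  snap install lxd
  lxd init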

On Wed, Apr 22, 2020 at 6:15 AM Kees Bakker  wrote:
>
> Hi,
>
> I'm installing a new server with Ubuntu 20.04. Not everything
> was going as planned, so I want to redo lxd init.
>
> What is the best method to completely wipe LXD's state, so that
> I can redo it from scratch?
>
> (( I was a bit surprised that LXD can now only run from snap, but
> I guess that's progress. ))
> --
> Kees
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] Announcing LXD, LXC and LXCFS 4.0 LTS

2020-03-31 Thread Stéphane Graber
Hello,

The LXD, LXC and LXCFS teams are very proud to announce their 4.0 LTS releases!

LTS versions of all 3 projects are released every 2 years, starting 6
years ago. Those LTS versions benefit from 5 years of security and
bugfix support from upstream and are ideal for production environments.

# LXD
LXD is our system container and virtual machine manager. It's a Go
application based on LXC and QEMU. It can run several thousand
containers on a single machine, mix in some virtual machines, offers a
simple REST API and can be easily clustered to handle large scale
deployments.

It takes seconds to set up on a laptop or a cloud instance, can run just
about any Linux distribution and supports a variety of resource limits
and device passthrough. It's used as the basis for Linux applications on
Chromebooks and is behind Travis-CI's recent Arm, IBM Power and IBM Z
testing capability.

The main highlights for this release are (compared with 3.0):

 - Support for running virtual machines
 - Introduction of projects (and their limits, restrictions and features)
 - System call interception for containers
 - Backup/restore of instances (as standalone tarball)
 - Automated snapshots (and expiration) for instances and storage volumes
 - Support for "shiftfs" for instances and attached disks
 - New "ipvlan" and "routed" NIC types
 - CephFS as a custom volume storage backend
 - Image replication and multi-architecture support in clusters
 - Role based access control (through Canonical RBAC)
 - Full host hardware reporting through the much extended resources API
 - CGroup2 support
 - Nftables support

4.0.0 release announcement:
https://discuss.linuxcontainers.org/t/lxd-4-0-lts-has-been-released/7231
Try LXD online: https://linuxcontainers.org/lxd/try-it/
Available images: https://images.linuxcontainers.org

# LXC
LXC is our container runtime. It's capable of running both system
containers and application containers (OCI). It's written as a C library
and set of tools with bindings available for a large number of
languages, including go-lxc as used by LXD.

The main highlights for this release are (compared with 3.0):

 - CGroup2 support
 - Infrastructure for system call interception
 - PIDfd support
 - Improved network handling
 - Hardening and refactoring throughout the codebase, fixing very many issues

4.0.0 release announcement:
https://discuss.linuxcontainers.org/t/lxc-4-0-lts-has-been-released/7182

# LXCFS
LXCFS is our FUSE filesystem. It's a daemon written in C which acts as
an overlay usable inside containers to query the available host
resources with cgroup constraints applied. It provides a variety of
overlay files for /proc and /sys as well as a fully virtualized view of
cgroupfs for distributions lacking cgroup namespacing support.

The main highlights for this release are (compared with 3.0):

 - CGroup2 support
 - /proc/cpuinfo and /proc/stat based on cpu shares (--enable-cfs option)
 - /proc/loadavg virtualization (--enable-loadavg option)
 - pidfd supported process tracking (--enable-pidfd option)
 - Hardening of the codebase
 - Improved self re-execution logic with failsafe
 - More comprehensive testsuite (run on all architectures for all changes)

4.0.0 release announcement:
https://discuss.linuxcontainers.org/t/lxcfs-4-0-lts-has-been-released/7031
4.0.1 release announcement:
https://discuss.linuxcontainers.org/t/lxcfs-4-0-1-lts-has-been-released/7130

-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] custom style for lxd.readthedocs.io for Firefox

2020-03-16 Thread Stéphane Graber
How does https://linuxcontainers.org/lxd/docs/master/ look?

We're moving away from Read the Docs in favor of self-hosted docs, so I'd
prefer fixing anything that needs fixing over there.

Stéphane

On Sun, Mar 15, 2020 at 3:01 PM Mike Wright  wrote:
>
> I find using the online docs at "lxd.readthedocs.io" tedious because the
> tables that show all the configuration options are narrow with the
> "Description" column being particularly narrow.  This causes the
> description to wrap into several lines and reduces the rows of
> configuration options that can be displayed on the screen at one time.
>
> By adding a user style to Firefox you can make the tables as wide as the
> screen.  Call it an accessibility feature for people who don't like
> scrolling.  The two links below show before and after views of a page.
>
>Original https://pasteboard.co/IZfbCz2.png
>Styled   https://pasteboard.co/IZfc9jq.png
>
> 1) start firefox
>
>we have to enable user styles
>go to about:config and search for the following key:
>
>  toolkit.legacyUserProfileCustomizations.stylesheets and make it true
>
> 2) change directory to the profile currently in use
>cd ~/.mozilla/firefox
>cd $(ls -rt |tail -1)
>mkdir chrome (if it doesn't exist)
>cd chrome
>
> 3) add the following lines to userContent.css:
>
>@-moz-document domain("lxd.readthedocs.io"){
> .wy-nav-content { max-width: 100% !important; }
>}
>
> 4) the following advice is provided because, at least on ff-73, having
> the namespace directive causes user styles not to work:
>
>if userContent.css previously existed and it contains a "namespace"
>directive comment it out using css style commenting /* ... */ or
>just delete it (up to you)
>
> 5) check if it works:
>
>restart firefox and go to
>
>  "https://lxd.readthedocs.io/en/latest/configuration/networks;
>
>scroll down until you find a table;  widen your browser window.  you
>should see the table expand with it.
>
> Enjoy,
> Mike Wright
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxd VMs - running ARM VM on amd64 host?

2020-03-14 Thread Stéphane Graber
That's not planned at this time. It would also give us a few headaches when
thinking about placement in a multi-architecture cluster.

Stéphane

On Fri., Mar. 13, 2020, 12:18 p.m. Tomasz Chmielewski, 
wrote:

> There is this image:
>
> | e/arm64 (2 more)   | ea0b0d18e384 | yes| ubuntu 19.10 arm64
> (release) (20200307) | aarch64  | VIRTUAL-MACHINE | 481.19MB
> | Mar 7, 2020 at 12:00am (UTC)  |
>
>
> Let's try to run it on a amd64 host:
>
> $ lxc launch --vm ubuntu:ea0b0d18e384 arm
> Creating arm
> Error: Failed instance creation: Create instance: Requested architecture
> isn't supported by this host
>
>
> Obviously this won't work - the snap doesn't even have a binary for it:
>
> $ ls /snap/lxd/13704/bin/qemu-system-*
> /snap/lxd/13704/bin/qemu-system-x86_64
>
>
> Is it planned to support foreign architecture VMs at some point (i.e.
> ARM VM on amd64 host)?
>
> I understand it would be quite slow, but in general, it works if you
> fiddle with qemu-system-arm.
>
>
> Tomasz Chmielewski
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] How to attach a volume to a container via the REST API

2020-01-15 Thread Stéphane Graber
In this case the "source" key must be set to the name of your custom volume
and the "pool" key set to the name of your pool.

On Wed, Jan 15, 2020 at 2:41 PM alexandros giavaras 
wrote:

> Stephane,
>
> Thanks a lot for your reply. I have already came up with the dict
> specification but thanks a lot for confirming me.
> Most likely I was not so clear to what I was asking. So let's put it this
> way. I have a storage pool named Pool and I have associated
> a volume with it let's call it Volume. Now I am creating the container and
> I want to attach Volume to it. Here is my non-working dict specification
>
> "MyDataVolume:{
> "path":"/opt/data"
> "type":"disk"
> "source":"/Volume"
> }
>
> pylxd.exceptions.LXDAPIException: Create container: Invalid devices:
> Missing source '/Volume' for disk 'MyDataVolume'.
>
> Thanks in advance.
> Best
> Alex
>
> On Wed, Jan 15, 2020 at 5:04 PM Stéphane Graber 
> wrote:
>
>> Same in both cases, such volumes are disk devices attached to the
>> container.
>> At creation time, you'd add such a device in the `devices` dict sent
>> through the POST request.
>> After creation time, you'd fetch the current config using GET, add the
>> device to devices and then PUT it back.
>>
>> On Wed, Jan 15, 2020 at 6:57 AM alexandros giavaras 
>> wrote:
>>
>>> Hi all,
>>>
>>> I was wondering about two related issues.
>>>
>>> The first one is this. I am trying to create a container via the REST
>>> API. I want to attach to it
>>> an already existing storage volume from a storage pool. Could anyone
>>> please indicate how I should fill in the input json because I don't seem
>>> able to find anything useful.
>>>
>>> The second issue is this. I have a container already running and I want
>>> to attach to it
>>> an already existing volume. How do I do this via the REST API?
>>> Thanks in advance.
>>> Best
>>> Alex
>>> ___
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>>
>>
>>
>> --
>> Stéphane
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] How to attach a volume to a container via the REST API

2020-01-15 Thread Stéphane Graber
Same in both cases: such volumes are disk devices attached to the container.
At creation time, you'd add such a device in the `devices` dict sent
through the POST request.
After creation time, you'd fetch the current config using GET, add the
device to devices and then PUT it back.
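
As a rough sketch of the creation case, the POST body would include a
devices dict like this (names and image alias are illustrative):

  {
    "name": "c1",
    "source": {"type": "image", "alias": "ubuntu/18.04"},
    "devices": {
      "data": {"type": "disk", "path": "/opt/data", "pool": "default", "source": "myvolume"}
    }
  }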

On Wed, Jan 15, 2020 at 6:57 AM alexandros giavaras 
wrote:

> Hi all,
>
> I was wondering about two related issues.
>
> The first one is this. I am trying to create a container via the REST API.
> I want to attach to it
> an already existing storage volume from a storage pool. Could anyone
> please indicate how I should fill in the input json because I don't seem
> able to find anything useful.
>
> The second issue is this. I have a container already running and I want to
> attach to it
> an already existing volume. How do I do this via the REST API?
> Thanks in advance.
> Best
> Alex
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] ArchLinux container network problems with systemd 244 (systemd 243 works ok)

2020-01-07 Thread Stéphane Graber
Yeah, systemd 244.1 is causing this issue because of a change in the
container-detection logic within systemd.

There may be some way to put a systemd override around systemd-networkd to
have that service run in a mount namespace that serves /sys read-only;
this would cause systemd to revert to the old working behavior.

The long term fix is a kernel change so that udevd can behave properly such
that networkd also behaves as expected. We're looking into this now but it
will take some time before that's ready.
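
As an untested sketch of that override idea (run inside the container):

  mkdir -p /etc/systemd/system/systemd-networkd.service.d
  printf '[Service]\nBindReadOnlyPaths=/sys\n' > /etc/systemd/system/systemd-networkd.service.d/lxc.conf
  systemctl daemon-reload
  systemctl restart systemd-networkd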

On Wed, Jan 1, 2020 at 10:36 AM John  wrote:

> Hello,
>
> Just reporting this problem I'm experiencing with Arch Linux on LXD.
>
> Create container using "images:archlinux/current/amd64" and with a
> network interface connected to a bridge.
>
> Configure /etc/systemd/network/mynetif.network to configure by DHCP:
>
> [Match]
> Name=mynetif
>
> [Network]
> DHCP=ipv4
>
> Start network
>
> # systemctl enable --now systemd-networkd
>
> Observe network stuck pending
>
> # networkctl
> IDX LINK    TYPE     OPERATIONAL SETUP
>   1 lo      loopback carrier     unmanaged
> 335 mynetif ether    routable    pending
>
> Confirm systemd version
>
> # systemctl --version
>
> systemd 244 (244.1-1-arch)
> +PAM +AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP
> +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS
> +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
>
> Install systemd 243.78.2-arch
> (download from https://archive.archlinux.org/packages/s/systemd)
>
> (from outside container)
> # lxc file push systemd-243.78-2-x86_64.pkg.tar.xz mycontainer/root
>
> (then inside container)
> # pacman -U systemd-243.78-2-x86_64.pkg.tar.xz
>
> Confirm systemd version
>
> # systemctl --version
> systemd 243 (243.78-2-arch)
> +PAM +AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP
> +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS
> +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
>
> Restart systemd-networkd
>
> # systemctl restart systemd-networkd
>
> Observer network configured successfully
>
> # networkctl
>
IDX LINK    TYPE     OPERATIONAL SETUP
>   1 lo      loopback carrier     unmanaged
> 335 mynetif ether    routable    configured
>
> I did look at the system-networkd journal and there was nothing there to
> indicate a problem. If I manually configure the interface (using ip)
> then it works (so the network layer is ok, it's just systemd starting
> things that's broken).
>
> Anyone else observe this?
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] new tag 3.2.2?

2019-12-30 Thread Stéphane Graber
3.2.1 was released due to a bad release of 3.2, but we don't normally do
point releases on non-LTS releases.
We may still do a 3.3 or, more likely, will get 4.0 ready early in the
year.

Stéphane

On Mon, Dec 23, 2019 at 2:59 AM Harald Dunkel 
wrote:

> Hi folks,
>
> I see big progress on the lxc master branch every day, but I wonder if
> there is a schedule for a tagged version 3.2.2? Something that could
> be used in production?
>
>
> Regards and best season's greetings
>
> Harri
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Failed to import LXD container tar.gz in unprivileged container (nested container)

2019-11-22 Thread Stéphane Graber
Ah, could be that those images ship with stuff in /dev then.
As I said, your best bet is to modify the tarball and just drop those
entries.

It's only a problem because you're trying to import the tarball inside an
unprivileged container where such device nodes cannot be created.
If the container had been exported from within an unprivileged container or
if it was restored on a host or inside a privileged container, this
wouldn't happen.
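
Roughly, with GNU tar (check the member paths with "tar -tf" first, they
may be prefixed differently):

  gunzip c1-unprivileged.tar.gz
  tar --delete -f c1-unprivileged.tar $(tar -tf c1-unprivileged.tar | grep 'rootfs/dev/.')
  gzip c1-unprivileged.tar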

On Fri, Nov 22, 2019 at 12:18 PM Chris Han  wrote:

> That container was started from a clean image from the "ubuntu" remote.
> lxc launch ubuntu:18.04 c1
>
> Originally the container was started in a Btrfs storage pool. But after
> that I copy the container to a Dir storage pool and use the later version.
> Will this cause the /dev/xx problem?
>
> On Sat, Nov 23, 2019 at 1:07 AM Stéphane Graber 
> wrote:
>
>> No, switching between privileged and unprivileged wouldn't have cause
>> dev/ to get populated.
>> My guess is that you probably had an image that contained those files
>> when it shouldn't have in the first place.
>>
>> On Fri, Nov 22, 2019 at 11:45 AM Chris Han  wrote:
>>
>>> Originally the container was started as a privileged container
>>> with security.privileged="true". But after that I have removed
>>> the security.privileged configuration and restarted the container. Is this
>>> the root cause of the problem?
>>>
>>> May I know what is the correct steps to change a privileged container to
>>> an unprivileged container?
>>>
>>> Thanks for your reply.
>>>
>>> On Sat, Nov 23, 2019 at 12:28 AM Stéphane Graber 
>>> wrote:
>>>
>>>> Hmm, not sure why you have those devices in this container in the first
>>>> place, normally /dev is left empty and mounted as tmpfs in the container.
>>>> You could likely just edit the tarball to remove the content of dev/
>>>> and then import it just fine.
>>>>
>>>> On Fri, Nov 22, 2019 at 2:19 AM Chris Han 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have an unprivileged LXD container, c1, running in a physical host.
>>>>> I have exported this container to tar.gz:
>>>>>
>>>>> lxc export c1-unprivileged c1-unprivileged.tar.gz
>>>>>
>>>>>
>>>>> I have created another unprivileged LXD container, c2, with settings
>>>>> for nested containers. Inside the c2 container, I am able to launch a
>>>>> nested unprivileged LXD container, c3. The c3 container is working fine.
>>>>>
>>>>> lxc launch ubuntu:18.04 c3-unprivileged-nested
>>>>>
>>>>>
>>>>> However, when I try to import the c1 tar.gz file inside c2 to create a
>>>>> nested container, it shows the following error message:
>>>>>
>>>>> lxc import c1-unprivileged.tar.gz
>>>>>
>>>>> tar: rootfs/dev/zero: Cannot mknod: Operation not permitted
>>>>> tar: rootfs/dev/random: Cannot mknod: Operation not permitted
>>>>> tar: rootfs/dev/tty: Cannot mknod: Operation not permitted
>>>>> tar: rootfs/dev/null: Cannot mknod: Operation not permitted
>>>>> tar: rootfs/dev/full: Cannot mknod: Operation not permitted
>>>>> tar: rootfs/dev/urandom: Cannot mknod: Operation not permitted
>>>>>
>>>>> I am able to import the c1 tar.gz file in a physical host, but unable
>>>>> to import it in an unprivileged container (to create a nested container).
>>>>> The LXD network and storage settings in the physical host and the c2
>>>>> container are exactly the same.
>>>>>
>>>>> How to import the c1 tar.gz in the c2 unprivileged container?
>>>>>
>>>>> ___
>>>>> lxc-users mailing list
>>>>> lxc-users@lists.linuxcontainers.org
>>>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>>>>
>>>>
>>>>
>>>> --
>>>> Stéphane
>>>> ___
>>>> lxc-users mailing list
>>>> lxc-users@lists.linuxcontainers.org
>>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>>>
>>> ___
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>>
>>
>>
>> --
>> Stéphane
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Failed to import LXD container tar.gz in unprivileged container (nested container)

2019-11-22 Thread Stéphane Graber
No, switching between privileged and unprivileged wouldn't have caused
dev/ to get populated.
My guess is that you probably had an image that contained those files when
it shouldn't have in the first place.

On Fri, Nov 22, 2019 at 11:45 AM Chris Han  wrote:

> Originally the container was started as a privileged container
> with security.privileged="true". But after that I have removed
> the security.privileged configuration and restarted the container. Is this
> the root cause of the problem?
>
> May I know what is the correct steps to change a privileged container to
> an unprivileged container?
>
> Thanks for your reply.
>
> On Sat, Nov 23, 2019 at 12:28 AM Stéphane Graber 
> wrote:
>
>> Hmm, not sure why you have those devices in this container in the first
>> place, normally /dev is left empty and mounted as tmpfs in the container.
>> You could likely just edit the tarball to remove the content of dev/ and
>> then import it just fine.
>>
>> On Fri, Nov 22, 2019 at 2:19 AM Chris Han  wrote:
>>
>>> Hi,
>>>
>>> I have an unprivileged LXD container, c1, running in a physical host. I
>>> have exported this container to tar.gz:
>>>
>>> lxc export c1-unprivileged c1-unprivileged.tar.gz
>>>
>>>
>>> I have created another unprivileged LXD container, c2, with settings for
>>> nested containers. Inside the c2 container, I am able to launch a
>>> nested unprivileged LXD container, c3. The c3 container is working fine.
>>>
>>> lxc launch ubuntu:18.04 c3-unprivileged-nested
>>>
>>>
>>> However, when I try to import the c1 tar.gz file inside c2 to create a
>>> nested container, it shows the following error message:
>>>
>>> lxc import c1-unprivileged.tar.gz
>>>
>>> tar: rootfs/dev/zero: Cannot mknod: Operation not permitted
>>> tar: rootfs/dev/random: Cannot mknod: Operation not permitted
>>> tar: rootfs/dev/tty: Cannot mknod: Operation not permitted
>>> tar: rootfs/dev/null: Cannot mknod: Operation not permitted
>>> tar: rootfs/dev/full: Cannot mknod: Operation not permitted
>>> tar: rootfs/dev/urandom: Cannot mknod: Operation not permitted
>>>
>>> I am able to import the c1 tar.gz file in a physical host, but unable to
>>> import it in an unprivileged container (to create a nested container). The
>>> LXD network and storage settings in the physical host and the c2 container
>>> are exactly the same.
>>>
>>> How to import the c1 tar.gz in the c2 unprivileged container?
>>>
>>> ___
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>>
>>
>>
>> --
>> Stéphane
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Failed to import LXD container tar.gz in unprivileged container (nested container)

2019-11-22 Thread Stéphane Graber
Hmm, not sure why you have those devices in this container in the first
place; normally /dev is left empty and mounted as tmpfs in the container.
You could likely just edit the tarball to remove the content of dev/ and
then import it just fine.

On Fri, Nov 22, 2019 at 2:19 AM Chris Han  wrote:

> Hi,
>
> I have an unprivileged LXD container, c1, running in a physical host. I
> have exported this container to tar.gz:
>
> lxc export c1-unprivileged c1-unprivileged.tar.gz
>
>
> I have created another unprivileged LXD container, c2, with settings for
> nested containers. Inside the c2 container, I am able to launch a
> nested unprivileged LXD container, c3. The c3 container is working fine.
>
> lxc launch ubuntu:18.04 c3-unprivileged-nested
>
>
> However, when I try to import the c1 tar.gz file inside c2 to create a
> nested container, it shows the following error message:
>
> lxc import c1-unprivileged.tar.gz
>
> tar: rootfs/dev/zero: Cannot mknod: Operation not permitted
> tar: rootfs/dev/random: Cannot mknod: Operation not permitted
> tar: rootfs/dev/tty: Cannot mknod: Operation not permitted
> tar: rootfs/dev/null: Cannot mknod: Operation not permitted
> tar: rootfs/dev/full: Cannot mknod: Operation not permitted
> tar: rootfs/dev/urandom: Cannot mknod: Operation not permitted
>
> I am able to import the c1 tar.gz file in a physical host, but unable to
> import it in an unprivileged container (to create a nested container). The
> LXD network and storage settings in the physical host and the c2 container
> are exactly the same.
>
> How to import the c1 tar.gz in the c2 unprivileged container?
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] 10min lxd shutdown penalty

2019-10-31 Thread Stéphane Graber
https://github.com/lxc/lxd-pkg-snap/issues/39

On Thu, Oct 31, 2019 at 10:27 AM Harald Dunkel 
wrote:

> Hi folks,
>
> apparently lxd doesn't properly terminate at shutdown/reboot time,
> even though there are no containers installed. The shutdown
> procedure is delayed for 10 minutes. Last words:
>
> A stop job is running for Service for snap application lxd.daemon
>
> This is *highly* painful.
>
> Platform is Debian 10. Is there a similar delay on Ubuntu 19.10
> or others? Any reasonable way to avoid the 10 minute penalty for
> rebooting? Reducing the timeout would be considered as cheating.
> ;-)
>
> Of course I found https://github.com/lxc/lxd/issues/4277, but the
> issue was not resolved. Today snap is the only supported way to
> install lxd binaries, AFAIU, so I would highly appreciate any helpful
> comment.
>
> Harri
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] suspicious output for "lxc profile device add --help"

2019-10-24 Thread Stéphane Graber
That's caused by us using the same functions to handle both config and
profiles, avoiding code duplication.
The downside was that the Examples text was also shared.

I've sent a PR which duplicates the text with the right name:
https://github.com/lxc/lxd/pull/6349

On Thu, Oct 24, 2019 at 8:30 AM Harald Dunkel 
wrote:

> Hi folks,
>
> this looks weird:
>
>
> # lxc profile device add --help
> Description:
>Add devices to containers or profiles
>
> Usage:
>    lxc profile device add [<remote>:]<profile> <device> <type> [key=value...] [flags]
>
> Examples:
>    lxc config device add [<remote>:]container1 <device> disk source=/share/c1 path=opt
>Will mount the host's /share/c1 onto /opt in the container.
> :
> :
>
>
> Note the "lxc config" instead of "lxc profile". Probably copy
> and then forgotten.
>
> lxd is version 3.18
>
>
> Regards
> Harri
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxc config edit $container - how to change the editor (defaults to vim.tiny)?

2019-10-13 Thread Stéphane Graber
Oh and I forgot to mention, you can also use the longer approach of doing:

 lxc config show $container > config.yaml
 <your editor> config.yaml
 lxc config edit $container < config.yaml

On Sun, Oct 13, 2019 at 12:04 PM Stéphane Graber 
wrote:

> Because snaps must be self-contained, we actually have to ship the text
> editors inside the snap...
> Shipping the full build of vim isn't practical as it's linking against
> python3, pulling all of python along with it.
>
> So right now the snap includes both vim.tiny and nano, you can choose
> which you want through the EDITOR env variable.
> We've also very recently fixed support for .vimrc so if you want to try to
> configure vim.tiny to be more usable, you can write a vimrc for it in
> ~/snap/lxd/current/.vimrc
>
> Stéphane
>
> On Sun, Oct 13, 2019 at 11:13 AM Tomasz Chmielewski 
> wrote:
>
>> In the last week or two, my lxd servers installed from snap start
>> "vim.tiny" when I use "lxc config edit $container".
>>
>> It used to use vim before (I think - vim.tiny has arrow keys messed up,
>> and they used to work as expected before).
>>
>> How to change the editor to something else, i.e. proper/full vim?
>>
>>
>> Tomasz Chmielewski
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>>
>
>
> --
> Stéphane
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxc config edit $container - how to change the editor (defaults to vim.tiny)?

2019-10-13 Thread Stéphane Graber
Because snaps must be self-contained, we actually have to ship the text
editors inside the snap...
Shipping the full build of vim isn't practical as it's linking against
python3, pulling all of python along with it.

So right now the snap includes both vim.tiny and nano, you can choose which
you want through the EDITOR env variable.
We've also very recently fixed support for .vimrc so if you want to try to
configure vim.tiny to be more usable, you can write a vimrc for it in
~/snap/lxd/current/.vimrc
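
So for example, to use nano for a one-off edit:

  EDITOR=nano lxc config edit $container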

Stéphane

On Sun, Oct 13, 2019 at 11:13 AM Tomasz Chmielewski  wrote:

> In the last week or two, my lxd servers installed from snap start
> "vim.tiny" when I use "lxc config edit $container".
>
> It used to use vim before (I think - vim.tiny has arrow keys messed up,
> and they used to work as expected before).
>
> How to change the editor to something else, i.e. proper/full vim?
>
>
> Tomasz Chmielewski
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Error: websocket: close 1006 (abnormal closure): unexpected EOF

2019-09-20 Thread Stéphane Graber
No.

A refresh needs to restart the LXD API server (lxd daemon); the "lxc exec"
session is done through websockets served by that API server.

In theory we could have LXD detect existing exec sessions and hold the
restart, but that's not a great option either: it would effectively cause
clusters to hang, and on standalone systems it would mean that a single
background exec session holds back security and bugfix updates, which
doesn't seem good either.

One thing I'd like to improve is the error message you get when it happens,
and ideally have the API send a control message a few seconds before the
session goes down.
It's probably reasonable for us to hold the restart for 10s and send that
notification, which would allow some API clients to cleanly interrupt what
they're doing. The issue with this is that there is no good way to show
that notice to the user, as we don't want to mess too much with what you
get on screen during an exec session.

Anyway, that's probably as much as we're likely to do about this: send a
notification and improve errors so our API clients can be smarter about
it.

Stéphane

On Fri, Sep 20, 2019 at 8:31 AM Tomasz Chmielewski  wrote:

> On 2019-09-20 12:23, Fajar A. Nugraha wrote:
>
> > If you're asking 'how to keep "lxc exec session running when lxd is
> > restarted", then it's not possible.
>
> Yes, I guess that's what I'm asking about!
>
> Is there a chance it will be possible in the future? With some changes
> to lxd restart model maybe? Or how lxc exec sessions are handled?
>
>
> Tomasz Chmielewski
> https://lxadm.com
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>


-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxcfs segfaults since around 2019-07-23, containers malfunction

2019-07-23 Thread Stéphane Graber
This has now been resolved in the stable snap.

Affected users should:
 - snap refresh lxd
 - ps aux | grep lxcfs (to confirm lxcfs is now running)
 - lxc restart <container>
 - If a container restart isn't an option, then "grep lxcfs
/proc/mounts" inside the container and proceed to "umount" all of
those
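
For that last case, a rough one-liner (run inside the affected container):

  for m in $(grep lxcfs /proc/mounts | awk '{print $2}'); do umount "$m"; done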

More details on this issue can be found here:
https://github.com/lxc/lxcfs/issues/295

On Tue, Jul 23, 2019 at 9:44 AM Christian Brauner  wrote:
>
> On Tue, Jul 23, 2019 at 01:31:12PM +0200, Tomasz Chmielewski wrote:
> > Since around 2019-07-23, lxcfs segfaults randomly on Ubuntu 18.04 servers
> > with LXD from snap:
> >
> > lxcfs[1424]: segfault at 0 ip 7f518f5e4326 sp 7f519da1f9a0 error 4
> > in liblxcfs.so[7f518f5d8000+1a000]
> >
> >
> > As a result, containers malfunction. If a container is stopped, then started
> > again - everything works well, however - after a few hours, it happens
> > again.
> >
> > root@uni01:~# free
> > Error: /proc must be mounted
> >   To mount /proc at boot you need an /etc/fstab line like:
> >   proc   /proc   procdefaults
> >   In the meantime, run "mount proc /proc -t proc"
> >
> > root@uni01:~# uptime
> > Error: /proc must be mounted
> >   To mount /proc at boot you need an /etc/fstab line like:
> >   proc   /proc   procdefaults
> >   In the meantime, run "mount proc /proc -t proc"
> >
> >
> > root@uni01:~# ls /proc
> > ls: cannot access '/proc/stat': Transport endpoint is not connected
> > ls: cannot access '/proc/swaps': Transport endpoint is not connected
> > ls: cannot access '/proc/uptime': Transport endpoint is not connected
> > ls: cannot access '/proc/cpuinfo': Transport endpoint is not connected
> > ls: cannot access '/proc/meminfo': Transport endpoint is not connected
> > ls: cannot access '/proc/diskstats': Transport endpoint is not connected
> > (...)
> >
> > Is it a known issue? I'm observing it on around 10 servers.
>
> I've had that issue reported today. Can you get a coredump for this so I
> can see where lxcfs is segfaulting?
>
> Christian
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] unable to set security.protection.delete on a new LXD server

2019-07-02 Thread Stéphane Graber
Do you maybe have both the deb and snap installed on that system and
are in fact interacting with the deb rather than snap?

dpkg -l | grep lxd

And if you do have both, then run `lxd.migrate` to transition the data
over to the snap.

On Tue, Jul 2, 2019 at 9:52 PM Tomasz Chmielewski  wrote:
>
> Just installed lxd from snap on a Ubuntu 18.04 server and launched the
> first container:
>
> # snap list
Name              Version    Rev    Tracking  Publisher   Notes
amazon-ssm-agent  2.3.612.0  1335   stable/…  aws✓        classic
core              16-2.39.3  7270   stable    canonical✓  core
lxd               3.14       11098  stable    canonical✓  -
>
>
> # lxc launch ubuntu:18.04 terraform
> Creating terraform
> Starting terraform
>
>
> However, I'm not able to set security.protection.delete for containers
> created here:
>
> # lxc config set terraform security.protection.delete true
> Error: Invalid config: Unknown configuration key:
> security.protection.delete
>
>
> Also doesn't work when I try to set it via "lxc config edit".
>
> This works perfectly on other LXD servers, so I'm a bit puzzled why it
> won't work here?
>
>
> Tomasz Chmielewski
> https://lxadm.com
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] LXC 1.0 is now End of Life

2019-06-28 Thread Stéphane Graber
On Thu, Jun 27, 2019 at 12:17 AM Dmitry Melekhov  wrote:
>
> 26.06.2019 20:19, Stéphane Graber пишет:
>
> Hello,
>
> After over 5 years of support, it's finally time for LXC 1.0 to retire.
>
>
> Well, really 1 is not supported for long time, when I asked to solve problem- 
> I got the same answer-
>
> upgrade to 2 or 3 :-)

As mentioned in the post I linked to below, we do heavy backporting of
fixes to the current LTS (typically all fixes); previous LTS releases
only get security fixes and major bugfixes (think data loss or
features breaking over time), as we're having a hard enough time
backporting those hundreds of commits to just one LTS branch :)

>
>
>
> LXC 1.0 was released on the 20th of February 2014 and received numerous
> bugfix and security updates ever since.
>
> Remaining users are urged to upgrade to 2.0 LTS or 3.0 LTS.
>
> More details at: 
> https://discuss.linuxcontainers.org/t/lxc-1-0-end-of-life-announcement/5111
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] LXC 1.0 is now End of Life

2019-06-26 Thread Stéphane Graber
Hello,

After over 5 years of support, it's finally time for LXC 1.0 to retire.

LXC 1.0 was released on the 20th of February 2014 and received numerous
bugfix and security updates ever since.

Remaining users are urged to upgrade to 2.0 LTS or 3.0 LTS.

More details at: 
https://discuss.linuxcontainers.org/t/lxc-1-0-end-of-life-announcement/5111

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] LXD, LXC and LXCFS 3.0.4 about to be released (call for testing)

2019-06-20 Thread Stéphane Graber
Hello,

We'll be tagging 3.0.4 of all three projects later tomorrow.
So far all our branches are up to date; they should include all the
fixes everyone is waiting for, and they're passing our automated CI and
minimal manual testing.

If you have particular fixes you want to make sure make it into 3.0.4,
or simply have some spare time, the link below has details on where to
find the various branches and a test LXD snap using all 3 projects.

https://discuss.linuxcontainers.org/t/upcoming-3-0-4-releae-of-lxd-lxc-and-lxcfs/5071

Thanks!

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] limits.memory - possible to set per group of containers?

2019-06-17 Thread Stéphane Graber
On Tue, Jun 18, 2019 at 09:47:19AM +0900, Tomasz Chmielewski wrote:
> Let's say I have a host with 32 GB RAM.
> 
> To make sure the host is not affected by any weird memory consumption
> patterns, I've set the following in the container:
> 
>   limits.memory: 29GB
> 
> This works quite well - where previously, several processes with high memory
> usage, forking rapidly (a forkbomb to test, but also i.e. a supervisor in
> normal usage) running in the container could make the host very slow or even
> unreachable - with the above setting, everything (on the host) is just
> smooth no matter what the container does.
> 
> However, that's just with one container.
> 
> With two (or more) containers having "limits.memory: 29GB" set - it's easy
> for each of them to consume i.e. 20 GB, leading to host unavailability.
> 
> Is it possible to set a global, or per-container group "limits.memory:
> 29GB"?
> 
> For example, if I add "MemoryMax=29G" to
> /etc/systemd/system/snap.lxd.daemon.service - would I achieve a desired
> effect?
> 
> 
> 
> Tomasz Chmielewski
> https://lxadm.com

So we have plans to introduce project quotas which will allow placing
such restrictions in a clean way through LXD.

Until then you can manually tweak /sys/fs/cgroup/memory/lxc or
/sys/fs/cgroup/memory/lxc.payload (depending on version of liblxc) as
all containers reside under there and limits are hierarchical.
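
A rough sketch of that manual tweak on the host (cgroup v1 paths; which of
the two directories exists depends on your liblxc version, so check first):

  # cap the combined memory usage of all containers (29GiB in this example)
  echo $((29 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/lxc/memory.limit_in_bytes
  # or, with newer liblxc:
  echo $((29 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/lxc.payload/memory.limit_in_bytes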

It's pretty similar to what systemd would attempt to do except that
liblxc/lxd bypass systemd's expected cgroup so placing the limit through
systemd wouldn't work.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] not allowed to change kernel parameters inside container

2019-05-25 Thread Stéphane Graber
There's really nothing you can do about the missing ones, though
normally that shouldn't cause sysctl application to fail, as it's
somewhat common for systems to have a different set of sysctls.

In this case, it's because the network namespace is filtering some of them.

If your container doesn't need isolated networking, in theory using the
host namespace for the network would cause those to show back up, but
note that sharing network namespace with the host may have some very
weird side effects (such as systemd in the container interacting with
the host's).
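
For a plain LXC container, a minimal sketch of that host-network setup in
the container config (type "none" shares the host's network namespace;
the caveats above apply):

  lxc.net.0.type = none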

On Sat, May 25, 2019 at 09:36:25PM -0400, Saint Michael wrote:
> some things do not work inside the container
>  sysctl -p
> fs.aio-max-nr = 1048576
> fs.aio-max-nr = 655360
> fs.inotify.max_user_instances = 8192
> kernel.pty.max = 16120
> kernel.randomize_va_space = 1
> kernel.shmall = 4294967296
> kernel.shmmax = 990896795648
> net.ipv4.conf.all.arp_announce = 2
> net.ipv4.conf.all.arp_filter = 1
> net.ipv4.conf.all.arp_ignore = 1
> net.ipv4.conf.all.rp_filter = 1
> net.ipv4.conf.default.accept_source_route = 0
> net.ipv4.conf.default.arp_filter = 1
> net.ipv4.conf.default.rp_filter = 1
> net.ipv4.ip_forward = 1
> net.ipv4.ip_local_port_range = 5000 65535
> net.ipv4.ip_nonlocal_bind = 0
> net.ipv4.ip_no_pmtu_disc = 0
> net.ipv4.tcp_tw_reuse = 1
> vm.hugepages_treat_as_movable = 0
> vm.hugetlb_shm_group = 128
> vm.nr_hugepages = 250
> vm.nr_hugepages_mempolicy = 250
> vm.overcommit_memory = 0
> vm.swappiness = 0
> vm.vfs_cache_pressure = 150
> vm.dirty_ratio = 10
> vm.dirty_background_ratio = 5
> kernel.hung_task_timeout_secs = 0
> sysctl: cannot stat /proc/sys/net/core/rmem_max: No such file or directory
> sysctl: cannot stat /proc/sys/net/core/wmem_max: No such file or directory
> sysctl: cannot stat /proc/sys/net/core/rmem_default: No such file or
> directory
> sysctl: cannot stat /proc/sys/net/core/wmem_default: No such file or
> directory
> net.ipv4.tcp_rmem = 10240 87380 10485760
> net.ipv4.tcp_wmem = 10240 87380 10485760
> sysctl: cannot stat /proc/sys/net/ipv4/udp_rmem_min: No such file or
> directory
> sysctl: cannot stat /proc/sys/net/ipv4/udp_wmem_min: No such file or
> directory
> sysctl: cannot stat /proc/sys/net/ipv4/udp_mem: No such file or directory
> sysctl: cannot stat /proc/sys/net/ipv4/tcp_mem: No such file or directory
> sysctl: cannot stat /proc/sys/net/core/optmem_max: No such file or directory
> net.core.somaxconn = 65535
> sysctl: cannot stat /proc/sys/net/core/netdev_max_backlog: No such file or
> directory
> fs.file-max = 500000
> 
> 
> On Sat, May 25, 2019 at 9:28 PM Saint Michael  wrote:
> 
> > Thanks
> > Finally some help!
> >
> > On Sat, May 25, 2019 at 9:07 PM Stéphane Graber 
> > wrote:
> >
> >> On Sat, May 25, 2019 at 02:02:59PM -0400, Saint Michael wrote:
> >> > Thanks to all. I am sorry I touched a heated point. For me using
> >> > hard-virtualization for Linux apps is dementia. It should be kept only
> >> for
> >> > Windows VMs.
> >> > For me, the single point of using LXC is to be able to redeploy a
> >> complex
> >> > app from host to host in a few minutes. I use one-host->one-Container.
> >> So
> >> > what is the issue of giving all power to the containers?
> >> >
> >> > On Sat, May 25, 2019 at 1:56 PM jjs - mainphrame 
> >> wrote:
> >> >
> >> > > Given the developers stance, perhaps a temporary workaround is in
> >> order,
> >> > > e.g. ssh-key root login to physical host e.g. "ssh  sysctl
> >> > > key=value..."
> >> > >
> >> > > Jake
> >> > >
> >> > > On Mon, May 20, 2019 at 9:25 AM Saint Michael 
> >> wrote:
> >> > >
> >> > >> I am trying to use sysctl -p inside an LXC container and it says
> >> > >> read only file system
> >> > >> how do I give my container all possible rights?
> >> > >> Right now I have
> >> > >>
> >> > >> lxc.mount.auto = cgroup:mixed
> >> > >> lxc.tty.max = 10
> >> > >> lxc.pty.max = 1024
> >> > >> lxc.cgroup.devices.allow = c 1:3 rwm
> >> > >> lxc.cgroup.devices.allow = c 1:5 rwm
> >> > >> lxc.cgroup.devices.allow = c 5:1 rwm
> >> > >> lxc.cgroup.devices.allow = c 5:0 rwm
> >> > >> lxc.cgroup.devices.allow = c 4:0 rwm
> >> > >> lxc.cgroup.devices.allow = c 4:1 rwm
> >> > >> lxc.cgroup.devices.allow = c 1:9 rwm

Re: [lxc-users] not allowed to change kernel parameters inside container

2019-05-25 Thread Stéphane Graber
On Sat, May 25, 2019 at 02:02:59PM -0400, Saint Michael wrote:
> Thanks to all. I am sorry I touched a heated point. For me using
> hard-virtualization for Linux apps is dementia. It should be kept only for
> Windows VMs.
> For me, the single point of using LXC is to be able to redeploy a complex
> app from host to host in a few minutes. I use one-host->one-Container. So
> what is the issue of giving all power to the containers?
> 
> On Sat, May 25, 2019 at 1:56 PM jjs - mainphrame  wrote:
> 
> > Given the developers stance, perhaps a temporary workaround is in order,
> > e.g. ssh-key root login to physical host e.g. "ssh  sysctl
> > key=value..."
> >
> > Jake
> >
> > On Mon, May 20, 2019 at 9:25 AM Saint Michael  wrote:
> >
> >> I am trying to use sysctl -p inside an LXC container and it says
> >> read only file system
> >> how do I give my container all possible rights?
> >> Right now I have
> >>
> >> lxc.mount.auto = cgroup:mixed
> >> lxc.tty.max = 10
> >> lxc.pty.max = 1024
> >> lxc.cgroup.devices.allow = c 1:3 rwm
> >> lxc.cgroup.devices.allow = c 1:5 rwm
> >> lxc.cgroup.devices.allow = c 5:1 rwm
> >> lxc.cgroup.devices.allow = c 5:0 rwm
> >> lxc.cgroup.devices.allow = c 4:0 rwm
> >> lxc.cgroup.devices.allow = c 4:1 rwm
> >> lxc.cgroup.devices.allow = c 1:9 rwm
> >> lxc.cgroup.devices.allow = c 1:8 rwm
> >> lxc.cgroup.devices.allow = c 136:* rwm
> >> lxc.cgroup.devices.allow = c 5:2 rwm
> >> lxc.cgroup.devices.allow = c 254:0 rwm
> >> lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
> >> lxc.cgroup.devices.allow = b 7:* rwm# loop*
> >> lxc.cgroup.devices.allow = c 10:229 rwm #fuse
> >> lxc.cgroup.devices.allow = c 10:200 rwm #docker
> >> #lxc.cgroup.memory.limit_in_bytes = 92536870910
> >> lxc.apparmor.profile= unconfined
> >> lxc.cgroup.devices.allow= a
> >> lxc.cap.drop=
> >> lxc.cgroup.devices.deny=
> >> #lxc.mount.auto= proc:rw sys:ro cgroup:ro
> >> lxc.autodev= 1

Set:

lxc.mount.auto=
lxc.mount.auto=proc:rw sys:rw cgroup:rw
lxc.apparmor.profile=unconfined


This for a privileged container should allow all writes through /proc and /sys.
As some pointed out, not usually a good idea for a container, but given
it's the only thing on your system, that may be fine.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] LXD Cluster & Ceph storage: "Config key ... may not be used as node-specific key"

2019-05-15 Thread Stéphane Graber
On Wed, May 15, 2019 at 03:00:34PM -0700, Robert Johnson wrote:
> I seem to be stuck in a catch-22 with adding a ceph storage pool to an
> existing LXD cluster.
> 
> When attempting to add a ceph storage pool, I am prompted to specify the
> target node, but, when doing so, the config keys are not allowed. Once a
> ceph pool is created, it's not possible to add config keys. Is there
> something that I'm missing to the process of adding a ceph pool to a LXD
> cluster?
> 
> The documentation and examples that I have found all assume a stand-alone
> LXD instance.
> 
> 
> Example commands that I am trying to accomplish this:
> 
> rob@stack1b:~$ lxd --version
> 3.13
> 
> rob@stack1b:~$ lxc cluster list
> +---------+-----------------+----------+--------+-------------------+
> |  NAME   |       URL       | DATABASE | STATE  |      MESSAGE      |
> +---------+-----------------+----------+--------+-------------------+
> | stack1a | https://[]:8443 | YES      | ONLINE | fully operational |
> +---------+-----------------+----------+--------+-------------------+
> | stack1b | https://[]:8443 | YES      | ONLINE | fully operational |
> +---------+-----------------+----------+--------+-------------------+
> | stack1c | https://[]:8443 | YES      | ONLINE | fully operational |
> +---------+-----------------+----------+--------+-------------------+
> 
> rob@stack1b:~$ lxc storage list
> +-------+-------------+--------+---------+---------+
> | NAME  | DESCRIPTION | DRIVER |  STATE  | USED BY |
> +-------+-------------+--------+---------+---------+
> | local |             | zfs    | CREATED | 10      |
> +-------+-------------+--------+---------+---------+
> 
> rob@stack1b:~$ lxc storage create lxd-slow ceph ceph.osd.pool_name=lxd-slow
> ceph.user.name=user
> Error: Pool not pending on any node (use --target  first)
> 
> rob@stack1b:~$ lxc storage create --target stack1b lxd-slow ceph
> ceph.osd.pool_name=lxd-slow ceph.user.name=user
> Error: Config key 'ceph.osd.pool_name' may not be used as node-specific key
> 
> rob@stack1b:~$ lxc storage create --target stack1b lxd-slow ceph
> ceph.user.name=user
> Error: Config key 'ceph.user.name' may not be used as node-specific key
> 
> rob@stack1b:~$ lxc storage create --target stack1b lxd-slow ceph
> Storage pool lxd-slow pending on member stack1b
> 
> rob@stack1b:~$ lxc storage list
> +----------+-------------+--------+---------+---------+
> |   NAME   | DESCRIPTION | DRIVER |  STATE  | USED BY |
> +----------+-------------+--------+---------+---------+
> | local    |             | zfs    | CREATED | 10      |
> +----------+-------------+--------+---------+---------+
> | lxd-slow |             | ceph   | PENDING | 0       |
> +----------+-------------+--------+---------+---------+
> 
> rob@stack1b:~$ lxc storage set lxd-slow ceph.osd.pool_name lxd-slow
> Error: failed to notify peer []:8443: The
> [ceph.osd.pool_name] properties cannot be changed for "ceph" storage pools
> 
> rob@stack1b:~$ lxc storage set lxd-slow ceph.user.name user
> Error: failed to notify peer []:8443: The
> [ceph.user.name] properties cannot be changed for "ceph" storage pools


First mark the pool as pending on every node with --target (no driver
config keys at that stage), then finalize it cluster-wide with the ceph
keys:

lxc storage create lxd-slow ceph --target stack1a
lxc storage create lxd-slow ceph --target stack1b
lxc storage create lxd-slow ceph --target stack1c
lxc storage create lxd-slow ceph ceph.osd.pool_name=lxd-slow ceph.user.name=user

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Moving files to a guest fs from the host

2019-04-04 Thread Stéphane Graber
That error is pointing to a problem: the directory should be empty when
the dataset isn't mounted. If it has content, then that content is not
actually in the container's rootfs.

On Thu, Apr 04, 2019 at 05:30:16PM -0400, Brandon Whaley wrote:
> Thank you for taking a look.  I was able to rsync and confirm that
> everything worked as expected if I leave the instance running in
> privileged mode during the rsync.  I was wondering if you could
> elaborate on the zfs mount option.  When I try to mount it via `zfs
> mount default/containers/instance-0019` I get the following error:
> 
> root@atl-comp1:~# zfs mount default/containers/instance-0019
> cannot mount 
> '/var/lib/lxd/storage-pools/default/containers/instance-0019':
> directory is not empty
> 
> I would of course prefer to not have the guest running during this xfer.
> 
> On Thu, Apr 4, 2019 at 12:50 PM Stéphane Graber  wrote:
> >
> > LXD only mounts the ZFS datasets when the container is started, so you 
> > should:
> >  - Set security.privileged to true
> >  - Start the container (or alternatively manually "zfs mount" it)
> >  - Rsync
> >  - Stop or unmount the container
> >  - Unset security.privileged
> >  - Start it
> >
> > On Thu, Apr 04, 2019 at 10:42:38AM -0400, Brandon Whaley wrote:
> > > I'm in the middle of migrating some users from VZ to LXC/LXD with ZFS
> > > backed guest fs.  I'm using rsync with --numeric-ids to copy the files
> > > with the correct uid/gid to the container's private area.  It was
> > > suggested to me that I could get the uid/gid remapping done by making
> > > the destination container privileged and starting/stopping it before
> > > the xfer, which does appear to work from the host side.  Unfortunately
> > > after setting the container back to unprivileged mode and starting it,
> > > the new files are gone and the fs is back to its pre-rsync state.
> > > Setting the container to privileged mode again shows that the files
> > > are still there, just being hidden by some overlay.
> > >
> > > I'm wondering if there is a mechanism to mount a non-uid/gid remapped
> > > guest fs that will not end up being overridden when the remapping is
> > > done.
> > >
> > > root@atl-comp1:~# lxc config set instance-0019 security.privileged 
> > > false
> > > root@atl-comp1:~# lxc start instance-0019
> > > root@atl-comp1:~# ls -hal
> > > /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release
> > > ls: cannot access
> > > '/var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release':
> > > No such file or directory
> > > root@atl-comp1:~# lxc stop instance-0019
> > > root@atl-comp1:~# lxc config set instance-0019 security.privileged 
> > > true
> > > root@atl-comp1:~# lxc start instance-0019
> > > root@atl-comp1:~# ls -hal
> > > /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release
> > > ls: cannot access
> > > '/var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release':
> > > No such file or directory
> > > root@atl-comp1:~# lxc stop instance-0019
> > > root@atl-comp1:~# ls -hal
> > > /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release
> > > lrwxrwxrwx 1 root root 14 Apr  3 12:53
> > > /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release
> > > -> centos-release
> > > ___
> > > lxc-users mailing list
> > > lxc-users@lists.linuxcontainers.org
> > > http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Moving files to a guest fs from the host

2019-04-04 Thread Stéphane Graber
LXD only mounts the ZFS datasets when the container is started, so you should:
 - Set security.privileged to true
 - Start the container (or alternatively manually "zfs mount" it)
 - Rsync
 - Stop or unmount the container
 - Unset security.privileged
 - Start it
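
A minimal command sketch of those steps, using the container name and pool
from this thread (the rsync source path is hypothetical):

  lxc config set instance-0019 security.privileged true
  lxc start instance-0019   # mounts the dataset with host-side uid/gid
  rsync -aAX --numeric-ids /path/to/vz/private/ \
      /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/
  lxc stop instance-0019
  lxc config unset instance-0019 security.privileged
  lxc start instance-0019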

On Thu, Apr 04, 2019 at 10:42:38AM -0400, Brandon Whaley wrote:
> I'm in the middle of migrating some users from VZ to LXC/LXD with ZFS
> backed guest fs.  I'm using rsync with --numeric-ids to copy the files
> with the correct uid/gid to the container's private area.  It was
> suggested to me that I could get the uid/gid remapping done by making
> the destination container privileged and starting/stopping it before
> the xfer, which does appear to work from the host side.  Unfortunately
> after setting the container back to unprivileged mode and starting it,
> the new files are gone and the fs is back to its pre-rsync state.
> Setting the container to privileged mode again shows that the files
> are still there, just being hidden by some overlay.
> 
> I'm wondering if there is a mechanism to mount a non-uid/gid remapped
> guest fs that will not end up being overridden when the remapping is
> done.
> 
> root@atl-comp1:~# lxc config set instance-0019 security.privileged false
> root@atl-comp1:~# lxc start instance-0019
> root@atl-comp1:~# ls -hal
> /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release
> ls: cannot access
> '/var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release':
> No such file or directory
> root@atl-comp1:~# lxc stop instance-0019
> root@atl-comp1:~# lxc config set instance-0019 security.privileged true
> root@atl-comp1:~# lxc start instance-0019
> root@atl-comp1:~# ls -hal
> /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release
> ls: cannot access
> '/var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release':
> No such file or directory
> root@atl-comp1:~# lxc stop instance-0019
> root@atl-comp1:~# ls -hal
> /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release
> lrwxrwxrwx 1 root root 14 Apr  3 12:53
> /var/lib/lxd/storage-pools/default/containers/instance-0019/rootfs/etc/redhat-release
> -> centos-release
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Fwd: ciab errors in update/upgrade of nested container - these are the packages

2019-03-15 Thread Stéphane Graber
 to reload daemon: Access denied*
> dpkg: error processing package udev (--configure):
>  installed udev package post-installation script subprocess was interrupted
> Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
> Processing triggers for dbus (1.12.2-1ubuntu1) ...
> *Failed to open connection to "system" message bus: Failed to query
> AppArmor policy: Permission denied*
> Setting up libxcb1:amd64 (1.13-2~ubuntu18.04) ...
> Setting up libpam-systemd:amd64 (237-3ubuntu10.15) ...
> Setting up python3-apport (2.20.9-0ubuntu7.6) ...
> dpkg: error processing package apport (--configure):
>  package is in a very bad inconsistent state; you should
>  reinstall it before attempting configuration
> Processing triggers for libc-bin (2.27-3ubuntu1) ...
> *Errors were encountered while processing:*
> * udev*
> * apport*
> 
> *I went back and tried to reinstall apport...*
> 
> # apt install --reinstall apport
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> The following package was automatically installed and is no longer required:
>   libfreetype6
> Use 'apt autoremove' to remove it.
> Suggested packages:
>   apport-gtk | apport-kde
> The following packages will be upgraded:
>   apport
> 1 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
> 2 not fully installed or removed.
> Need to get 0 B/124 kB of archives.
> After this operation, 0 B of additional disk space will be used.
> (Reading database ... 28595 files and directories currently installed.)
> Preparing to unpack .../apport_2.20.9-0ubuntu7.6_all.deb ...
> *Failed to retrieve unit state: Access denied*
> *invoke-rc.d: could not determine current runlevel*
> *Failed to reload daemon: Access denied*
> 
> ==
> 
> Does anyone have any idea what might be causing this?
> Again this is happening on AWS and on a local KVM Ubuntu VM.

Sounds like AppArmor messing with things in this case.
Does enabling nesting for your nested container help somehow (the
generated rules will change a bit as a result of that)?

I'm pretty sure that if you look at `dmesg` you'll see some denials
related to those package updates. I suspect the main difference between
the two containers, other than the nested flag is that the parent
container has its own apparmor namespace whereas the child has to run
under a single apparmor profile as apparmor namespaces do not currently
nest.
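
A sketch of the nesting toggle mentioned above, run from the LXD that owns
the container (the container name is illustrative):

  lxc config set c1 security.nesting true
  lxc restart c1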

> 
> Thanks for any ideas or suggestions.
> 
> Brian

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] [LXD] how to set container description (in a script)?

2019-03-12 Thread Stéphane Graber
On Tue, Mar 12, 2019 at 08:34:02PM +0900, Tomasz Chmielewski wrote:
> I would like to add some own "metadata" for a container.
> 
> I thought I'd use "description" for that, since it's present in "lxc config
> show ":
> 
> This however doesn't work:
> 
> $ lxc config show testcontainer | grep ^description
> description: ""
> 
> $ lxc config set testcontainer description "some description"
> Error: Invalid config: Unknown configuration key: description
> 
> 
> What would be the best way to set the description for a container?

Unfortunately the CLI doesn't have a great way to non-interactively
modify the description. Your best bet is to use `lxc config show` piped
through `sed` and then piped back into `lxc config edit`, something
like:

lxc config show testcontainer | sed "s/description:.*/description: \"blah\"/" |
    lxc config edit testcontainer

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] future of lxc/lxd? snap?

2019-02-25 Thread Stéphane Graber
snapd + LXD work fine on CentOS 7, it's even in our CI environment, so
presumably the same steps should work on RHEL 7.

On Mon, Feb 25, 2019 at 10:54 AM Fajar A. Nugraha  wrote:
>
> On Mon, Feb 25, 2019 at 3:15 PM Harald Dunkel  wrote:
>>
>> On 2/25/19 4:52 AM, Fajar A. Nugraha wrote:
>> >
>> > snapcraft.io  is also owned by Canonical.
>> >
>> > By using lxd snap, they can easly have lxd running on any distro that 
>> > already support snaps, without having to maintain separate packages.
>> >
>>
>> The problem is that there is no standard for all "major" distros,
>> as this discussion shows:
>>
>> https://www.reddit.com/r/redhat/comments/9lbm0c/snapd_for_rhel/
>>
>
> You mean "RHEL doesn't have snapd"? You'd have to ask redhat then.
>
>>
>> Debian already has an excellent packaging scheme.
>
>
> Sure.
>
> The question now is "is anybody willing to maintain debian lxd packages"
>
>>
>> The RPM world
>> doesn't follow snapd, as it seems.
>
>
> Really?
> https://docs.snapcraft.io/installing-snap-on-fedora/6755
>
>>
>> And if you prefer your favorite
>> tool inside a container you can find docker images everywhere.
>>
>> A few years ago compatibility was achieved on source code level.
>> Sorry to say, but you lost that for lxd. And snaps are not a
>> replacement.
>>
>
> In the past I've built private RPMs for lxd on centos. It became a hassle 
> though as (for example) I need to port additional packages as well. And I 
> needed to change the kernel to a newer one, unsupported by centos. But it 
> works.
>
> So if you're willing to build from source, it should still work.
>
> --
> Fajar
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] why does ssh + lxc hang? (used to work)

2019-02-25 Thread Stéphane Graber
This is https://github.com/lxc/lxd/issues/5519

On Sun, Feb 24, 2019 at 1:34 PM Tomasz Chmielewski  wrote:
>
> This works (executed on a host):
>
> host# lxc exec container -- date
> Sun Feb 24 12:25:21 UTC 2019
> host#
>
> This however hangs and doesn't return (executed from a remote system,
> i.e. your laptop or a different server):
>
> laptop$ ssh root@host "export PATH=$PATH:/snap/bin ; lxc exec container
> -- date"
> Sun Feb 24 12:28:04 UTC 2019
> (...command does not return...)
>
> Or a direct path to lxc binary - also hangs:
>
> laptop$ ssh root@host "/snap/bin/lxc exec container -- date"
> Sun Feb 24 12:29:54 UTC 2019
> (...command does not return...)
>
>
> Of course a simple "date" execution via ssh on the host does not hang:
>
> laptop$ ssh root@host date
> Sun Feb 24 12:31:33 UTC 2019
> laptop$
>
>
> Why do commands executed via ssh and lxc hang? It used to work some 1-2
> months ago, not sure with which lxd version it regressed like this.
>
>
> Tomasz Chmielewski
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Howto ignore metadata.yaml template definitions ?

2019-02-03 Thread Stéphane Graber
On Sat, Feb 02, 2019 at 02:18:05PM +0100, Oliver Dzombic wrote:
> Hi,
> 
> unfortunately the behaviour of applying specific template definitions to
> containers on certain actions ruins the stability of our containers.
> 
> We do not want our network configuration or other files being edited (and
> ruined) when we copy containers.
> 
> We also do not want the network configuration files to be (re)written
> on the first start (alias create) of the containers.
> 
> Is there any way to use the templates from
> 
> images.linuxcontainers.org
> 
> And ignore/remove/clean the metadata.yaml file ?
> 
> Or can we tell lxc init, when we init a container, to ignore that ?
> 
> Thank you !

The API lets you modify the templates of a container which was created
from the image, so that any copies of that container will then be fine.

There's no way to directly alter the image in the same way as images are
stored read-only and are generally expected to match their source file
(so they match their fingerprint).
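
A rough sketch of the container-level route with the CLI (container and
template file names are illustrative):

  lxc config template list c1                # templates carried by the container
  lxc config metadata edit c1                # adjust or drop the create/copy triggers
  lxc config template delete c1 network.tpl  # remove an unwanted template file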

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Current images:centos/7 broken

2019-02-03 Thread Stéphane Graber
, in updates

  Plugin Options:
[root@centos7 ~]#
---

It looks like you're directly chrooting to the rootfs downloaded by LXD,
this isn't supported and, in this case, is likely to fail due to the
recently introduced requirement on /dev/urandom for yum, which you
wouldn't have in your chroot unless you take care of setting up /dev,
/proc and /sys properly.

The command you ran would also have downloaded the container and shifted
it for unprivileged use; running stuff as real root through chroot will
mess up permissions.
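
The supported way to run yum against that image is through LXD itself,
e.g.:

  lxc init images:centos/7 centos7
  lxc start centos7
  lxc exec centos7 -- yum -y update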

On Sun, Feb 03, 2019 at 04:10:41PM +0100, Oliver Dzombic wrote:
> Hi,
> 
> the current centos/7 from images.linuxcontainers.org seems broken:
> 
> #lxc init images:centos/7 centos7
> 
> #chroot rootfs /bin/bash
> 
> # yum
> error: Failed to initialize NSS library
> There was a problem importing one of the Python modules
> required to run yum. The error leading to this problem was:
> 
>cannot import name ts
> 
> Please install a package which provides this module, or
> verify that the module is installed correctly.
> 
> It's possible that the above module doesn't match the
> current version of Python, which is:
> 2.7.5 (default, Oct 30 2018, 23:45:53)
> [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
> 
> If you cannot solve this problem yourself, please go to
> the yum faq at:
>   http://yum.baseurl.org/wiki/Faq
> 
> 
> -
> 
> who is by the way responsible for the builds ?
> 
> Thank you !
> 
> -- 
> Mit freundlichen Gruessen / Best regards
> 
> Oliver Dzombic
> Layer7 Networks
> 
> mailto:i...@layer7.net
> 
> Anschrift:
> 
> Layer7 Networks GmbH
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
> 
> HRB 96293 beim Amtsgericht Hanau
> Geschäftsführung: Oliver Dzombic
> _______
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] kernel messages appear in containers?

2019-02-03 Thread Stéphane Graber
Hi,

Yes, this is normal and as pointed out already, can be tweaked with
dmesg_restrict.
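
A sketch of that host-side tweak (kernel.dmesg_restrict is a standard
sysctl; it blocks dmesg access for unprivileged callers, which includes
unprivileged containers):

  echo 'kernel.dmesg_restrict = 1' > /etc/sysctl.d/60-dmesg-restrict.conf
  sysctl -p /etc/sysctl.d/60-dmesg-restrict.conf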

There was some interest a while back in implementing a logging
namespace, which would solve this cleanly, but it's never been enough
of a priority for anyone to actually do the kernel work for it.

Stéphane

On Sun, Feb 3, 2019 at 3:52 AM Richard Hector  wrote:
>
> Hi all,
>
> I've noticed that some log messages that really belong to the host (like
> those from monthly RAID checks, for example) can appear in arbitrary
> containers instead - so they're spread all over the place.
>
> Is that normal/fixable?
>
> Cheers,
> Richard
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] zfs @copy snapshots

2019-01-24 Thread Stéphane Graber
Hi,

If ZFS lets you, then yes. Normally those snapshots are there because
you've created a container as a copy of this one; due to how ZFS
datasets work, that snapshot has to remain until the container
which was created from it is deleted.
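
A quick way to check whether a given @copy snapshot still backs a clone
(dataset and snapshot names are illustrative):

  zfs list -t snapshot -o name,clones -r default/containers/c1
  # a snapshot whose "clones" column is empty can be destroyed:
  zfs destroy default/containers/c1@copy-aabbcc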

Stéphane

On Wed, Jan 23, 2019 at 11:04 PM Kees Bos  wrote:
>
> Hi,
>
>
> I see multiple @copy snaphots on some containers (zfs)
>
> From https://github.com/lxc/lxd/issues/5104 it is not clear to me why
> there are multiple on a container.
>
> Can I safely remove these snapshots (if zfs lets me)?
>
>
> Kees
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


[lxc-users] Announcing LXC, LXD and LXCFS 3.0.3 bugfix releases

2018-11-22 Thread Stéphane Graber
The LXC/LXD/LXCFS team is happy to announce the third round of bugfix
releases for the 3.0 LTS branch of LXC, LXD and LXCFS.

This includes over two months of accumulated bugfixes and minor improvements.

The announcements for the 3 projects can be found here:

 - LXD 3.0.3: 
https://discuss.linuxcontainers.org/t/lxd-3-0-3-has-been-released/3359
 - LXC 3.0.3: 
https://discuss.linuxcontainers.org/t/lxc-3-0-3-has-been-released/3358
 - LXCFS 3.0.3: 
https://discuss.linuxcontainers.org/t/lxcfs-3-0-3-has-been-released/3355

LTS branches of those projects come with a 5 years support commitment
from upstream for security and bugfixes. The 3.0 branch is the current
LTS and is supported until June 2023.


We'd like to thank all of our contributors and our amazing community for
their contributions, bug reports and help testing those releases!

On behalf of the LXC, LXD and LXCFS teams,

Stéphane Graber


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] What happened to LXC in Bionic?

2018-11-19 Thread Stéphane Graber
On Mon, Nov 19, 2018 at 05:13:05PM -0600, Bryan Christ wrote:
> All of my current systems have been upgraded from 16.04 to 18.04.  For the
> first time today I did a fresh install of 18.04 and realized that just
> about every LXC tool that I've become accustomed to and love is gone.  I
> guess I'm just now noticing it because the lxc tools that I've always used
> got carried over from the 16.04 install... and apparently you can't install
> those anymore.
> 
> What happened?  I just want to download containers from the templates
> library:
> 
> lxc-create -t download -n mycontainer
> 
> and configure them to work with a bridge adapter that I already have
> configured.  Can anyone point me to a primer on do this the "new way"?

apt install lxc-utils


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] /proc lost in some containers

2018-11-08 Thread Stéphane Graber
On Fri, Nov 09, 2018 at 04:29:46AM +0900, Tomasz Chmielewski wrote:
> LXD 3.6 from a snap on an up-to-date Ubuntu 18.04 server:
> 
> lxd   3.69510  stablecanonical✓  -
> 
> 
> Suddenly, some (but not all) containers lost their /proc filesystem:
> 
> # ps auxf
> Error: /proc must be mounted
>   To mount /proc at boot you need an /etc/fstab line like:
>   proc   /proc   procdefaults
>   In the meantime, run "mount proc /proc -t proc"
> 
> #
> 
> 
> I think I've seen something similar like this in the past.
> Can it be attributed to some not-so-well automatic snap upgrades?

That can either be a lxcfs crash or lxcfs bug of some kind.

Can you show "ps fauxww | grep lxcfs" on the host.

And then inside an affected container run "grep lxcfs /proc/mounts" and,
for each of the /proc paths listed, attempt to read them with "cat";
that should let you check whether it's just one lxcfs file that's broken or
all of them.
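
A quick loop to do that inside the container (a sketch; prints one line per
lxcfs-backed path):

  for f in $(grep lxcfs /proc/mounts | awk '{print $2}'); do
      printf '%s: ' "$f"
      cat "$f" > /dev/null 2>&1 && echo ok || echo broken
  done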

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Ubuntu 16/18.04 LTS switch from deb to snap for LXD

2018-09-30 Thread Stéphane Graber
On Sun, Sep 30, 2018 at 11:36:08PM +0200, MonkZ wrote:
> Hi,
> 
> i like to switch from deb to snap distribution for LXD under Ubuntu
> 16.04 and! 18.04. Are there migration howtos for LXD snap migration?
> 
> Regards

In general, you'd do:
 - snap install lxd

Or if you want to stick to 3.0:
 - snap install lxd --channel=3.0/stable

Then after the install is done, move your data across using:
 - lxd.migrate

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Running snapd within LXC/LXD on a Debian host?

2018-09-28 Thread Stéphane Graber
No need for nesting or privileged, snapd works fine in a fully secure
unprivileged container, so long as the kernel has support for
unprivileged fuse.

Make sure that:
 - Your distro kernel has unprivileged fuse enabled, I believe this
   would require a 4.18 kernel and may require some specific build options
   (unsure about that part).
 - You have the "fuse" package installed in the container, this has
   sometimes been a problem.
 - That /lib/modules exists in the container, if not, create it with
   mkdir, snapd is a bit picky about that sometimes.
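
A rough sketch of the in-container part on an apt-based guest (the host
kernel requirement above still applies):

  apt install fuse squashfuse
  mkdir -p /lib/modules
  systemctl restart snapd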

On Fri, Sep 28, 2018 at 01:48:19PM +, bob-li...@vulpin.com wrote:
> From what I vaguely remember from the last time I tried, you might need to 
> either disable AppArmor (on the parent container?) or make it privileged. Or 
> possibly both.
> 
> Of course, this does mean you lose some of the security/isolation of 
> containerisation.
> 
> Bob
> 
> -Original Message-
> From: lxc-users  On Behalf Of 
> Linus Lüssing
> Sent: Saturday, 15 September 2018 5:02 AM
> To: lxc-users@lists.linuxcontainers.org; d...@ybit.eu
> Subject: [lxc-users] Running snapd within LXC/LXD on a Debian host?
> 
> Hi,
> 
> I found the following, excellent article online:
> 
> https://blog.ubuntu.com/2016/02/16/running-snaps-in-lxd-containers
> 
> And I'm currently trying to achieve the same on an LXD host running Debian 
> Stretch and a Container running Ubuntu 18.04.
> 
> The error I'm now getting within the container is the following though:
> 
> -
> $ journalctl -xe
> [...]
> -- Subject: Unit snapd.service has begun start-up
> -- Defined-By: systemd
> -- Support: http://www.ubuntu.com/support
> --
> -- Unit snapd.service has begun starting up.
> Sep 14 17:42:09 rocketchat2 snapd[195]: AppArmor status: apparmor is enabled 
> but some features are missing: dbus, network Sep 14 17:42:09 rocketchat2 
> snapd[195]: error: cannot start snapd: cannot mount squashfs image using 
> "fuse.squashfuse": mount: /tmp/selftest-mountpoint-412081678: wrong fs type, 
> bad option, bad superblock on /tmp/selftest-squashfs-971713707, missing 
> codepage or helper program, or other error.
> Sep 14 17:42:09 rocketchat2 systemd[1]: snapd.service: Main process exited, 
> code=exited, status=1/FAILURE Sep 14 17:42:09 rocketchat2 systemd[1]: 
> snapd.service: Failed with result 'exit-code'.
> Sep 14 17:42:09 rocketchat2 systemd[1]: Failed to start Snappy daemon.
> -- Subject: Unit snapd.service has failed
> -- Defined-By: systemd
> -- Support: http://www.ubuntu.com/support
> --
> -- Unit snapd.service has failed.
> -
> 
> And I'm also getting some "DENIED" messages from apparmor in dmesg. See 
> attachment.
> 
> I tried both a 4.17 kernel provided by Debian Stretch-Backports and a 4.18 
> kernel from Debian Testing. The kernel cmdline looks like this for 4.18 for 
> instance:
> 
> -
> $ uname -a
> Linux yServer 4.18.0-1-amd64 #1 SMP Debian 4.18.6-1 (2018-09-06) x86_64 
> GNU/Linux $ cat /proc/cmdline
> BOOT_IMAGE=/boot/vmlinuz-4.18.0-1-amd64 
> root=UUID=f59f51b8-93ba-45e7-b0d7-c7013c52c11c ro quiet apparmor=1 
> security=apparmor
> -
> 
> The squashfuse package is installed successfully within the container:
> 
> -
> $ dpkg -l | grep squashfuse
> ii  squashfuse  0.1.100-0ubuntu2  amd64   
>  FUSE filesystem to mount squashfs archives
> -
> 
> 
> Are the kernels provided by Debian supposed to work for snapd within LXD? Or 
> are there some non-upstream patches added to the Ubuntu kernel which are 
> necessary to make things work as described in the blog post?
> 
> Regards,
> Linus
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to set default volume size using the "volume.size" property on the pool

2018-09-26 Thread Stéphane Graber
You set the size property on the root device of the container, then
restart the container; that should cause LXD to resize it on startup.
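
A minimal sketch, assuming a container named "c1" on the "local" pool from
this thread; if the root device is still inherited from the profile, give
the container its own first:

  lxc config device add c1 root disk path=/ pool=local size=50GB
  # or, if the container already has a local root device:
  lxc config device set c1 root size 50GB
  lxc restart c1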

On Wed, Sep 26, 2018 at 09:20:02AM +0200, Kees Bakker wrote:
> Thanks, that works.
> 
> Next, how do I change the volume size of an existing container?
> 
> On 25-09-18 17:01, Stéphane Graber wrote:
> > No worries, we tend to prefer support requests to go here or to
> > https://discuss.linuxcontainers.org
> >
> > The command should have actually been:
> >
> > lxc profile device set default root size 50GB
> >
> > On Tue, Sep 25, 2018 at 04:11:05PM +0200, Kees Bakker wrote:
> >> This is a follow up of https://github.com/lxc/lxd/issues/5069
> >> (( Sorry for creating the issue, Stéphane. I thought that "issues"
> >> were not just for bugs. ))
> >>
> >> You said: "lxc profile set default root size 50GB should do the trick"
> >>
> >> Alas, ...
> >>
> >> root@maas:~# lxc profile set default root size 50GB
> >> Description:
> >>   Set profile configuration keys
> >>
> >> Usage:
> >>   lxc profile set [:]   [flags]
> >>
> >> Global Flags:
> >>   --debug Show all debug messages
> >>   --force-local   Force using the local unix socket
> >>   -h, --help  Print help
> >>   -v, --verbose   Show all information messages
> >>   --version   Print version number
> >> Error: Invalid number of arguments
> >>
> >> Furthermore.
> >> Somewhere else I read your suggestion to set volume.size but
> >> I was not able to get anything useful result. I did set the
> >> volume,size of my storage pool, but new containers were still
> >> created with the 10GB default.
> >>
> >> And, perhaps related, if I have a container with a bigger volume than 10GB,
> >> then it fails to copy that container. (Copying that container to another 
> >> LXD
> >> server with BTRFS, succeeds without problem.)
> >>
> >> Notice that I'm using LVM storage, on Ubuntu 18.04 with LXD/LXC 3.0.1
> >> -- Kees
> >>
> >>
> >>
> >> On 24-09-18 08:56, Kees Bakker wrote:
> >>> This is still unanswered.
> >>>
> >>> How do I set the default volume size of the storage pool?
> >>>
> >>> On 13-09-18 10:19, Kees Bakker wrote:
> >>>> Hey,
> >>>>
> >>>> Forgive my ignorance, but how would you do that? I have a setup with LVM
> >>>> and the default volume size is 10G. I wish to increase that default,
> >>>> what would be the command syntax? Also I want to see the current
> >>>> default settings, just so I know I'm on the right track.
> >>>>
> >>>> My pool is called "local".
> >>>>
> >>>> # lxc storage show local
> >>>> config:
> >>>>   lvm.thinpool_name: LXDThinPool
> >>>>   lvm.vg_name: local
> >>>> description: ""
> >>>> name: local
> >>>> driver: lvm
> >>>> used_by:
> >>>> - /1.0/containers/bionic01
> >>>> - /1.0/containers/kanboard
> >>>> - /1.0/containers/license4
> >>>> - /1.0/containers/usrv1
> >>>> - /1.0/containers/usrv1/snapshots/after-aptinstall-freeipa
> >>>> - 
> >>>> /1.0/images/7079d12b3253102b829d0fdd6f1f693a1654057ec054542e9e7506c7cf54fa2e
> >>>> - 
> >>>> /1.0/images/c395a7105278712478ec1dbfaab1865593fc11292f99afe01d5b94f1c34a9a3a
> >>>> - /1.0/profiles/default
> >>>> - /1.0/profiles/default_pub
> >>>> - /1.0/profiles/testprof
> >>>> status: Created
> >>>> locations:
> >>>> - maas
> >>>>
> >>>> There is no volume.size. Should I just add it?
> >>> ___
> >>> lxc-users mailing list
> >>> lxc-users@lists.linuxcontainers.org
> >>> http://lists.linuxcontainers.org/listinfo/lxc-users
> >> ___
> >> lxc-users mailing list
> >> lxc-users@lists.linuxcontainers.org
> >> http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> >
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to set default volume size using the "volume.size" property on the pool

2018-09-25 Thread Stéphane Graber
No worries, we tend to prefer support requests to go here or to
https://discuss.linuxcontainers.org

The command should have actually been:

lxc profile device set default root size 50GB

On Tue, Sep 25, 2018 at 04:11:05PM +0200, Kees Bakker wrote:
> This is a follow up of https://github.com/lxc/lxd/issues/5069
> (( Sorry for creating the issue, Stéphane. I thought that "issues"
> were not just for bugs. ))
> 
> You said: "lxc profile set default root size 50GB should do the trick"
> 
> Alas, ...
> 
> root@maas:~# lxc profile set default root size 50GB
> Description:
>   Set profile configuration keys
> 
> Usage:
>   lxc profile set [:]   [flags]
> 
> Global Flags:
>   --debug Show all debug messages
>   --force-local   Force using the local unix socket
>   -h, --help  Print help
>   -v, --verbose   Show all information messages
>   --version   Print version number
> Error: Invalid number of arguments
> 
> Furthermore.
> Somewhere else I read your suggestion to set volume.size but
> I was not able to get anything useful result. I did set the
> volume,size of my storage pool, but new containers were still
> created with the 10GB default.
> 
> And, perhaps related, if I have a container with a bigger volume than 10GB,
> then it fails to copy that container. (Copying that container to another LXD
> server with BTRFS, succeeds without problem.)
> 
> Notice that I'm using LVM storage, on Ubuntu 18.04 with LXD/LXC 3.0.1
> -- Kees
> 
> 
> 
> On 24-09-18 08:56, Kees Bakker wrote:
> > This is still unanswered.
> >
> > How do I set the default volume size of the storage pool?
> >
> > On 13-09-18 10:19, Kees Bakker wrote:
> >> Hey,
> >>
> >> Forgive my ignorance, but how would you do that? I have a setup with LVM
> >> and the default volume size is 10G. I wish to increase that default,
> >> what would be the command syntax? Also I want to see the current
> >> default settings, just so I know I'm on the right track.
> >>
> >> My pool is called "local".
> >>
> >> # lxc storage show local
> >> config:
> >>   lvm.thinpool_name: LXDThinPool
> >>   lvm.vg_name: local
> >> description: ""
> >> name: local
> >> driver: lvm
> >> used_by:
> >> - /1.0/containers/bionic01
> >> - /1.0/containers/kanboard
> >> - /1.0/containers/license4
> >> - /1.0/containers/usrv1
> >> - /1.0/containers/usrv1/snapshots/after-aptinstall-freeipa
> >> - 
> >> /1.0/images/7079d12b3253102b829d0fdd6f1f693a1654057ec054542e9e7506c7cf54fa2e
> >> - 
> >> /1.0/images/c395a7105278712478ec1dbfaab1865593fc11292f99afe01d5b94f1c34a9a3a
> >> - /1.0/profiles/default
> >> - /1.0/profiles/default_pub
> >> - /1.0/profiles/testprof
> >> status: Created
> >> locations:
> >> - maas
> >>
> >> There is no volume.size. Should I just add it?
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc publish - how to use "pigz" (parallel gzip) for compression?

2018-09-21 Thread Stéphane Graber
On Fri, Sep 21, 2018 at 09:22:46AM +0200, Tomasz Chmielewski wrote:
> On 2018-09-21 09:11, lxc-us...@licomonch.net wrote:
> 
> > maybe not what you are looking for, but could work as workaround for the
> > moment:
> > mv /snap/core/4917/bin/gzip /snap/core/4917/bin/gzip_dist
> > ln -s /usr/bin/pigz /snap/core/4917/bin/gzip
> 
> Nope, it's snap, read-only:

mount -o bind /usr/bin/pigz /snap/core/4917/bin/gzip

That may work. But won't survive core snap update or reboots.

> # touch /snap/core/4917/bin/anything
> touch: cannot touch '/snap/core/4917/bin/anything': Read-only file system

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Containers and checkpoint/restart micro-conference at LPC2018

2018-09-07 Thread Stéphane Graber
On Mon, Aug 13, 2018 at 12:10:15PM -0400, Stéphane Graber wrote:
> Hello,
> 
> This year's edition of the Linux Plumbers Conference will once again
> have a containers micro-conference but this time around we'll have twice
> the usual amount of time and will include the content that would
> traditionally go into the checkpoint/restore micro-conference.
> 
> LPC2018 will be held in Vancouver, Canada from the 13th to the 15th of
> November, co-located with the Linux Kernel Summit.
> 
> 
> We're looking for discussion topics around kernel work related to
> containers and namespacing, resource control, access control,
> checkpoint/restore of kernel structures, filesystem/mount handling for
> containers and any related userspace work.
> 
> 
> The format of the event will mostly be discussions where someone
> introduces a given topic/problem and it then gets discussed for 20-30min
> before moving on to something else. There will also be limited room for
> short demos of recent work with shorter 15min slots.
> 
> 
> Details can be found here:
> 
>   
> https://discuss.linuxcontainers.org/t/containers-micro-conference-at-linux-plumbers-2018/2417
> 
> 
> Looking forward to seeing you in Vancouver!

Hello,

We've added an extra week to the CFP, new deadline is Friday 14th of September.

If you were thinking about sending something but then forgot, or just
missed the deadline, now is your chance to send it!

Stéphane


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Announcing LXC, LXD and LXCFS 3.0.2 bugfix releases

2018-08-22 Thread Stéphane Graber
The LXC/LXD/LXCFS team is happy to announce the second round of bugfix
releases for the 3.0 LTS branch of LXC, LXD and LXCFS.

This includes over two months of accumulated bugfixes as well as the fix
for the recently fixed LXC security issue (CVE 2018-6556).

The announcements for the 3 projects can be found here:

 - LXD 3.0.2: 
https://discuss.linuxcontainers.org/t/lxd-3-0-2-has-been-released/2505/2
 - LXC 3.0.2: 
https://discuss.linuxcontainers.org/t/lxc-3-0-2-has-been-released/2504/2
 - LXCFS 3.0.2: 
https://discuss.linuxcontainers.org/t/lxcfs-3-0-2-has-been-released/2503/2

LTS branches of those projects come with a 5 years support commitment
from upstream for security and bugfixes. The 3.0 branch is the current
LTS and is supported until June 2023.


We'd like to thank all of our contributors and our amazing community for
their contributions, bug reports and help testing those releases!

On behalf of the LXC, LXD and LXCFS teams,

Stéphane Graber


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] ZFS configuration

2018-08-21 Thread Stéphane Graber
On Tue, Aug 21, 2018 at 02:40:12PM -0400, Stephen Brown Jr wrote:
> Hello,
> I am just getting started with LXD. I have an existing zfs pool, and
> want to use a ZFS dataset on that pool to store my containers on.
> 
> I ran the command lxc storage create pool1 zfs source=fast/containers, and
> it appeared to create it, however, I do not see it in the /fast directory
> nor does zpool status list this.

It's not visible under /fast because LXD configures the dataset it
creates to get mounted under its own directory.

It also wouldn't show up in "zpool" because it's a dataset within your
"fast" zpool. You will see it in "zfs list" though.

> It's possible that I don't understand how this works however. I do see it
> created if I run the command lxc storage list, it does indeed show up:
> 
> | pool1   | | zfs| fast/containers| 0
> 
> 
> I created a container for testing thinking it would show up, but no go on
> that either.
> 
> Would like to understand how this is implemented and what I should expect?
> 
> Thanks,
> Stephen


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd not starting dnsmasq

2018-08-20 Thread Stéphane Graber
On Mon, Aug 20, 2018 at 10:57:28AM -0700, Mike Wright wrote:
> Hi all,
> 
> My system is ubuntu bionic, kernel 4.15, lxd 3.2 (non-snap), fully upgraded.
> 
> After starting LXD (systemctl start lxd.socket lxd) I can create containers
> which get an eth0 that's defined in /etc/netplan/50-cloud-init.yaml as dhcp
> but they don't pick up an address.
> 
> Further investigation shows that dnsmasq is not running on the host.  I
> haven't been able to find any docs about this.  I'd start it manually but I
> can't find where/how the config for it is stored/created.
> 
> "lxc network list" shows this:
> 
> ++--+-+-+-+
> | LXD| bridge   | NO  | | 1   |
> ++--+-+-+-+
> 
> Thinking this may be the source of the problem I tried "lxc network set LXD
> dns.mode managed" and got "Error: Only managed networks can be modified."
> Câline! (thank you, netflix)
> 
> I can manually add addresses to the bridge and containers and networking
> works but I'd prefer dhcp.
> 
> I've spent days chasing this.  Uninstall, purge, install, and nothing.
> 
> What really perplexes me is that the first couple of times I installed lxd
> dnsmasq always was started when lxd was started.  (Lot of (un)installs
> trying to figure out zfs :/ ).  Now it never starts dnsmasq.
> 
> Anybody have any pointers?
> 
> Thanks,
> Mike Wright

That "br0" bridge you have is not one that's created and managed by LXD,
which explains why it wouldn't let you change settings and why it's not
doing anything to it (including running dnsmasq).

Either configure that bridge to do DHCP using whatever tool was used to
define/create it, or get rid of it and create a LXD-managed bridge with
"lxc network create".

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD cluster is unresponsible: all lxc related commands hangs

2018-08-18 Thread Stéphane Graber
Hi,

Your logs show multiple LXD processes running at the same time.
The latest revision of the stable snap (8393) should have a fix which
detects that and cleans things up.
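
Something along these lines should get both nodes onto that revision
(a sketch; the exact revision number will change over time):

  snap refresh lxd
  snap list lxd                       # both nodes should report the same Rev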

Stéphane

On Fri, Aug 17, 2018 at 10:46:29PM +0300, Andriy Tovstik wrote:
> Hi!
> 
> Both lxd.log are empty :(
> 
> ps output - in attachment
> 
> On Fri, 17 Aug 2018 at 18:06, Stéphane Graber wrote:
> 
> > On Fri, Aug 17, 2018 at 05:56:24PM +0300, Andriy Tovstik wrote:
> > > Hi!
> > >
> > > On Fri, 17 Aug 2018 at 17:46, Stéphane Graber wrote:
> > >
> > > > On Fri, Aug 17, 2018 at 01:20:42PM +0300, Andriy Tovstik wrote:
> > > > > Hi, all!
> > > > >
> > > > > Some time ago I installed a dual node LXD cluster. Today I logged in
> > to
> > > > the
> > > > > node and tried to execute
> > > > > lxc exec container -- bash
> > > > > but command hanged.
> > > > > Also, all lxc commands are unresponsible: i'm not able to interact
> > with
> > > > my
> > > > > cluster and my containers.
> > > > > I tried to restart snap.lxd.daemon but it didn't help. journalctl -u
> > > > > snap.lxd.daemon - in attachment.
> > > > >
> > > > > Any suggestion?
> > > >
> > > > Are both nodes running the same snap revision according to `snap list`?
> > > >
> > > > LXD cluster nodes must all run the exact same version, otherwise they
> > > > effectively wait until this becomes the case before they start replying
> > > > to API queries.
> > > >
> > >
> > > snap output:
> > > $ snap list lxd
> > > Name  Version  Rev   Tracking  Publisher  Notes
> > > lxd   3.4  8297  stablecanonical  -
> > >
> > >  the same on both nodes
> >
> > Thanks, can you provide the following from both nodes:
> >  - ps fauxww
> >  - cat /var/snap/lxd/common/lxd/logs/lxd.log
> >
> > And can you try running "lxc cluster list" see if that gets stuck too?
> >
> >
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users



> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD cluster is unresponsible: all lxc related commands hangs

2018-08-17 Thread Stéphane Graber
That's somewhat unlikely, though it's hard to tell post-reboot.

It could be that something upset the kernel (ZFS has been known to do
that sometimes) and LXD couldn't be killed anymore as it was stuck in
the kernel.

If this happens to you again, try recording the following before rebooting:
 - journalctl -u snap.lxd.daemon -n300
 - dmesg
 - ps fauxww

This will usually be enough to determine what was at fault.

Stéphane

On Fri, Aug 17, 2018 at 11:58:35PM +0800, ronkaluta wrote:
> I tried snap refresh alone but did not fix the problem.
> 
> Could it be something involved with the kernel update
> 
> linux-image-4.15.0-32?
> 
> 
> On Friday, August 17, 2018 11:56 PM, Stéphane Graber wrote:
> > Apt upgrades shouldn't really be needed, though certainly good to make
> > sure the rest of the system stays up to date :)
> > 
> > The most important part is ensuring that all cluster nodes run the same
> > version of LXD (3.4 in this case), once they all do, the cluster should
> > allow queries.
> > 
> > This upgrade procedure isn't so great and we're well aware of it.
> > I'll open a Github issue to track some improvements we should be making
> > to make such upgrades much more seamless.
> > 
> > On Fri, Aug 17, 2018 at 11:26:08PM +0800, ronkaluta wrote:
> > > I just had roughly the same problem.
> > > 
> > > The way I cured it was to snap refresh
> > > 
> > > then sudo apt update and then
> > > 
> > > sudo apt upgrade
> > > 
> > > current lxd snap is 3.4
> > > 
> > > current linux-image-4.15.0-32-generic
> > > 
> > > I then rebooted.
> > > 
> > > (same procedure on all machines)
> > > 
> > > 
> > > On Friday, August 17, 2018 10:46 PM, Stéphane Graber wrote:
> > > > On Fri, Aug 17, 2018 at 01:20:42PM +0300, Andriy Tovstik wrote:
> > > > > Hi, all!
> > > > > 
> > > > > Some time ago I installed a dual node LXD cluster. Today I logged in 
> > > > > to the
> > > > > node and tried to execute
> > > > > lxc exec container -- bash
> > > > > but command hanged.
> > > > > Also, all lxc commands are unresponsible: i'm not able to interact 
> > > > > with my
> > > > > cluster and my containers.
> > > > > I tried to restart snap.lxd.daemon but it didn't help. journalctl -u
> > > > > snap.lxd.daemon - in attachment.
> > > > > 
> > > > > Any suggestion?
> > > > Are both nodes running the same snap revision according to `snap list`?
> > > > 
> > > > LXD cluster nodes must all run the exact same version, otherwise they
> > > > effectively wait until this becomes the case before they start replying
> > > > to API queries.
> > 
> > 
> > 
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD cluster is unresponsible: all lxc related commands hangs

2018-08-17 Thread Stéphane Graber
Apt upgrades shouldn't really be needed, though certainly good to make
sure the rest of the system stays up to date :)

The most important part is ensuring that all cluster nodes run the same
version of LXD (3.4 in this case), once they all do, the cluster should
allow queries.

This upgrade procedure isn't so great and we're well aware of it.
I'll open a GitHub issue to track some improvements we should be making
to make such upgrades much more seamless.

On Fri, Aug 17, 2018 at 11:26:08PM +0800, ronkaluta wrote:
> I just had roughly the same problem.
> 
> The way I cured it was to snap refresh
> 
> then sudo apt update and then
> 
> sudo apt upgrade
> 
> current lxd snap is 3.4
> 
> current linux-image-4.15.0-32-generic
> 
> I then rebooted.
> 
> (same procedure on all machines)
> 
> 
> On Friday, August 17, 2018 10:46 PM, Stéphane Graber wrote:
> > On Fri, Aug 17, 2018 at 01:20:42PM +0300, Andriy Tovstik wrote:
> > > Hi, all!
> > > 
> > > Some time ago I installed a dual node LXD cluster. Today I logged in to 
> > > the
> > > node and tried to execute
> > > lxc exec container -- bash
> > > but command hanged.
> > > Also, all lxc commands are unresponsible: i'm not able to interact with my
> > > cluster and my containers.
> > > I tried to restart snap.lxd.daemon but it didn't help. journalctl -u
> > > snap.lxd.daemon - in attachment.
> > > 
> > > Any suggestion?
> > Are both nodes running the same snap revision according to `snap list`?
> > 
> > LXD cluster nodes must all run the exact same version, otherwise they
> > effectively wait until this becomes the case before they start replying
> > to API queries.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD cluster is unresponsible: all lxc related commands hangs

2018-08-17 Thread Stéphane Graber
On Fri, Aug 17, 2018 at 01:20:42PM +0300, Andriy Tovstik wrote:
> Hi, all!
> 
> Some time ago I installed a dual node LXD cluster. Today I logged in to the
> node and tried to execute
> lxc exec container -- bash
> but command hanged.
> Also, all lxc commands are unresponsible: i'm not able to interact with my
> cluster and my containers.
> I tried to restart snap.lxd.daemon but it didn't help. journalctl -u
> snap.lxd.daemon - in attachment.
> 
> Any suggestion?

Are both nodes running the same snap revision according to `snap list`?

LXD cluster nodes must all run the exact same version, otherwise they
effectively wait until this becomes the case before they start replying
to API queries.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Containers and checkpoint/restart micro-conference at LPC2018

2018-08-13 Thread Stéphane Graber
Hello,

This year's edition of the Linux Plumbers Conference will once again
have a containers micro-conference but this time around we'll have twice
the usual amount of time and will include the content that would
traditionally go into the checkpoint/restore micro-conference.

LPC2018 will be held in Vancouver, Canada from the 13th to the 15th of
November, co-located with the Linux Kernel Summit.


We're looking for discussion topics around kernel work related to
containers and namespacing, resource control, access control,
checkpoint/restore of kernel structures, filesystem/mount handling for
containers and any related userspace work.


The format of the event will mostly be discussions where someone
introduces a given topic/problem and it then gets discussed for 20-30min
before moving on to something else. There will also be limited room for
short demos of recent work with shorter 15min slots.


Details can be found here:

  
https://discuss.linuxcontainers.org/t/containers-micro-conference-at-linux-plumbers-2018/2417


Looking forward to seeing you in Vancouver!

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-12 Thread Stéphane Graber
On Sun, Aug 12, 2018 at 07:46:40PM +0200, Pierre Couderc wrote:
> 
> On 08/12/2018 07:33 PM, Pierre Couderc wrote:
> > 
> > 
> > On 08/12/2018 05:07 PM, Stéphane Graber wrote:
> > > .
> > > 
> > Thank  you Stéphane, it builds on Debian 9.5, Now, I shall install it !
> But it fails to start with :
> nous@couderc:~/go/src/github.com/lxc/lxd$ sudo -E $GOPATH/bin/lxd --group
> sudo
> /home/nous/go/bin/lxd: error while loading shared libraries: libdqlite.so.0:
> cannot open shared object file: No such file or directory

The way you invoked sudo strips your LD_LIBRARY_PATH, causing that error.
You can instead run "sudo -E -s" and then run "lxd --group sudo"; it
should then start fine.
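
A quick sketch of that sequence, assuming the binary was built into
$GOPATH/bin as above:

  sudo -E -s                          # root shell that keeps your environment
  lxd --group sudo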

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-12 Thread Stéphane Graber
On Sun, Aug 12, 2018 at 09:09:41AM +0200, Pierre Couderc wrote:
> 
> 
> On 08/11/2018 11:45 PM, Stéphane Graber wrote:
> > 
> > https://github.com/lxc/lxd/pull/4908 will take care of the
> > PKG_CONFIG_PATH problem. With that I can build fine without
> > libsqlite3-dev on my system.
> Thank you for the Saturday evening reply...
> 
> I had too to install libuv1-dev. I suggest updating the requirements in
> https://github.com/lxc/lxd/blob/master/README.md
> 
> I have progressed but I am blocked (on debian 9.5) now by :
> 
> shared/idmap/shift_linux.go:23:28: fatal error: sys/capability.h: No such
> file or directory
>  #include 

Install libcap-dev; that should provide that header. I'll look at
updating the documented dependencies (the ones external to Go anyway).
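
On Debian/Ubuntu that would be something like (package names as mentioned in
this thread):

  sudo apt install libcap-dev libuv1-dev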

> 
> I have tried (without knowing nothing of what is  it) :
> 
> sudo ln -s /usr/include/linux/capability.h /usr/include/sys/capability.h
> 
> but it is not better :  unknown type name 'uint32_t'...
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-11 Thread Stéphane Graber
On Sat, Aug 11, 2018 at 08:37:04AM +0200, Pierre Couderc wrote:
> Trying to build lxd from sources, I get a message about sqlite3 missing, and
> an invite to "make deps".
> 
> But it fails too with :
> 
> 
> No package 'sqlite3' found

https://github.com/lxc/lxd/pull/4908 will take care of the
PKG_CONFIG_PATH problem. With that I can build fine without
libsqlite3-dev on my system.
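
If you do need to point pkg-config at a custom sqlite build in the meantime,
a generic sketch (the prefix path here is purely hypothetical):

  export PKG_CONFIG_PATH=/opt/sqlite/lib/pkgconfig
  pkg-config --cflags --libs sqlite3   # should now resolve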

> 
> Consider adjusting the PKG_CONFIG_PATH environment variable if you
> installed software in a non-standard prefix.
> 
> Alternatively, you may set the environment variables sqlite_CFLAGS
> and sqlite_LIBS to avoid the need to call pkg-config.
> See the pkg-config man page for more details
> 
> 
> And the man pkg-config is not clear to me...
> 
> Thanks  for help.
> 
> 
> PC
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Convert virtual machine to LXC container

2018-08-08 Thread Stéphane Graber
On Wed, Aug 08, 2018 at 07:04:31PM -0400, Saint Michael wrote:
> Has anybody invented a procedure, a script, etc., to convert a running
> machine to a LXC container? I was thinking to create a container of the
> same OS, and then use rsync, excluding /proc /tmp/ /sys etc.  Any ideas?

We have that for LXD: it's called lxd-p2c and it does pretty much
exactly what you describe, but uses the LXD migration API as the rsync
target (effectively rsync over a custom websocket over HTTPS).
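
A rough usage sketch, run on the machine being converted (the URL and name
are placeholders; check "lxd-p2c --help" for the exact syntax of your build):

  lxd-p2c https://target-lxd-host:8443 my-container /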


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] "error: LXD still not running after 5 minutes" - failed lxd.migrate - how to recover?

2018-08-08 Thread Stéphane Graber
On Wed, Aug 08, 2018 at 11:26:10PM +0200, Tomasz Chmielewski wrote:
> On 2018-08-08 22:26, Stéphane Graber wrote:
> 
> > > Not sure how to recover now? The containers seem intact in
> > > /var/lib/lxd/
> > 
> > What do you get if you do "journalctl -u snap.lxd.daemon -n 300" and
> 
> -- Logs begin at Thu 2018-07-12 06:07:13 UTC, end at Wed 2018-08-08 21:07:13
> UTC. --
> Aug 08 18:21:12 b1 systemd[1]: Started Service for snap application
> lxd.daemon.
> Aug 08 18:21:12 b1 lxd.daemon[12581]: => Preparing the system
> Aug 08 18:21:12 b1 lxd.daemon[12581]: ==> Creating missing snap
> configuration
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Loading snap configuration
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Setting up mntns symlink
> (mnt:[4026532794])
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Setting up kmod wrapper
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Preparing /boot
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Preparing a clean copy of /run
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Preparing a clean copy of /etc
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Setting up ceph configuration
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Setting up LVM configuration
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Rotating logs
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Setting up ZFS (0.7)
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Escaping the systemd cgroups
> Aug 08 18:21:13 b1 lxd.daemon[12581]: ==> Escaping the systemd process
> resource limits
> Aug 08 18:21:41 b1 systemd[1]: Stopping Service for snap application
> lxd.daemon...
> Aug 08 18:21:42 b1 lxd.daemon[13595]: => Stop reason is: host shutdown
> Aug 08 18:21:42 b1 lxd.daemon[13595]: => Stopping LXD (with container
> shutdown)
> Aug 08 18:21:42 b1 lxd.daemon[13595]: => Stopping LXCFS
> Aug 08 18:21:43 b1 systemd[1]: Stopped Service for snap application
> lxd.daemon.
> Aug 08 18:21:44 b1 systemd[1]: Started Service for snap application
> lxd.daemon.
> Aug 08 18:21:44 b1 lxd.daemon[13676]: => Preparing the system
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Loading snap configuration
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Setting up mntns symlink
> (mnt:[4026532794])
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Setting up kmod wrapper
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Preparing /boot
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Preparing a clean copy of /run
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Preparing a clean copy of /etc
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Setting up ceph configuration
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Setting up LVM configuration
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Rotating logs
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Setting up ZFS (0.7)
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Escaping the systemd cgroups
> Aug 08 18:21:44 b1 lxd.daemon[13676]: ==> Escaping the systemd process
> resource limits
> Aug 08 18:21:44 b1 lxd.daemon[13676]: => Starting LXCFS
> Aug 08 18:21:44 b1 lxd.daemon[13676]: => Starting LXD
> Aug 08 18:21:44 b1 lxd.daemon[13676]: lvl=warn msg="AppArmor support has
> been disabled because of lack of kernel support" t=2018-08-08T18:21:44+
> Aug 08 18:21:44 b1 lxd.daemon[13676]: lvl=warn msg="CGroup memory swap
> accounting is disabled, swap limits will be ignored."
> t=2018-08-08T18:21:44+
> Aug 08 18:21:44 b1 lxd.daemon[13676]: mount namespace: 5
> Aug 08 18:21:44 b1 lxd.daemon[13676]: hierarchies:
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   0: fd:   6: hugetlb
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   1: fd:   7: pids
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   2: fd:   8: cpuset
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   3: fd:   9: perf_event
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   4: fd:  10: freezer
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   5: fd:  11: memory
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   6: fd:  12: devices
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   7: fd:  13: blkio
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   8: fd:  14: cpu,cpuacct
> Aug 08 18:21:44 b1 lxd.daemon[13676]:   9: fd:  15: net_cls,net_prio
> Aug 08 18:21:44 b1 lxd.daemon[13676]:  10: fd:  16: rdma
> Aug 08 18:21:44 b1 lxd.daemon[13676]:  11: fd:  17: name=systemd
> Aug 08 18:21:44 b1 lxd.daemon[13676]:  12: fd:  18: unified
> Aug 08 18:28:07 b1 systemd[1]: Stopping Service for snap application
> lxd.daemon...
> Aug 08 18:28:07 b1 lxd.daemon[18773]: => Stop reason is: host shutdown
> Aug 08 18:28:07 b1 lxd.daemon[18773]: => Stopping LXD (with container
> shutdown)
> Aug 08 18:37:24 b1 lxd.daemon[18773]: => Stopping LXCFS
> Aug 08 18:37:25 b1 systemd[1]: Stopped Service for snap a

Re: [lxc-users] "error: LXD still not running after 5 minutes" - failed lxd.migrate - how to recover?

2018-08-08 Thread Stéphane Graber
On Wed, Aug 08, 2018 at 09:06:40PM +0200, Tomasz Chmielewski wrote:
> I've tried to migrate from deb to snap on Ubuntu 18.04.
> 
> Unfortunately, lxd.migrate failed with "error: LXD still not running after 5
> minutes":
> 
> root@b1 ~ # lxd.migrate
> => Connecting to source server
> => Connecting to destination server
> => Running sanity checks
> 
> === Source server
> LXD version: 3.0.1
> LXD PID: 2656
> Resources:
>   Containers: 6
>   Images: 4
>   Networks: 1
>   Storage pools: 1
> 
> === Destination server
> LXD version: 3.3
> LXD PID: 12791
> Resources:
>   Containers: 0
>   Images: 0
>   Networks: 0
>   Storage pools: 0
> 
> The migration process will shut down all your containers then move your data
> to the destination LXD.
> Once the data is moved, the destination LXD will start and apply any needed
> updates.
> And finally your containers will be brought back to their previous state,
> completing the migration.
> 
> WARNING: /var/lib/lxd is a mountpoint. You will need to update that mount
> location after the migration.
> 
> Are you ready to proceed (yes/no) [default=no]? yes
> => Shutting down the source LXD
> => Stopping the source LXD units
> => Stopping the destination LXD unit
> => Unmounting source LXD paths
> => Unmounting destination LXD paths
> => Wiping destination LXD clean
> => Backing up the database
> => Moving the /var/lib/lxd mountpoint
> => Updating the storage backends
> => Starting the destination LXD
> => Waiting for LXD to come online
> 
> error: LXD still not running after 5 minutes.
> 
> 
> 
> root@b1 ~ # lxd.migrate
> => Connecting to source server
> error: Unable to connect to the source LXD: Get http://unix.socket/1.0: dial
> unix /var/lib/lxd/unix.socket: connect: no such file or directory
> 
> 
> 
> root@b1 ~ # lxc list
> Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket:
> connect: no such file or directory
> 
> 
> 
> Not sure how to recover now? The containers seem intact in /var/lib/lxd/

What do you get if you do "journalctl -u snap.lxd.daemon -n 300", and is
there anything useful-looking in /var/snap/lxd/common/lxd/logs/lxd.log?

It's expected that "systemctl start lxd" won't work anymore since the
data was moved over to the snap which then likely caused your database
to be upgraded, making it unreadable for your older deb version of LXD.

I'd recommend you do:
 - systemctl stop lxd lxd.socket
 - systemctl mask lxd lxd.socket

To prevent any accidental startup of your old LXD until the snap
migration is done and it can be safely removed.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] CVE-2018-6556: lxc-user-nic allows for open() of arbitrary paths

2018-08-06 Thread Stéphane Graber
Hello,

This is a notice for a security issue affecting the following LXC versions:
 - 2.0.9 and higher
 - 3.0.0 and higher


Description of the issue:
  lxc-user-nic (setuid), when asked to delete a network interface, will
  unconditionally open a user-provided path.

  This code path may be used by an unprivileged user to check for
  the existence of a path which they wouldn't otherwise be able to reach.

  It may also be used to trigger side effects by causing a (read-only) open
  of special kernel files (ptmx, proc, sys).

This was reported to us by Matthias Gerstner from SUSE; Christian
Brauner on the LXC team took care of finding a workable solution and
preparing the needed updates.


Fixes:
 - stable-2.0: 
https://github.com/lxc/lxc/commit/5eb45428b312e978fb9e294dde16efb14dd9fa4d
 - stable-3.0: 
https://github.com/lxc/lxc/commit/c1cf54ebf251fdbad1e971679614e81649f1c032
 - master: 
https://github.com/lxc/lxc/commit/f26dc127bf5d66e8c29f8584c64bd97c9bbbc574

Linux distributions were privately notified with about a week's notice
and so should have security updates ready for this already, or will
shortly.

We will not be issuing emergency release tarballs for this issue so if
you're maintaining your own build, you should be cherry-picking one of
the fixes above. We do however intend to release LXC 3.0.2 very shortly
which will include this fix among other traditional bugfixes.
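
For example, on a tree based on stable-3.0 (a sketch using the commit listed
above):

  git cherry-pick c1cf54ebf251fdbad1e971679614e81649f1c032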

References:
 - https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1783591
 - https://bugzilla.suse.com/show_bug.cgi?id=988348

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] nvidia driver & runtime passthrough without lxd?

2018-07-27 Thread Stéphane Graber
On Fri, Jul 27, 2018 at 08:06:59PM -0700, Forest wrote:
> Is there a way to use the new nvidia runtime passthrough feature with plain
> lxc containers?  That is, without lxd at all?  If so, can someone point me
> toward a doc on how to do this?

Not sure how much documentation is out there about this, but it's
effectively just a matter of adding the nvidia hook to your config and
setting a few environment variables to control its behavior.

https://github.com/lxc/lxc/blob/master/hooks/nvidia

This can be used with:

  lxc.hook.mount = /usr/share/lxc/hooks/nvidia

https://github.com/NVIDIA/nvidia-container-runtime has details for the
environment variables.
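
A minimal sketch of a container config using it (the variable values are just
examples; see the nvidia-container-runtime documentation for the full list):

  lxc.hook.mount = /usr/share/lxc/hooks/nvidia
  lxc.environment = NVIDIA_VISIBLE_DEVICES=all
  lxc.environment = NVIDIA_DRIVER_CAPABILITIES=compute,utility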

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd-3.2 predefined keys

2018-07-21 Thread Stéphane Graber
On Sat, Jul 21, 2018 at 03:26:14PM -0700, Mike Wright wrote:
> Hi all,
> 
> I'm trying to learn how to use lxd so I installed the lxd-3.2 snap and zfs.
> 
> note: I'm new to lxd, zfs, and snaps (but familiar with lxc).
> 
> Using zpool and zfs I created a zpool and a dataset, "ZFS/lxd".
> 
> I think I can use that dataset as the backing store for a lxd pool:
> 
> lxc storage create lxd zfs [key=value...]
> 
> but I have no idea what the key names are for "mount point", "backing store
> for the pool", etc.  In fact, without the key names, I can't inspect any of
> the settings except via show and info.  (I did say I'm unfamiliar with lxd
> ;D

http://lxd.readthedocs.io/en/latest/storage/
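
For the setup you describe, something like this should work (a sketch; the
key names come from that storage documentation):

  lxc storage create lxd zfs source=ZFS/lxd
  lxc storage show lxd                # lists the pool's config keys and values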

> Where can I find a list of the predefined keys used by the lxc commands?
> 
> I've been spinning my wheels for at least a week and getting nowhere, so
> great thanks for any help or pointers,
> 
> Mike Wright

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc 2.0 => 3.0 and ubuntu.common.conf (and lxc-templates dpkg)

2018-07-20 Thread Stéphane Graber
ntu.common.conf
> > 
> > But the containers still fail afterwards because the include is not
> > satisfied.  (They fail, that is, until "lxc-templates" is installed).
> > 
> > (Assume here that a common way of producing an upgraded host system,
> > e.g. Ubuntu 18.04 versus 16.04, is to produce a new minimal 18.04
> > system and then apply all recorded "apt install" commands corresponding
> > to actual desired packages (yes, modifications are sometimes needed,
> > but...).  The old 16.04 system need not have explicitly done "apt
> > install lxc-templates" and so that could be overlooked in 18.04).
> > 
> > Since "lxc" is now branded as a transitional package => lxc-utils,
> > could "lxc" have lxc-templates as a dependency, even though lxc-utils
> > will not?
> > 
> > 
> > Adrian Pepper
> > arpep...@uwaterloo.ca
> > 
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users



-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc 2.0.10?

2018-07-06 Thread Stéphane Graber
On Fri, Jul 06, 2018 at 01:21:48PM +0200, Harald Dunkel wrote:
> Hi folks,
> 
> I see tons of interesting new code on the stable-2.0 branch,
> esp. cgfsng. What is your plan here? Will this be tagged in
> the near future to get a new snapshot 2.0.10?
> 
> 
> Regards
> Harri

Hi,

Yes, we will be tagging a new set of 2.0.x point releases for LXC, LXD
and LXCFS in the near future. This is currently held up a bit as we're
still dealing with people upgrading to 3.0.x and finding new issues
there as well as a rather big backlog in the LXD 2.0.x branch (got to
review several hundred patches before it's ready for a point release).

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question : duplication

2018-06-08 Thread Stéphane Graber
On Fri, Jun 08, 2018 at 12:13:34PM +0100, Thouraya TH wrote:
> Hi,
> 
> In my cluster, i have 3 containers per host  and i have 10 hosts.
> all containers are ubuntu containers.
> My question: the rootfs of these containers are the same ?
> i have duplicated files in different containers repository ?
> 
> 
> 
> Thank you so much for answers.
> Best regards.

That depends on your storage backend: all backends except the directory
one use copy-on-write, storing only the deltas between the image used
for the container and its current state.

If you're using ZFS and have quite a bit of spare RAM, you can also turn
on deduplication on your zpool, which will then deduplicate writes as
they happen, possibly saving you a lot of disk space.
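
A sketch of turning it on for an existing pool ("tank" is a placeholder pool
name):

  sudo zfs set dedup=on tank
  zpool get dedupratio tank           # how much is actually being saved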


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXCFS installation effects

2018-06-05 Thread Stéphane Graber
On Tue, Jun 05, 2018 at 03:19:08PM -0700, Martín Fernández wrote:
> Awesome!
> 
> You mean overriding my current `lxc.include` or adding an additional 
> `lxc.include` ? Not sure if lxc supports multiple includes.


Add a separate line with the second include; you can have as many
lxc.include lines in your config as you want.
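
So the container config would end up with both, something like (paths as
discussed in this thread):

  lxc.include = /usr/share/lxc/config/ubuntu.common.conf
  lxc.include = /usr/share/lxc/config/common.conf.d/00-lxcfs.conf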

> 
> Sorry for the delay!
> 
> Best,
> Martín
> 
> On Tue, Jun 05, 2018 at 4:38 PM, Stéphane Graber
> wrote:
> 
> > 
> > 
> > 
> > Ah, that is missing a bit that I'd have expected common.conf to contain.
> > 
> > Can you try adding this to your container's config:
> > 
> > lxc.include = /usr/share/lxc/config/common.conf.d/00-lxcfs.conf
> > 
> > 
> > 
> > On Tue, Jun 05, 2018 at 12:29:39PM -0700, Martín Fernández wrote:
> > > Content of the omitted include:
> > >
> > > # Default pivot location
> > > lxc.pivotdir = lxc_putold
> > >
> > > # Default mount entries
> > > lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
> > > lxc.mount.entry = sysfs sys sysfs defaults 0 0
> > > lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none
> > bind,optional 0 0
> > > lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none bind,optional
> > 0 0
> > > lxc.mount.entry = /sys/kernel/security sys/kernel/security none
> > bind,optional 0 0
> > > lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none bind,optional 0 0
> > >
> > > # Default console settings
> > > lxc.devttydir = lxc
> > > lxc.tty = 4
> > > lxc.pts = 1024
> > >
> > > # Default capabilities
> > > lxc.cap.drop = sys_module mac_admin mac_override sys_time
> > >
> > > # When using LXC with apparmor, the container will be confined by
> > default.
> > > # If you wish for it to instead run unconfined, copy the following line
> > > # (uncommented) to the container's configuration file.
> > > #lxc.aa_profile = unconfined
> > >
> > > # To support container nesting on an Ubuntu host while retaining most of
> > 
> > > # apparmor's added security, use the following two lines instead.
> > > #lxc.aa_profile = lxc-container-default-with-nesting
> > > #lxc.mount.auto = cgroup:mixed
> > >
> > > # Uncomment the following line to autodetect squid-deb-proxy
> > configuration on the
> > > # host and forward it to the guest at start time.
> > > #lxc.hook.pre-start = /usr/share/lxc/hooks/squid-deb-proxy-client
> > >
> > > # If you wish to allow mounting block filesystems, then use the
> > following
> > > # line instead, and make sure to grant access to the block device and/or
> > loop
> > > # devices below in lxc.cgroup.devices.allow.
> > > #lxc.aa_profile = lxc-container-default-with-mounting
> > >
> > > # Default cgroup limits
> > > lxc.cgroup.devices.deny = a
> > > ## Allow any mknod (but not using the node)
> > > lxc.cgroup.devices.allow = c *:* m
> > > lxc.cgroup.devices.allow = b *:* m
> > > ## /dev/null and zero
> > > lxc.cgroup.devices.allow = c 1:3 rwm
> > > lxc.cgroup.devices.allow = c 1:5 rwm
> > > ## consoles
> > > lxc.cgroup.devices.allow = c 5:0 rwm
> > > lxc.cgroup.devices.allow = c 5:1 rwm
> > > ## /dev/{,u}random
> > > lxc.cgroup.devices.allow = c 1:8 rwm
> > > lxc.cgroup.devices.allow = c 1:9 rwm
> > > ## /dev/pts/*
> > > lxc.cgroup.devices.allow = c 5:2 rwm
> > > lxc.cgroup.devices.allow = c 136:* rwm
> > > ## rtc
> > > lxc.cgroup.devices.allow = c 254:0 rm
> > > ## fuse
> > > lxc.cgroup.devices.allow = c 10:229 rwm
> > > ## tun
> > > lxc.cgroup.devices.allow = c 10:200 rwm
> > > ## full
> > > lxc.cgroup.devices.allow = c 1:7 rwm
> > > ## hpet
> > > lxc.cgroup.devices.allow = c 10:228 rwm
> > > ## kvm
> > > lxc.cgroup.devices.allow = c 10:232 rwm
> > > ## To use loop devices, copy the following line to the container's
> > > ## configuration file (uncommented).
> > > #lxc.cgroup.devices.allow = b 7:* rwm
> > >
> > > # Blacklist some syscalls which are not safe in privileged
> > > # containers
> > > lxc.seccomp = /usr/share/lxc/config/common.seccomp
> > >
> > > Martín
> > >
> > > On Tue, Jun 05, 2018 at 4:28 PM fmarti...@gmail.com < fmarti...@gmail.com
> > > wrote:
>

Re: [lxc-users] LXCFS installation effects

2018-06-05 Thread Stéphane Graber
Ah, that is missing a bit that I'd have expected common.conf to contain.

Can you try adding this to your container's config:

lxc.include = /usr/share/lxc/config/common.conf.d/00-lxcfs.conf

On Tue, Jun 05, 2018 at 12:29:39PM -0700, Martín Fernández wrote:
> Content of the omitted include:
> 
> # Default pivot location
> lxc.pivotdir = lxc_putold
> 
> # Default mount entries
> lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
> lxc.mount.entry = sysfs sys sysfs defaults 0 0
> lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none 
> bind,optional 0 0
> lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none bind,optional 0 0
> lxc.mount.entry = /sys/kernel/security sys/kernel/security none bind,optional 
> 0 0
> lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none bind,optional 0 0
> 
> # Default console settings
> lxc.devttydir = lxc
> lxc.tty = 4
> lxc.pts = 1024
> 
> # Default capabilities
> lxc.cap.drop = sys_module mac_admin mac_override sys_time
> 
> # When using LXC with apparmor, the container will be confined by default.
> # If you wish for it to instead run unconfined, copy the following line
> # (uncommented) to the container's configuration file.
> #lxc.aa_profile = unconfined
> 
> # To support container nesting on an Ubuntu host while retaining most of
> # apparmor's added security, use the following two lines instead.
> #lxc.aa_profile = lxc-container-default-with-nesting
> #lxc.mount.auto = cgroup:mixed
> 
> # Uncomment the following line to autodetect squid-deb-proxy configuration on 
> the
> # host and forward it to the guest at start time.
> #lxc.hook.pre-start = /usr/share/lxc/hooks/squid-deb-proxy-client
> 
> # If you wish to allow mounting block filesystems, then use the following
> # line instead, and make sure to grant access to the block device and/or loop
> # devices below in lxc.cgroup.devices.allow.
> #lxc.aa_profile = lxc-container-default-with-mounting
> 
> # Default cgroup limits
> lxc.cgroup.devices.deny = a
> ## Allow any mknod (but not using the node)
> lxc.cgroup.devices.allow = c *:* m
> lxc.cgroup.devices.allow = b *:* m
> ## /dev/null and zero
> lxc.cgroup.devices.allow = c 1:3 rwm
> lxc.cgroup.devices.allow = c 1:5 rwm
> ## consoles
> lxc.cgroup.devices.allow = c 5:0 rwm
> lxc.cgroup.devices.allow = c 5:1 rwm
> ## /dev/{,u}random
> lxc.cgroup.devices.allow = c 1:8 rwm
> lxc.cgroup.devices.allow = c 1:9 rwm
> ## /dev/pts/*
> lxc.cgroup.devices.allow = c 5:2 rwm
> lxc.cgroup.devices.allow = c 136:* rwm
> ## rtc
> lxc.cgroup.devices.allow = c 254:0 rm
> ## fuse
> lxc.cgroup.devices.allow = c 10:229 rwm
> ## tun
> lxc.cgroup.devices.allow = c 10:200 rwm
> ## full
> lxc.cgroup.devices.allow = c 1:7 rwm
> ## hpet
> lxc.cgroup.devices.allow = c 10:228 rwm
> ## kvm
> lxc.cgroup.devices.allow = c 10:232 rwm
> ## To use loop devices, copy the following line to the container's
> ## configuration file (uncommented).
> #lxc.cgroup.devices.allow = b 7:* rwm
> 
> # Blacklist some syscalls which are not safe in privileged
> # containers
> lxc.seccomp = /usr/share/lxc/config/common.seccomp
> 
> Martín
> 
> On Tue, Jun 05, 2018 at 4:28 PM fmarti...@gmail.com < fmarti...@gmail.com > 
> wrote:
> 
> > 
> > 
> > I omitted this line that is probably important!
> > 
> > 
> > # Common configuration
> > lxc.include = /usr/share/lxc/config/ubuntu.common.conf
> > 
> > 
> > Best,
> > Martín
> > 
> > On Tue, Jun 05, 2018 at 4:24 PM, Stéphane Graber
> > wrote:
> > 
> > 
> >> 
> >> 
> >> Is that all you have or is there some lines before that?
> >> 
> >> 
> >> 
> >> On Tue, Jun 05, 2018 at 12:16:48PM -0700, Martín Fernández wrote:
> >> > Stéphane,
> >> >
> >> > I think this could be the issue in the configuration:
> >> >
> >> > ```
> >> > # Container specific configuration
> >> > lxc.rootfs = /dev/Main/app1-dev
> >> > lxc.mount = /var/lib/lxc/app1-dev/fstab
> >> > lxc.utsname = app1-dev
> >> > lxc.arch = amd64
> >> > ```
> >> >
> >> > Best,
> >> > Martín
> >> >
> >> > On Tue, Jun 05, 2018 at 4:14 PM, Stéphane Graber
> >> > wrote:
> >> >
> >> > >
> >> > >
> >> > >
> >> > > /var/lib/lxc/ /config for the container you're testing things with.
> >> > >
> >&g

Re: [lxc-users] LXCFS installation effects

2018-06-05 Thread Stéphane Graber
Is that all you have or is there some lines before that?

On Tue, Jun 05, 2018 at 12:16:48PM -0700, Martín Fernández wrote:
> Stéphane,
> 
> I think this could be the issue in the configuration:
> 
> ```
> # Container specific configuration
> lxc.rootfs = /dev/Main/app1-dev
> lxc.mount = /var/lib/lxc/app1-dev/fstab
> lxc.utsname = app1-dev
> lxc.arch = amd64
> ```
> 
> Best,
> Martín
> 
> On Tue, Jun 05, 2018 at 4:14 PM, Stéphane Graber
> wrote:
> 
> > 
> > 
> > 
> > /var/lib/lxc/ /config for the container you're testing things with.
> > 
> > 
> > 
> > 
> > On Tue, Jun 05, 2018 at 12:09:52PM -0700, Martín Fernández wrote:
> > > Stéphane,
> > >
> > > Not sure what configuration file you are talking about. Configuration
> > file under /etc/lxc/default.conf looks like this:
> > >
> > > ```
> > > lxc.network.type = veth
> > > lxc.network.link ( http://lxc.network.link ) = br0
> > > lxc.network.flags = up
> > > lxc.network.hwaddr = X
> > > ```
> > >
> > > Any lxc-* command that I could use to introspect the containers and get
> > more information to troubleshoot ?
> > >
> > > Thanks again!
> > >
> > > Best,
> > > Martín
> > >
> > > On Tue, Jun 05, 2018 at 4:05 PM, Stéphane Graber
> > > wrote:
> > >
> > > >
> > > >
> > > >
> > > > What's your container's config like?
> > > >
> > > > I wonder if it's somehow missing the include (usually indirect through
> > 
> > > > common.conf) that's needed for the lxcfs hook.
> > > >
> > > >
> > > >
> > > > On Tue, Jun 05, 2018 at 11:57:39AM -0700, Martín Fernández wrote:
> > > > > Stéphane,
> > > > >
> > > > > `grep lxcfs /proc/1/mountinfo` doesn’t return any output.
> > > > >
> > > > > On the other hand,  /var/lib/lxcfs/ shows `cgroup` and `proc`
> > folders
> > > > with multiple files.
> > > > >
> > > > > Best,
> > > > > Martín
> > > > >
> > > > > On Tue, Jun 05, 2018 at 3:54 PM, Stéphane Graber
> > > > > wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > What do you see if you run "grep lxcfs /proc/1/mountinfo" inside
> > the
> > > > > > container?
> > > > > >
> > > > > > And do you see the lxcfs tree at /var/lib/lxcfs/ on the host?
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Tue, Jun 05, 2018 at 11:50:51AM -0700, Martín Fernández wrote:
> > > > > > > Stéphane,
> > > > > > >
> > > > > > > I just got time to do my work on lxcfs. Installed lxcfs running
> > on a
> > > >
> > > > > > Ubuntu 14.04 box, installed version is 2.0.8. 
> > > > > > >
> > > > > > > I restarted one of our containers and “I think” I see wrong
> > output
> > > > when
> > > > > > running `free` for example. 
> > > > > > >
> > > > > > > lxc-info shows 1GB of memory usage and `free` shows 24GB of
> > memory
> > > > usage
> > > > > > which is the same as the host memory usage. Anything I could be
> > > > missing ?
> > > > > > >
> > > > > > > Short version of the process done would be:
> > > > > > >
> > > > > > > - apt-get install lxcfs
> > > > > > > - sudo init 0 (in container)
> > > > > > > - lxc-start -n container-name -d 
> > > > > > >
> > > > > > > Best,
> > > > > > > Martín
> > > > > > >
> > > > > > > On Thu, May 31, 2018 at 12:39 AM, Stéphane Graber
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On We

Re: [lxc-users] LXCFS installation effects

2018-06-05 Thread Stéphane Graber
/var/lib/lxc/<container name>/config for the container you're testing things with.


On Tue, Jun 05, 2018 at 12:09:52PM -0700, Martín Fernández wrote:
> Stéphane,
> 
> Not sure what configuration file you are talking about. Configuration file 
> under /etc/lxc/default.conf looks like this:
> 
> ```
> lxc.network.type = veth
> lxc.network.link = br0
> lxc.network.flags = up
> lxc.network.hwaddr = X
> ```
> 
> Any lxc-* command that I could use to introspect the containers and get more 
> information to troubleshoot ?
> 
> Thanks again!
> 
> Best,
> Martín
> 
> On Tue, Jun 05, 2018 at 4:05 PM, Stéphane Graber
> wrote:
> 
> > 
> > 
> > 
> > What's your container's config like?
> > 
> > I wonder if it's somehow missing the include (usually indirect through
> > common.conf) that's needed for the lxcfs hook.
> > 
> > 
> > 
> > On Tue, Jun 05, 2018 at 11:57:39AM -0700, Martín Fernández wrote:
> > > Stéphane,
> > >
> > > `grep lxcfs /proc/1/mountinfo` doesn’t return any output.
> > >
> > > On the other hand,  /var/lib/lxcfs/ shows `cgroup` and `proc` folders
> > with multiple files.
> > >
> > > Best,
> > > Martín
> > >
> > > On Tue, Jun 05, 2018 at 3:54 PM, Stéphane Graber
> > > wrote:
> > >
> > > >
> > > >
> > > >
> > > > What do you see if you run "grep lxcfs /proc/1/mountinfo" inside the
> > > > container?
> > > >
> > > > And do you see the lxcfs tree at /var/lib/lxcfs/ on the host?
> > > >
> > > >
> > > >
> > > > On Tue, Jun 05, 2018 at 11:50:51AM -0700, Martín Fernández wrote:
> > > > > Stéphane,
> > > > >
> > > > > I just got time to do my work on lxcfs. Installed lxcfs running on a
> > 
> > > > Ubuntu 14.04 box, installed version is 2.0.8. 
> > > > >
> > > > > I restarted one of our containers and “I think” I see wrong output
> > when
> > > > running `free` for example. 
> > > > >
> > > > > lxc-info shows 1GB of memory usage and `free` shows 24GB of memory
> > usage
> > > > which is the same as the host memory usage. Anything I could be
> > missing ?
> > > > >
> > > > > Short version of the process done would be:
> > > > >
> > > > > - apt-get install lxcfs
> > > > > - sudo init 0 (in container)
> > > > > - lxc-start -n container-name -d 
> > > > >
> > > > > Best,
> > > > > Martín
> > > > >
> > > > > On Thu, May 31, 2018 at 12:39 AM, Stéphane Graber
> > > > > wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Wed, May 30, 2018 at 07:16:04PM -0700, Martín Fernández wrote:
> > > > > > > Stéphane,
> > > > > > >
> > > > > > > Thank you very much for the quick reply!
> > > > > > >
> > > > > > > What are you are saying is pretty awesome! That would make it
> > super
> > > > easy
> > > > > > to start using it. Is there any constraint in terms of what
> > versions
> > > > of
> > > > > > LXC are supported ? I can run LXCFS with LXC 1.0.10 ? 
> > > > > >
> > > > > > 1.0.10 should be fine though we certainly don't have all that many
> > 
> > > > users
> > > > > > of that release now that it's two LTS ago :)
> > > > > >
> > > > > > In any case, it'll be safe to install LXCFS, then create a test
> > > > > > container, confirm it behaves and if it does then start restarting
> > 
> > > > your
> > > > > > existing containers, if it doesn't, let us know and we'll try to
> > > > figure
> > > > > > out why.
> > > > > >
> > > > > > > In order to understand a little bit more about how LXCFS works,
> > does
> > > >
> > > > > > LXCFS hook into LXC starting process and mount /proc/* files ?
> > > > > >
> > > > > > That's correct, LXCFS when installed will create a tr

Re: [lxc-users] LXCFS installation effects

2018-06-05 Thread Stéphane Graber
What's your container's config like?

I wonder if it's somehow missing the include (usually indirect through
common.conf) that's needed for the lxcfs hook.

On Tue, Jun 05, 2018 at 11:57:39AM -0700, Martín Fernández wrote:
> Stéphane,
> 
> `grep lxcfs /proc/1/mountinfo` doesn’t return any output.
> 
> On the other hand,  /var/lib/lxcfs/ shows `cgroup` and `proc` folders with 
> multiple files.
> 
> Best,
> Martín
> 
> On Tue, Jun 05, 2018 at 3:54 PM, Stéphane Graber
> wrote:
> 
> > 
> > 
> > 
> > What do you see if you run "grep lxcfs /proc/1/mountinfo" inside the
> > container?
> > 
> > And do you see the lxcfs tree at /var/lib/lxcfs/ on the host?
> > 
> > 
> > 
> > On Tue, Jun 05, 2018 at 11:50:51AM -0700, Martín Fernández wrote:
> > > Stéphane,
> > >
> > > I just got time to do my work on lxcfs. Installed lxcfs running on a
> > Ubuntu 14.04 box, installed version is 2.0.8. 
> > >
> > > I restarted one of our containers and “I think” I see wrong output when
> > running `free` for example. 
> > >
> > > lxc-info shows 1GB of memory usage and `free` shows 24GB of memory usage
> > which is the same as the host memory usage. Anything I could be missing ?
> > >
> > > Short version of the process done would be:
> > >
> > > - apt-get install lxcfs
> > > - sudo init 0 (in container)
> > > - lxc-start -n container-name -d 
> > >
> > > Best,
> > > Martín
> > >
> > > On Thu, May 31, 2018 at 12:39 AM, Stéphane Graber
> > > wrote:
> > >
> > > >
> > > >
> > > >
> > > > On Wed, May 30, 2018 at 07:16:04PM -0700, Martín Fernández wrote:
> > > > > Stéphane,
> > > > >
> > > > > Thank you very much for the quick reply!
> > > > >
> > > > > What are you are saying is pretty awesome! That would make it super
> > easy
> > > > to start using it. Is there any constraint in terms of what versions
> > of
> > > > LXC are supported ? I can run LXCFS with LXC 1.0.10 ? 
> > > >
> > > > 1.0.10 should be fine though we certainly don't have all that many
> > users
> > > > of that release now that it's two LTS ago :)
> > > >
> > > > In any case, it'll be safe to install LXCFS, then create a test
> > > > container, confirm it behaves and if it does then start restarting
> > your
> > > > existing containers, if it doesn't, let us know and we'll try to
> > figure
> > > > out why.
> > > >
> > > > > In order to understand a little bit more about how LXCFS works, does
> > 
> > > > LXCFS hook into LXC starting process and mount /proc/* files ?
> > > >
> > > > That's correct, LXCFS when installed will create a tree at
> > > > /var/lib/lxcfs those files then get bind-mounted on top of the
> > > > containers /proc/* files through a LXC startup hook.
> > > >
> > > > > Thank you very much again!
> > > > >
> > > > > Best,
> > > > > Martín
> > > > >
> > > > > On Wed, May 30, 2018 at 10:52 PM, Stéphane Graber
> > > > > wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > ___
> > > > > > lxc-users mailing list
> > > > > > lxc-users@lists.linuxcontainers.org
> > > > > > http://lists.linuxcontainers.org/listinfo/lxc-users
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Wed, May 30, 2018 at 05:08:59PM -0700, Martín Fernández wrote:
> > > > > > > Hello,
> > > > > > >
> > > > > > > We are using LXC to virtualize containers in multiple of our
> > hosts.
> > > > We
> > > > > > have been running with LXC for a while now. 
> > > > > > >
> > > > > > > We started adding monitoring tools to our systems and found the
> > > > known
> > > > > > issue that LXC containers show the host information on
> > /proc/meminfo
> > > > and
> > > > > > /pro

Re: [lxc-users] LXCFS installation effects

2018-06-05 Thread Stéphane Graber
What do you see if you run "grep lxcfs /proc/1/mountinfo" inside the container?

And do you see the lxcfs tree at /var/lib/lxcfs/ on the host?

On Tue, Jun 05, 2018 at 11:50:51AM -0700, Martín Fernández wrote:
> Stéphane,
> 
> I just got time to do my work on lxcfs. Installed lxcfs running on a Ubuntu 
> 14.04 box, installed version is 2.0.8. 
> 
> I restarted one of our containers and “I think” I see wrong output when 
> running `free` for example. 
> 
> lxc-info shows 1GB of memory usage and `free` shows 24GB of memory usage 
> which is the same as the host memory usage. Anything I could be missing ?
> 
> Short version of the process done would be:
> 
> - apt-get install lxcfs
> - sudo init 0 (in container)
> - lxc-start -n container-name -d 
> 
> Best,
> Martín
> 
> On Thu, May 31, 2018 at 12:39 AM, Stéphane Graber
> wrote:
> 
> > 
> > 
> > 
> > On Wed, May 30, 2018 at 07:16:04PM -0700, Martín Fernández wrote:
> > > Stéphane,
> > >
> > > Thank you very much for the quick reply!
> > >
> > > What are you are saying is pretty awesome! That would make it super easy
> > to start using it. Is there any constraint in terms of what versions of
> > LXC are supported ? I can run LXCFS with LXC 1.0.10 ? 
> > 
> > 1.0.10 should be fine though we certainly don't have all that many users
> > of that release now that it's two LTS ago :)
> > 
> > In any case, it'll be safe to install LXCFS, then create a test
> > container, confirm it behaves and if it does then start restarting your
> > existing containers, if it doesn't, let us know and we'll try to figure
> > out why.
> > 
> > > In order to understand a little bit more about how LXCFS works, does
> > LXCFS hook into LXC starting process and mount /proc/* files ?
> > 
> > That's correct, LXCFS when installed will create a tree at
> > /var/lib/lxcfs those files then get bind-mounted on top of the
> > containers /proc/* files through a LXC startup hook.
> > 
> > > Thank you very much again!
> > >
> > > Best,
> > > Martín
> > >
> > > On Wed, May 30, 2018 at 10:52 PM, Stéphane Graber
> > > wrote:
> > >
> > > >
> > > >
> > > >
> > > > ___
> > > > lxc-users mailing list
> > > > lxc-users@lists.linuxcontainers.org
> > > > http://lists.linuxcontainers.org/listinfo/lxc-users
> > > >
> > > >
> > > >
> > > > On Wed, May 30, 2018 at 05:08:59PM -0700, Martín Fernández wrote:
> > > > > Hello,
> > > > >
> > > > > We are using LXC to virtualize containers in multiple of our hosts.
> > We
> > > > have been running with LXC for a while now. 
> > > > >
> > > > > We started adding monitoring tools to our systems and found the
> > known
> > > > issue that LXC containers show the host information on /proc/meminfo
> > and
> > > > /proc/cpuinfo.  
> > > > >
> > > > > I found that LXCFS solves the problems mentioned above. What would
> > be
> > > > required to setup LXCFS in my hosts ? Would I need to reboot all the
> > > > containers ? Do I need to restore my containers filesystem ? Is there
> > any
> > > > guide/documentation around it ?
> > > > >
> > > > > Thanks before hand!
> > > > >
> > > > > Best,
> > > > > Martín
> > > >
> > > > Hey there,
> > > >
> > > > You should just need to install lxcfs and then any container you start
> > 
> > > > or restart will be using it. There's no way to set it up against a
> > > > running container, but there's also no need to restart all your
> > > > containers immediately, you can slowly roll it out if that helps.
> > > >
> > > > And no changes needed to the containers, it gets setup automatically
> > > > through a lxc hook when the container starts.
> > > >
> > > >
> > > > --
> > > > Stéphane Graber
> > > > Ubuntu developer
> > > > http://www.ubuntu.com
> > > >
> > > >
> > > >
> > 
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com
> > 
> > 
> >

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXCFS installation effects

2018-05-30 Thread Stéphane Graber
On Wed, May 30, 2018 at 07:16:04PM -0700, Martín Fernández wrote:
> Stéphane,
> 
> Thank you very much for the quick reply!
> 
> What are you are saying is pretty awesome! That would make it super easy to 
> start using it. Is there any constraint in terms of what versions of LXC are 
> supported ? I can run LXCFS with LXC 1.0.10 ? 

1.0.10 should be fine though we certainly don't have all that many users
of that release now that it's two LTS ago :)

In any case, it'll be safe to install LXCFS, then create a test
container, confirm it behaves and if it does then start restarting your
existing containers, if it doesn't, let us know and we'll try to figure
out why.

> In order to understand a little bit more about how LXCFS works, does LXCFS 
> hook into LXC starting process and mount /proc/* files ?

That's correct. When installed, LXCFS creates a tree at
/var/lib/lxcfs; those files then get bind-mounted on top of the
container's /proc/* files through an LXC startup hook.

> Thank you very much again!
> 
> Best,
> Martín
> 
> On Wed, May 30, 2018 at 10:52 PM, Stéphane Graber
> wrote:
> 
> > 
> > 
> > 
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> > 
> > 
> > 
> > On Wed, May 30, 2018 at 05:08:59PM -0700, Martín Fernández wrote:
> > > Hello,
> > >
> > > We are using LXC to virtualize containers in multiple of our hosts. We
> > have been running with LXC for a while now. 
> > >
> > > We started adding monitoring tools to our systems and found the known
> > issue that LXC containers show the host information on /proc/meminfo and
> > /proc/cpuinfo.  
> > >
> > > I found that LXCFS solves the problems mentioned above. What would be
> > required to setup LXCFS in my hosts ? Would I need to reboot all the
> > containers ? Do I need to restore my containers filesystem ? Is there any
> > guide/documentation around it ?
> > >
> > > Thanks before hand!
> > >
> > > Best,
> > > Martín
> > 
> > Hey there,
> > 
> > You should just need to install lxcfs and then any container you start
> > or restart will be using it. There's no way to set it up against a
> > running container, but there's also no need to restart all your
> > containers immediately, you can slowly roll it out if that helps.
> > 
> > And no changes needed to the containers, it gets setup automatically
> > through a lxc hook when the container starts.
> > 
> > 
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com
> > 
> > 
> >

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXCFS installation effects

2018-05-30 Thread Stéphane Graber
On Wed, May 30, 2018 at 05:08:59PM -0700, Martín Fernández wrote:
> Hello,
> 
> We are using LXC to virtualize containers in multiple of our hosts. We have 
> been running with LXC for a while now. 
> 
> We started adding monitoring tools to our systems and found the known issue 
> that LXC containers show the host information on /proc/meminfo and 
> /proc/cpuinfo.  
> 
> I found that LXCFS solves the problems mentioned above. What would be 
> required to setup LXCFS in my hosts ? Would I need to reboot all the 
> containers ? Do I need to restore my containers filesystem ? Is there any 
> guide/documentation around it ?
> 
> Thanks before hand!
> 
> Best,
> Martín

Hey there,

You should just need to install lxcfs and then any container you start
or restart will be using it. There's no way to set it up against a
running container, but there's also no need to restart all your
containers immediately, you can slowly roll it out if that helps.

And no changes needed to the containers, it gets setup automatically
through a lxc hook when the container starts.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Clarifiaction on LXD DEB packages

2018-05-28 Thread Stéphane Graber
On Mon, May 28, 2018 at 04:27:11PM +0200, Andreas Kirbach wrote:
> Hi everyone,
> 
> there is a note about DEB package availaibilty on
> https://discuss.linuxcontainers.org/t/lxd-3-1-has-been-released/1787
> 
> ---
> LXD 3.1 will only be made available as a snap package. We will not be
> uploading it as a deb to Ubuntu 18.10 or through backports to previous
> releases. Moving forward all feature releases of LXD will only be
> available through the snap.
> ---
> 
> Does that mean there won't be any deb packages for LXD in the future or
> does this only affect feature releases, eg. a future LTS release (4.0?)
> would again be made available as deb?

For Ubuntu this means that starting with LXD 3.1, distribution of LXD
will only happen as a snap; we will not be packaging any new release as
debs.

So LXD 2.0.x (Ubuntu 16.04) and LXD 3.0.x (Ubuntu 18.04) are the only
ones that will keep seeing .deb packages uploaded.
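
For anyone moving an existing deb install over to the snap, the rough
sequence is (just a sketch, do check your backups first on a
production host):

  sudo snap install lxd
  sudo lxd.migrate   # interactive tool shipped in the snap that moves
                     # containers and configuration over from the deb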

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] 16.04 -> 18.04 Upgrade problems

2018-04-21 Thread Stéphane Graber
On Sat, Apr 21, 2018 at 03:24:52PM -0400, Pete Osborne wrote:
> Hi Folks,
> 
> I decided to upgrade my 16.04 LTS system to 18.04 LTS today and when
> upgrading LXD I'm seeing the following:
> 
> Setting up lxd (3.0.0-0ubuntu4) ...
> Old bridge configuration detected in /etc/default/lxd-bridge, upgrading
> Unsetting deprecated profile options
> Error: The following containers failed to update (profile change still
> saved):
>  - xps-container1: Failed to load raw.lxc
> 
> dpkg: error processing package lxd (--configure):
>  installed lxd package post-installation script subprocess returned error
> exit status 1
> Errors were encountered while processing:
>  lxd
> E: Sub-process /usr/bin/dpkg returned an error code (1)
> 
> 
> Can someone point me in a direction to remediate the issue,
> 
> Thanks,
> Pete

This suggests that your xps-container1 container or one of your profiles
is using raw.lxc to directly set LXC configuration keys. One of those
configuration keys must have been deprecated or renamed in liblxc 3.0,
causing the problem.

To fix this, you should edit your raw.lxc to use the appropriate liblxc
config key and then re-run the upgrade.
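
A sketch of how to track that down (the container and key names below
are only examples from this thread):

  lxc config show xps-container1 | grep -A3 raw.lxc
  lxc profile show default | grep -A3 raw.lxc
  lxc config edit xps-container1   # e.g. lxc.aa_profile -> lxc.apparmor.profile
  sudo dpkg --configure -a         # finish the interrupted package configuration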


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Hotplugging devices (USB Z-Wave controller) with path value

2018-04-20 Thread Stéphane Graber
On Wed, Apr 18, 2018 at 03:41:04PM -0400, Lai Wei-Hwa wrote:
> To add another thing, even though I've removed the device (lxc config device 
> remove hass Z-Wave), it is still seen in the container: 
> 
> lai@hass:~$ lsusb 
> Bus 002 Device 003: ID 0624:0249 Avocent Corp. Virtual Keyboard/Mouse 
> Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub 
> Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
> Bus 005 Device 002: ID 0624:0248 Avocent Corp. Virtual Hub 
> Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
> Bus 001 Device 002: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub 
> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub 
> Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
> Bus 003 Device 002: ID 0658:0200 Sigma Designs, Inc. 
> Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub 
> 
> Thanks! 
> Lai 
> 
> 
> 
> I was trying to setup hotplugging a USB into an LXD container. I need the 
> path, in the container, to be /dev/ttyACM0. How can I do this? 
> 
> lai@host:~$ lxc config device add hass Z-Wave unix-char vendorid=0658 
> productid=0200 path=/dev/ttyACM0 
> Error: Invalid device configuration key for unix-char: productid 

lxc config device add hass Z-Wave unix-char path=/dev/ttyACM0

If you have multiple devices that may end up at that path, you should
instead use something like:

lxc config device add hass Z-Wave unix-char 
source=/dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A60337Y1-if00-port0 
path=/dev/ttyACM0

This will ensure that the device at /dev/ttyACM0 in the container is always the 
same one.


I do this in my openhab container here where I have the following devices:
  usb-alarm:
gid: "111"
path: /dev/ttyUSB1
source: /dev/serial/by-id/usb-FTDI_FT230X_Basic_UART_DQ00AXEP-if00-port0
type: unix-char
uid: "0"
  usb-insteon:
gid: "111"
path: /dev/ttyUSB0
source: /dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A60337Y1-if00-port0
type: unix-char
uid: "0"
  usb-z-wave:
gid: "111"
path: /dev/ttyACM0
source: /dev/serial/by-id/usb-0658_0200-if00
type: unix-char
uid: "0"
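
To find the stable by-id path for a given adapter on the host (the
output is obviously machine-specific):

  ls -l /dev/serial/by-id/
  # each entry is a symlink to the current /dev/ttyUSB* or /dev/ttyACM* node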


> 
> lai@host:~$ lxc config device add hass Z-Wave usb vendorid=0658 
> productid=0200 path=/dev/ttyACM0 
> Error: Invalid device configuration key for usb: path 
> 
> 
> ___ 
> lxc-users mailing list 
> lxc-users@lists.linuxcontainers.org 
> http://lists.linuxcontainers.org/listinfo/lxc-users 

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Announcing LXC, LXD and LXCFS 3.0 LTS

2018-04-11 Thread Stéphane Graber
The LXC, LXD and LXCFS teams are proud to announce the release of the
3.0 version of all three projects.

This is an LTS release which comes with 5 years of upstream support for
bugfixes and security updates, in this case until June 2023.

We'd like to note that despite the major version bump, no APIs have been
broken and both liblxc and the LXD REST API remain backward compatible.

Our existing 2.0 LTS will now be getting one last major bugfix update
over the next month or so before going into a much slower maintenance
phase where only critical bugfixes, special requests and security fixes
get backported.


LXD highlights (since LXD 2.21):

 - Support for clustering LXD servers together
 - New tool to convert existing systems into containers
 - Support for NVIDIA runtime & library passthrough
 - Hotplug support for unix-char and unix-block devices
 - Local copy/move of storage volumes
 - Remote transfer of storage volumes
 - A new `proxy` device type to forward TCP connections
 - Event based notification in the /dev/lxd API
 - Replaced our command line parser
 - New lifecycle events for tracking/auditing

LXC highlights (since LXC 2.1):

 - Introduction of some new configuration keys (see list in announcement)
 - Improved CLI
 - Support for CGroupV2
 - Template scripts have been moved out of tree
 - Support for using images generated by our new `distrobuilder` tool
 - Support for using OCI images for application containers
 - Ring buffer based console logging
 - Argument filtering in Seccomp policies
 - Daemonized application containers
 - Deprecation and rename of some config keys (`lxc-update-config` to convert)
 - Removal of legacy CGroup drivers (cgmanager & cgfs in favor of cgfsng)
 - Removal of the aufs storage backend (use overlayfs)

LXCFS highlights (since LXCFS 2.0):

 - libpam-cgfs has been moved over to LXC


The announcements for the 3 projects can be found here:

 - LXD 3.0: 
https://discuss.linuxcontainers.org/t/lxd-3-0-0-has-been-released/1491
 - LXC 3.0: 
https://discuss.linuxcontainers.org/t/lxc-3-0-0-has-been-released/1449
 - LXCFS 3.0: 
https://discuss.linuxcontainers.org/t/lxcfs-3-0-0-has-been-released/1440


We'd like to thank all of our contributors and our amazing community for
their contributions, bug reports and help testing those releases!

On behalf of the LXC, LXD and LXCFS teams,

Stéphane Graber


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] deb -> snap usage issues

2018-03-30 Thread Stéphane Graber
On Fri, Mar 30, 2018 at 11:46:10PM +0900, Tomasz Chmielewski wrote:
> I've noticed that cronjobs using "lxc" commands don't work on servers with
> LXD installed from a snap package.
> 
> It turned out that:
> 
> # which lxc
> /snap/bin/lxc
> 
> 
> Which is not in a $PATH when executing via cron.
> 
> In other words - lxc binary is in a $PATH when installed from a deb package,
> but is not when installed from a snap package.
> 
> 
> Is it a bug, a feature?

I've noticed that before too; in some cases the PATH doesn't match the
normal interactive PATH... I'd say this is a bug and should be reported
against snapd, as that's what updates your PATH for snaps.
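
In the meantime, a common workaround is to set PATH in the crontab or
call the binary by its full path (the schedule below is just an
example):

  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
  */5 * * * * /snap/bin/lxc list >/dev/null 2>&1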

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD project status

2018-03-27 Thread Stéphane Graber
Yes

On Tue, Mar 27, 2018 at 08:45:03PM +0200, Michel Jansens wrote:
> Hi Stéphane,
> 
> Does this means LXD 3.0 will be part of Ubuntu 18.04 next month?
> 
> Cheers,
> Michel
> > On 27 Mar 2018, at 19:44, Stéphane Graber <stgra...@ubuntu.com> wrote:
> > 
> > We normally release a new feature release every month and have been
> > doing so until the end of December where we've then turned our focus on
> > our next Long Term Support release, LXD 3.0 which is due out later this
> > week.
> 

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD project status

2018-03-27 Thread Stéphane Graber
On Tue, Mar 27, 2018 at 12:19:55PM -0500, Steven Spencer wrote:
> This is probably a message that Stephane Graber can answer most
> effectively, but I just want to know that the LXD project is continuing to
> move forward. Our organization did extensive testing of LXD in 2016 and
> some follow-up research in 2017 and plan to utilize this as our
> virtualization solution starting in April of this year. In 2017, there were
> updates to LXD at least once a month, but the news has been very quiet
> since December.
> 
> To properly test LXD as it would work in our environment, we did extensive
> lab work with it back in 2016 and some follow-up testing in 2017. While we
> realize that there are no guarantees in our industry, I'd just like to know
> that, at least for now, LXD is still a viable project and that development
> hasn't suddenly come to a screeching halt.
> 
> Thanks for your consideration.
> 
> Steve Spencer

We normally put out a new feature release every month and did so until
the end of December, when we turned our focus to our next Long Term
Support release, LXD 3.0, which is due out later this week.

So it's actually been much busier since December than it was before,
as we included a number of major features such as LXD clustering.

To track project activity, you may be interested in reading our weekly
status updates:

 https://discuss.linuxcontainers.org/tags/weekly


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is it possible to use realtime processing in a lxd container ?

2018-03-21 Thread Stéphane Graber
A privileged container might be able to set some of those scheduling
flags; an unprivileged container definitely will not.
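
If you want to experiment, switching a container to privileged is
roughly (the name is a placeholder, and this comes with the security
trade-offs discussed elsewhere on this list):

  lxc config set c1 security.privileged true
  lxc restart c1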

On Wed, Mar 21, 2018 at 11:41:52PM +0100, Pierre Couderc wrote:
> No answer, it is  it is not possible ?
> 
> Please explain me..
> 
> 
> On 03/14/2018 04:46 PM, Pierre Couderc wrote:
> > When I try to start freeswitcth in freeswitch.service with :
> > 
> > IOSchedulingClass=realtime
> > 
> > it fails, but seems to start when I comment it.
> > 
> > So my question is : is it possible ?
> > 
> > And if yes, how to parametrize the container ?
> > 
> > Is there some howto ?
> > 
> > Thank you in advance
> > 
> > Pierre Couderc
> > 
> > 
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] On The Way To LXC 3.0: Splitting Out Templates And Language Bindings

2018-02-28 Thread Stéphane Graber
Hi,

Current plan is to have beta1 versions of the 3.0 version of all
projects at some point tomorrow.

Right now, we already have those tags in python3-lxc and lxc-templates.

Moving forward, we'd certainly like for most distributions currently
supported through the template shell scripts to be integrated into
distrobuilder, offering a much nicer way for users to roll their own
images. We expect this to take some time though, which is why the
lxc-templates repository exists for LXC 3.0, so that we don't regress in
features.

As for contributing to distrobuilder, it may be slightly early for that,
though you can certainly look at it already and play with it. It's still
evolving pretty quickly and we plan to move our most used images
(on https://images.linuxcontainers.org) to being built by it over
the next month or so.

Stéphane

On Thu, Mar 01, 2018 at 07:05:09AM +0100, Geaaru wrote:
> Hi,
> 
> I'm maintainer of lxc-sabayon template. For clarification, lxc-templates
> project will be used for legacy script correct? What is dead line?
> When will be release lxd 3.0 and new lxc release to permit test of new
> features on a fixed tag?
> I see that lxc now support oci format from docker but for sabayon as
> visible on lxc-sabayon script is needed apply some changes. So, the right
> way is integrate creation of lxc sabayon image inside distrobuilder project?
> 
> Thanks in advance
> 
> On Feb 28, 2018 11:28 AM, "Christian Brauner" <christian.brau...@mailbox.org>
> wrote:
> 
> > Hey everyone,
> >
> > This is another update about the LXC 3.0 development. Instead of copying
> > and pasting what I wrote on my blog here I'm going to be lazy and please
> > ask you to read:
> >
> > https://brauner.github.io/2018/02/27/lxc-removes-legacy-
> > template-build-system.html
> >
> > This should give you an idea what we are planning to do with the
> > language bindings and the template build system. And if that makes you
> > more likely to read it: there are asciicasts. :)
> >
> > Thanks everyone!
> > Christian
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users

> _______
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Samba4 DC in an unprivileged container

2018-02-07 Thread Stéphane Graber
On Wed, Feb 07, 2018 at 06:28:50PM +0300, Andrey Repin wrote:
> Greetings, Frank Dornheim!
> 
> > im trying to setup a Samba4 AD in a unprivileged container:
> >  
> >  
> >  
> > My OS is a ubuntu 17.10 server an my container is a ubuntu 17.10.
> >  
> > My lxd version is:
> >  
> >  Package: lxd 
> >  Version: 2.18-0ubuntu6
> 
> > First, I have a working setup as a "privileged container".
> >  
> > But I want to secure my installation and transfer samba4 in an unprivileged 
> > container.
> 
> Unprivileged containers are no more secure than privileged containers,
> generally speaking.

Hmm, what?

A privileged container has uid 0 in the container be uid 0 at the kernel level.
An unprivileged container has uid 0 in the container mapped to uid
10 at the kernel level.

Unprivileged containers are MASSIVELY more secure than privileged containers.
There are numerous ways to escape a privileged container, which comes
down to the fact that you are running with full kernel privileges and
so rely entirely on things like capabilities and LSMs to protect your
system.

Unprivileged containers on the other hand are safe by design. An attack
which would allow root in an unprivileged container to escape to the
host would also be a user-to-root privilege escalation on any normal
Linux system. There are some of those every so often; they are critical
kernel security bugs and they do get fixed very quickly.

Unprivileged containers do not need a perfectly configured seccomp,
apparmor, capabilities set or cgroups to be safe, all of those are
merely extra safety nets in case the main privilege enforcement (user
namespace) fails due to a critical kernel security bug.
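
A quick way to check which kind a given container is (the name is just
an example; an empty value means the default, i.e. unprivileged):

  lxc config get samba-dc security.privileged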

> > I get the lower error message when I do the setup with samba-tool domain 
> > provision.
> 
> Can you post your smb.conf before provisioning?
> 
> 
> -- 
> With best regards,
> Andrey Repin
> Wednesday, February 7, 2018 18:26:59
> 
> Sorry for my terrible english...
> 
> _______
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxcfs removed by accident, how to recover?

2018-02-02 Thread Stéphane Graber
On Fri, Feb 02, 2018 at 10:53:02AM +0100, Harald Dunkel wrote:
> Hi Stéphane,
> 
> On 01/30/18 17:17, Stéphane Graber wrote:
> > 
> > Yeah, there's effectively no way to re-inject those mounts inside a
> > running container.
> > 
> > So you're going to need to restart those containers.
> > Until then, you can "umount" the various lxcfs files from within the
> > container so that rather than a complete failure to access those files,
> > you just get the non-namespaced version of the file.
> > 
> 
> AFAICS lxcfs is useful only for unprivileged containers. All my affected
> containers were privileged. I didn't ask for lxcfs, but it was used
> automatically, so I wonder how I can forbid lxcfs to be used for these
> containers? Do I have to deinstall lxcfs completely?

lxcfs is used for both privileged and unprivileged containers; without
it you'd see the host uptime, host set of CPUs, host memory, ...

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Getting GID, UID of container process from container host

2018-01-30 Thread Stéphane Graber
On Tue, Jan 30, 2018 at 10:19:12PM +0530, Shailendra Rana wrote:
> Hi,
> 
> Is there a way we can get the PID/GID/UID of a container process using
> the host  PID/GID/UID of that container process ? Basically mapping of
> host PID/GID/UID to container PID/GID/UID.
> 
> Thanks,
> Shailendra

It's technically doable, yes, but not particularly enjoyable :)

stgraber@castiana:~$ ls -lh /proc/ | grep 8261
dr-xr-xr-x  9  100  1000 Jan 30 15:33 8261
stgraber@castiana:~$ cat /proc/8261/status | grep -i ns
NStgid: 8261    1
NSpid:  8261    1
NSpgid: 8261    1
NSsid:  8261    1
stgraber@castiana:~$ cat /proc/8261/uid_map 
 0100 10
stgraber@castiana:~$ cat /proc/8261/gid_map 
 0100 10


In this case, host PID 8261 is PID 1 in the container as can be found in
the status file. For the owner, you need to read the uid and gid map,
then do the math.

In this case, the map says that uid 0 in the container is uid 100 on
the host. The gid map is the same, so that means that this process is
running as uid=0 gid=0 in the container.
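
A rough sketch of that math in shell, mapping a host uid back to a
container uid (the pid is the one from the example above, the host uid
is made up):

  awk -v huid=100123 '$2 <= huid && huid < $2 + $3 { print huid - $2 + $1 }' /proc/8261/uid_map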

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxcfs removed by accident, how to recover?

2018-01-30 Thread Stéphane Graber
On Tue, Jan 30, 2018 at 09:34:59PM +0700, Fajar A. Nugraha wrote:
> On Tue, Jan 30, 2018 at 7:34 PM, Harald Dunkel <harald.dun...@aixigo.de> 
> wrote:
> > Hi folks,
> >
> > I have removed the lxcfs package by accident, while the containers
> > are still running.
> 
> > Is there some way to recover without restaring the containers?
> 
> I'm pretty sure the answer is "no". Even lxcfs package no longer
> automatically restart itself during upgrade.
> 
> -- 
> Fajar

Yeah, there's effectively no way to re-inject those mounts inside a
running container.

So you're going to need to restart those containers.
Until then, you can "umount" the various lxcfs files from within the
container so that rather than a complete failure to access those files,
you just get the non-namespaced version of the file.
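
Something along these lines from inside an affected container should
do it (a sketch; run as root in the container and repeat it if some
mounts are busy):

  grep lxcfs /proc/self/mounts | awk '{ print $2 }' | xargs -r -n1 umount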

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Freeswitch refuses to start in Debian Jessie

2018-01-07 Thread Stéphane Graber
On Sun, Jan 07, 2018 at 08:46:40PM -0500, Stéphane Graber wrote:
> Anything weird going on with your /var/run?
> 
> Looks like what's failing is:
> 
> /bin/mkdir -p /var/run/freeswitch/
> 
> On Sun, Jan 07, 2018 at 10:35:20AM -0600, Rajil Saraswat wrote:
> > Hello,
> > 
> > I am trying to setup freeswitch in Debian Jessie. The host is Gentoo
> > linux running lxc-2.0.9
> > 
> > Error is,
> > 
> > # systemctl status freeswitch.service
> > ● freeswitch.service - freeswitch
> >    Loaded: loaded (/lib/systemd/system/freeswitch.service; enabled)
> >    Active: failed (Result: start-limit) since Sun 2018-01-07 16:28:38
> > UTC; 1s ago
> >   Process: 7900 ExecStartPre=/bin/mkdir -p /var/run/freeswitch/
> > (code=exited, status=214/SETSCHEDULER)
> > 
> > Jan 07 16:28:38 jessie systemd[1]: Failed to start freeswitch.
> > Jan 07 16:28:38 jessie systemd[1]: Unit freeswitch.service entered
> > failed state.
> > Jan 07 16:28:38 jessie systemd[1]: freeswitch.service holdoff time over,
> > scheduling restart.
> > Jan 07 16:28:38 jessie systemd[1]: Stopping freeswitch...
> > Jan 07 16:28:38 jessie systemd[1]: Starting freeswitch...
> > Jan 07 16:28:38 jessie systemd[1]: freeswitch.service start request
> > repeated too quickly, refusing to start.
> > Jan 07 16:28:38 jessie systemd[1]: Failed to start freeswitch.
> > Jan 07 16:28:38 jessie systemd[1]: Unit freeswitch.service entered
> > failed state.
> > 
> > #journalctl -xe
> > 
> > Jan 07 16:28:38 jessie systemd[1]: Starting freeswitch...
> > -- Subject: Unit freeswitch.service has begun with start-up
> > -- Defined-By: systemd
> > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> > -- 
> > -- Unit freeswitch.service has begun starting up.
> > Jan 07 16:28:38 jessie systemd[7900]: Failed at step SETSCHEDULER
> > spawning /bin/mkdir: Operation not permitted
> > -- Subject: Process /bin/mkdir could not be executed
> > -- Defined-By: systemd
> > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> > -- 
> > -- The process /bin/mkdir could not be executed and failed.
> > -- 
> > -- The error number returned while executing this process is 1.
> > Jan 07 16:28:38 jessie systemd[1]: freeswitch.service: control process
> > exited, code=exited status=214
> > Jan 07 16:28:38 jessie systemd[1]: Failed to start freeswitch.
> > -- Subject: Unit freeswitch.service has failed
> > -- Defined-By: systemd
> > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> > -- 
> > -- Unit freeswitch.service has failed.
> > -- 
> > -- The result is failed.
> > 
> > The startup script is,
> > 
> > 
> > # cat /lib/systemd/system/freeswitch.service
> > 
> > [Unit]
> > Description=freeswitch
> > After=syslog.target network.target local-fs.target postgresql.service
> > 
> > [Service]
> > ; service
> > Type=forking
> > PIDFile=/run/freeswitch/freeswitch.pid
> > Environment="DAEMON_OPTS=-nonat"
> > EnvironmentFile=-/etc/default/freeswitch
> > ExecStartPre=/bin/mkdir -p /var/run/freeswitch/
> > ExecStartPre=/bin/chown -R www-data:www-data /var/run/freeswitch/
> > ExecStart=/usr/bin/freeswitch -u www-data -g www-data -ncwait $DAEMON_OPTS
> > TimeoutSec=45s
> > Restart=always
> > ; exec
> > User=root
> > Group=daemon
> > LimitCORE=infinity
> > LimitNOFILE=10
> > LimitNPROC=6
> > LimitSTACK=25
> > LimitRTPRIO=infinity
> > LimitRTTIME=infinity

> > IOSchedulingClass=realtime
> > IOSchedulingPriority=2
> > CPUSchedulingPolicy=rr
> > CPUSchedulingPriority=89

Given that it's reporting a scheduler problem, it's likely that one (or more)
of the 4 keys above are the problem, as some of those actions won't be
allowed in a container.

You could edit the unit and comment those, then run "systemctl daemon-reload"
and try starting the service again.

If that does the trick, then you should be able to make this cleaner by
using a systemd unit override file instead.
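
A sketch of the override approach (this assumes your container's
systemd accepts empty assignments to reset those keys; if it doesn't,
copy the full unit to /etc/systemd/system/ and comment the lines there
instead):

mkdir -p /etc/systemd/system/freeswitch.service.d
cat > /etc/systemd/system/freeswitch.service.d/override.conf <<'EOF'
[Service]
IOSchedulingClass=
IOSchedulingPriority=
CPUSchedulingPolicy=
CPUSchedulingPriority=
EOF
systemctl daemon-reload
systemctl restart freeswitch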



> > UMask=0007
> > 
> > [Install]
> > WantedBy=multi-user.target
> > 
> > Any idea how to fix this?
> > 
> > Thanks
> > 
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> -- 
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com



> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Freeswitch refuses to start in Debian Jessie

2018-01-07 Thread Stéphane Graber
Anything weird going on with your /var/run?

Looks like what's failing is:

/bin/mkdir -p /var/run/freeswitch/

On Sun, Jan 07, 2018 at 10:35:20AM -0600, Rajil Saraswat wrote:
> Hello,
> 
> I am trying to setup freeswitch in Debian Jessie. The host is Gentoo
> linux running lxc-2.0.9
> 
> Error is,
> 
> # systemctl status freeswitch.service
> ● freeswitch.service - freeswitch
>    Loaded: loaded (/lib/systemd/system/freeswitch.service; enabled)
>    Active: failed (Result: start-limit) since Sun 2018-01-07 16:28:38
> UTC; 1s ago
>   Process: 7900 ExecStartPre=/bin/mkdir -p /var/run/freeswitch/
> (code=exited, status=214/SETSCHEDULER)
> 
> Jan 07 16:28:38 jessie systemd[1]: Failed to start freeswitch.
> Jan 07 16:28:38 jessie systemd[1]: Unit freeswitch.service entered
> failed state.
> Jan 07 16:28:38 jessie systemd[1]: freeswitch.service holdoff time over,
> scheduling restart.
> Jan 07 16:28:38 jessie systemd[1]: Stopping freeswitch...
> Jan 07 16:28:38 jessie systemd[1]: Starting freeswitch...
> Jan 07 16:28:38 jessie systemd[1]: freeswitch.service start request
> repeated too quickly, refusing to start.
> Jan 07 16:28:38 jessie systemd[1]: Failed to start freeswitch.
> Jan 07 16:28:38 jessie systemd[1]: Unit freeswitch.service entered
> failed state.
> 
> #journalctl -xe
> 
> Jan 07 16:28:38 jessie systemd[1]: Starting freeswitch...
> -- Subject: Unit freeswitch.service has begun with start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> -- 
> -- Unit freeswitch.service has begun starting up.
> Jan 07 16:28:38 jessie systemd[7900]: Failed at step SETSCHEDULER
> spawning /bin/mkdir: Operation not permitted
> -- Subject: Process /bin/mkdir could not be executed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> -- 
> -- The process /bin/mkdir could not be executed and failed.
> -- 
> -- The error number returned while executing this process is 1.
> Jan 07 16:28:38 jessie systemd[1]: freeswitch.service: control process
> exited, code=exited status=214
> Jan 07 16:28:38 jessie systemd[1]: Failed to start freeswitch.
> -- Subject: Unit freeswitch.service has failed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> -- 
> -- Unit freeswitch.service has failed.
> -- 
> -- The result is failed.
> 
> The startup script is,
> 
> 
> # cat /lib/systemd/system/freeswitch.service
> 
> [Unit]
> Description=freeswitch
> After=syslog.target network.target local-fs.target postgresql.service
> 
> [Service]
> ; service
> Type=forking
> PIDFile=/run/freeswitch/freeswitch.pid
> Environment="DAEMON_OPTS=-nonat"
> EnvironmentFile=-/etc/default/freeswitch
> ExecStartPre=/bin/mkdir -p /var/run/freeswitch/
> ExecStartPre=/bin/chown -R www-data:www-data /var/run/freeswitch/
> ExecStart=/usr/bin/freeswitch -u www-data -g www-data -ncwait $DAEMON_OPTS
> TimeoutSec=45s
> Restart=always
> ; exec
> User=root
> Group=daemon
> LimitCORE=infinity
> LimitNOFILE=10
> LimitNPROC=6
> LimitSTACK=25
> LimitRTPRIO=infinity
> LimitRTTIME=infinity
> IOSchedulingClass=realtime
> IOSchedulingPriority=2
> CPUSchedulingPolicy=rr
> CPUSchedulingPriority=89
> UMask=0007
> 
> [Install]
> WantedBy=multi-user.target
> 
> Any idea how to fix this?
> 
> Thanks
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Trouble with automounting /dev/shm

2017-11-24 Thread Stéphane Graber
Yes, that works fine.

You can set it with: printf 'line1\nline2' | lxc config set CONTAINER raw.lxc -

Or through a multi-line yaml entry and "lxc config edit"
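
For example (the dev/shm entry is the one from this thread; the mqueue
line is just a made-up second entry to show the multi-line form, and
"c1" is a placeholder container name):

  printf 'lxc.mount.entry = none dev/shm tmpfs rw,nosuid,nodev,create=dir\nlxc.mount.entry = mqueue dev/mqueue mqueue rw,relatime,create=dir\n' | lxc config set c1 raw.lxc -

and the same thing through "lxc config edit c1":

  config:
    raw.lxc: |-
      lxc.mount.entry = none dev/shm tmpfs rw,nosuid,nodev,create=dir
      lxc.mount.entry = mqueue dev/mqueue mqueue rw,relatime,create=dir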

On Fri, Nov 24, 2017 at 11:59:21PM +0100, Pavol Cupka wrote:
> can you have multiline raw.lxc ?
> 
> On Fri, Nov 24, 2017 at 11:07 PM, Stéphane Graber <stgra...@ubuntu.com> wrote:
> > On Fri, Nov 24, 2017 at 11:01:33PM +0100, Pavol Cupka wrote:
> >> Going to answer my own question.
> >>
> >> It was due to another profile being applied to the container also with
> >> raw.lxc option so it hijacked the option and the option with dev/shm
> >> never got applied.
> >>
> >> So works without the second profile.
> >>
> >> Subquestion: How to add another raw.lxc line to a second profile
> >> without canceling the first one?
> >
> > You can't. As far as LXD is concerned raw.lxc is a single string
> > configuration key, whichever profile or container local config contains
> > it last will win.
> >
> >>
> >> Thanks
> >>
> >> On Wed, Nov 22, 2017 at 10:18 AM, Pavol Cupka <pavol.cu...@gmail.com> 
> >> wrote:
> >> > Hello list,
> >> >
> >> > I am running
> >> >   LXD 2.18
> >> >   LXC 2.0.8
> >> >   LXCFS 2.0.6
> >> >
> >> > on a non-systemd host, trying to run a non-systemd container. All
> >> > works well, except the /dev/shm doesn't get mounted. I copied a line
> >> > from lxc template and added this line as raw.lxc config value for the
> >> > container, but I doesn't work.
> >> >
> >> > This is the config line being added through profile
> >> > config:
> >> >
> >> > raw.lxc: lxc.mount.entry = none dev/shm tmpfs
> >> > rw,nosuid,nodev,create=dir
> >> >
> >> > The container starts, but the /dev/shm doesn't get mounted.
> >> >
> >> > Am I doing something wrong?
> >> >
> >> > Thanks for help.
> >> >
> >> > Pavol
> >> ___
> >> lxc-users mailing list
> >> lxc-users@lists.linuxcontainers.org
> >> http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com
> >
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


signature.asc
Description: PGP signature
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
