Re: [lxc-users] library start() API from process name having spaces

2018-07-05 Thread Tycho Andersen
On Thu, Jul 05, 2018 at 03:42:55PM +0200, Christian Brauner wrote:
> On Wed, Jul 04, 2018 at 03:23:19PM -0400, Chaetoo3 wrote:
> > Hi!
> > 
> > I tried to use the liblxc start() API from my process and it worked, but
> > I noticed it would not set the process name for the container process
> > as it does when run from the lxc-start command line.  Instead the container
> > process kept the name of my caller.
> 
> You mean it wouldn't change from "my binary --with --args" to "[lxc monitor]"?
> 
> > 
> > I sniffed around and made a local fix.  I'm sorry, I do not know how to
> > contribute this to lxc, but if anyone wants to do that, here is the
> > code.  The reason is: /proc/self/stat contains the process name in
> > parentheses.  If the name contains a space, when liblxc parses the file
> > it gets off by one field, gets confused, and fails in
> > setproctitle().
> > 
> > I don't know if my fix is robust enough for real use, but at least it should
> > point someone to the right place?  It looks for the trailing ") "
> > sequence (paren and space).  That is still subject to errors because
> > the process name might contain such a sequence, but maybe there is no way
> > to avoid a false positive sometimes.  You cannot match parentheses either,
> > because a process name may not have matching ones.  Well, anyway, this at
> > least accounts for spaces in the process name, which is a fairly common
> > situation I think.
> > 
> > I hope it helps someone.
> > 
> > 
> > @@ -296,10 +296,23 @@ int setproctitle(char *title)
> > return -1;
> > }
> >  
> > -   /* Skip the first 25 fields, column 26-28 are start_code, end_code,
> > +/* Find the end of the process name, which is bounded by parentheses.
> > + * Some processes may include parens in their name, which in the worst
> > + * case can trigger a false positive no matter what we do here.  This
> > + * tries to avoid that by looking for a ') ' sequence.
> > + */
> 
> Interesting. Ccing Mr. Andersen in case he has any opinions or wants to
> contribute a patch.

I suppose we could read /proc/self/status, find what's after the Name:
field, and then skip that many characters? I probably won't have time
to look at a patch this week, though.
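
A minimal sketch of that idea (an illustration only, not liblxc code; the
helper name and error handling are my own):

    /* Read the comm length from the "Name:" line of /proc/self/status.
     * That name is the same comm string that appears in parentheses in
     * /proc/self/stat, so a caller could skip '(' + len + ')' there
     * instead of scanning for a ") " sequence.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    static ssize_t proc_name_len(void)
    {
        char line[256];
        ssize_t len = -1;
        FILE *f = fopen("/proc/self/status", "r");

        if (!f)
            return -1;

        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "Name:", 5) == 0) {
                char *name = line + 5;

                while (*name == ' ' || *name == '\t')
                    name++;
                len = strcspn(name, "\n");
                break;
            }
        }

        fclose(f);
        return len;
    }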

Tycho

Re: [lxc-users] How to apply commands in howtos - macvlan and disk passthrough

2016-12-20 Thread Tycho Andersen
Hi John,

On Tue, Dec 20, 2016 at 10:39:07PM +0100, john.gub...@web.de wrote:
>Hello,
> 
>I have a directory on my host system and want to create several containers
>with the same users inside. I would like to pass the directory through to
>each container and allow the users to write and read on it. The network
>connection should be done using macvlan.
>The howtos I have read so far show how to set up lxd, which works very
>well on my 16.04 host. Starting a container works out of the box as
>an unprivileged user as well.
> 
>My questions:
>Is it even possible to share one directory on the host with several
>containers?
>All the howtos I could find mention some commands that need to be
>applied, but they do not tell me what I actually need to type in to
>make them work:
> 
> 
>"That means you can create a container with the following configuration:
> 
>lxc.id_map = u 0 10 65536
> 
>  lxc.id_map = g 0 10 65536"
> 
>There is a big list of possible options on github, but where does it
>explain how to apply them?
> 
>Does someone know a detailed howto that describes a setup similar to
>mine?

http://tycho.ws/blog/2016/12/uidmap.html is a blog post I wrote a
while ago talking about how to set this up with your home directory.
You can mimic the settings for whatever user map you want, though.

Cheers,

Tycho

>Every time I read something, I feel like I am missing something important,
>because I could not find a coherent compendium of possible options on how
>to do something.
> 
>kind regards,
>John

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] do_dump: 866 dump failed with 1

2016-09-08 Thread Tycho Andersen
On Wed, Aug 17, 2016 at 05:09:34PM -0400, Connor Zanin wrote:
> Hi all,
> 
> Environment:
> ubuntu server 16.04
> kernel 4.4
> both lxc and criu packages downloaded from ubuntu repos
> 
> I am trying to checkpoint a privileged container. After many hours of
> banging my head against the internet, I humbly turn to the mailing list.
> 
> Thanks for any and all help!

Can you post the logs? There should be a dump.log in the directory you
chose to put the images in. The LXC log may also be instructive.

Tycho

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxd in Debian

2016-08-23 Thread Tycho Andersen
On Tue, Aug 23, 2016 at 08:40:33PM +, P. Lowe wrote:
> For socket activation of the LXD daemon or socket activation of a container?

For socket activation of the LXD daemon,

https://github.com/lxc/lxd/blob/master/lxd/daemon.go#L866

Tycho

> -P. Lowe
> 
> Quoting Tycho Andersen <tycho.ander...@canonical.com>:
> 
> >On Tue, Aug 23, 2016 at 04:56:43PM +, P. Lowe wrote:
> >>
> >>Why on earth does lxd depend on "golang-github-coreos-go-systemd-dev"?
> >>
> >>I'm also wondering, why should lxd even depend on systemd?
> >
> >LXD has the capability to be socket activated; this library implements
> >a Go API for handling the case when it is socket activated.
> >
> >Tycho
> >
> >>-P. Lowe
> >>
> >>Quoting "Fajar A. Nugraha" <l...@fajar.net>:
> >>
> >>>On Tue, Aug 23, 2016 at 3:28 PM, Micky Del Favero <mi...@mesina.net> wrote:
> >>>>Paul Dino Jones <spacefreak18@brak.space> writes:
> >>>>
> >>>>>So, I see lxc 2.0 has made its way into Stretch and Jessie backports,
> >>>>>but I don't see any activity on lxd. Is this going to happen in time
> >>>>>for the Stretch freeze?
> >>>>
> >>>>I've packaged LXD for Jessie (Devuan's, but the same applies to Debian);
> >>>>here I've explained what I did:
> >>>>http://micky.it/log/compiling-lxd-on-devuan.html
> >>>>https://lists.linuxcontainers.org/pipermail/lxc-users/2016-July/012045.html
> >>>>if nobody packages LXD, you can do it yourself following my approach.
> >>>
> >>>
> >>>I'm confused.
> >>>
> >>>How did you manage to get it built, when the source from
> >>>http://packages.ubuntu.com/xenial-updates/lxd has
> >>>
> >>>Build-Depends: debhelper (>= 9),
> >>>   dh-apparmor,
> >>>   dh-golang,
> >>>   dh-systemd,
> >>>   golang-go,
> >>>   golang-go.crypto-dev,
> >>>   golang-context-dev,
> >>>   golang-github-coreos-go-systemd-dev,
> >>>   golang-github-gorilla-mux-dev,
> >>>   golang-github-gosexy-gettext-dev,
> >>>   golang-github-mattn-go-colorable-dev,
> >>>   golang-github-mattn-go-sqlite3-dev,
> >>>   golang-github-olekukonko-tablewriter-dev,
> >>>   golang-github-pborman-uuid-dev,
> >>>   golang-gocapability-dev,
> >>>   golang-gopkg-flosch-pongo2.v3-dev,
> >>>   golang-gopkg-inconshreveable-log15.v2-dev,
> >>>   golang-gopkg-lxc-go-lxc.v2-dev,
> >>>   golang-gopkg-tomb.v2-dev,
> >>>   golang-goprotobuf-dev,
> >>>   golang-petname-dev,
> >>>   golang-yaml.v2-dev,
> >>>   golang-websocket-dev,
> >>>   help2man,
> >>>   lxc-dev (>= 1.1.0~),
> >>>   pkg-config,
> >>>   protobuf-compiler
> >>>
> >>>and https://packages.debian.org/petname returns zero result?
> >>>altlinux's lxd rpm (which I use as starting point for my c6 build) has
> >>>similar requirement, and when I tried removing golang-petname-dev
> >>>requirement when building for centos, the build failed, so I had to
> >>>create a new rpm package for that.
> >>>
> >>>--
> >>>Fajar
> >>>___
> >>>lxc-users mailing list
> >>>lxc-users@lists.linuxcontainers.org
> >>>http://lists.linuxcontainers.org/listinfo/lxc-users
> >>
> >>
> >>
> >>
> >>___
> >>lxc-users mailing list
> >>lxc-users@lists.linuxcontainers.org
> >>http://lists.linuxcontainers.org/listinfo/lxc-users
> >___
> >lxc-users mailing list
> >lxc-users@lists.linuxcontainers.org
> >http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> 
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd in Debian

2016-08-23 Thread Tycho Andersen
On Tue, Aug 23, 2016 at 04:56:43PM +, P. Lowe wrote:
> 
> Why on earth does lxd depend on "golang-github-coreos-go-systemd-dev"?
> 
> I'm also wondering, why should lxd even depend on systemd?

LXD has the capability to be socket activated; this library implements
a Go API for handling the case when it is socket activated.
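
For reference, "socket activated" means systemd creates and binds the
listening socket itself and hands it to the daemon at startup; LXD consumes
those sockets through the go-systemd library. A minimal sketch of the same
mechanism in C (not LXD's actual code; the build line is an assumption):

    /* Minimal socket-activation consumer: fds passed by systemd start at
     * SD_LISTEN_FDS_START (3).  Build e.g. with: cc demo.c -lsystemd
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <systemd/sd-daemon.h>

    int main(void)
    {
        int n = sd_listen_fds(0);   /* number of sockets systemd handed us */

        if (n <= 0) {
            /* not socket activated: fall back to socket()/bind()/listen() */
            fprintf(stderr, "no sockets passed by systemd\n");
            return 1;
        }

        for (int i = 0; i < n; i++) {
            int fd = SD_LISTEN_FDS_START + i;
            int client = accept(fd, NULL, NULL);   /* serve the inherited socket */

            if (client >= 0)
                close(client);
        }

        return 0;
    }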

Tycho

> -P. Lowe
> 
> Quoting "Fajar A. Nugraha" :
> 
> >On Tue, Aug 23, 2016 at 3:28 PM, Micky Del Favero  wrote:
> >>Paul Dino Jones  writes:
> >>
> >>>So, I see lxc 2.0 has made its way into Stretch and Jessie backports,
> >>>but I don't see any activity on lxd. Is this going to happen in time
> >>>for the Stretch freeze?
> >>
> >>I've packaged LXD for Jessie (Devuan's, but the same applies to Debian);
> >>here I've explained what I did:
> >>http://micky.it/log/compiling-lxd-on-devuan.html
> >>https://lists.linuxcontainers.org/pipermail/lxc-users/2016-July/012045.html
> >>if nobody packages LXD, you can do it yourself following my approach.
> >
> >
> >I'm confused.
> >
> >How did you manage to get it built, when the source from
> >http://packages.ubuntu.com/xenial-updates/lxd has
> >
> >Build-Depends: debhelper (>= 9),
> >   dh-apparmor,
> >   dh-golang,
> >   dh-systemd,
> >   golang-go,
> >   golang-go.crypto-dev,
> >   golang-context-dev,
> >   golang-github-coreos-go-systemd-dev,
> >   golang-github-gorilla-mux-dev,
> >   golang-github-gosexy-gettext-dev,
> >   golang-github-mattn-go-colorable-dev,
> >   golang-github-mattn-go-sqlite3-dev,
> >   golang-github-olekukonko-tablewriter-dev,
> >   golang-github-pborman-uuid-dev,
> >   golang-gocapability-dev,
> >   golang-gopkg-flosch-pongo2.v3-dev,
> >   golang-gopkg-inconshreveable-log15.v2-dev,
> >   golang-gopkg-lxc-go-lxc.v2-dev,
> >   golang-gopkg-tomb.v2-dev,
> >   golang-goprotobuf-dev,
> >   golang-petname-dev,
> >   golang-yaml.v2-dev,
> >   golang-websocket-dev,
> >   help2man,
> >   lxc-dev (>= 1.1.0~),
> >   pkg-config,
> >   protobuf-compiler
> >
> >and https://packages.debian.org/petname returns zero result?
> >altlinux's lxd rpm (which I use as starting point for my c6 build) has
> >similar requirement, and when I tried removing golang-petname-dev
> >requirement when building for centos, the build failed, so I had to
> >create a new rpm package for that.
> >
> >--
> >Fajar
> >___
> >lxc-users mailing list
> >lxc-users@lists.linuxcontainers.org
> >http://lists.linuxcontainers.org/listinfo/lxc-users
> 
> 
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] undefined symbol: current_config on custom-compiled lxc2

2016-08-01 Thread Tycho Andersen
On Mon, Jul 18, 2016 at 04:17:48PM +0300, Nikolay Borisov wrote:
> Hello List,
> 
> 
> So I tried compiling both lxc 2.0 from github as well as the 2.0.3 stable
> package from the web page. Everything went fine:
> 
> 
> Environment:
>  - compiler: gcc
>  - distribution: centos
>  - init script type(s): sysvinit
>  - rpath: no
>  - GnuTLS: no
>  - Bash integration: yes
> 
> Security features:
>  - Apparmor: no
>  - Linux capabilities: yes
>  - seccomp: yes
>  - SELinux: yes
>  - cgmanager: no
> 
> Bindings:
>  - lua: yes
>  - python3: yes
> 
> Documentation:
>  - examples: yes
>  - API documentation: yes
>  - user documentation: yes
> 
> Debugging:
>  - tests: no
>  - mutex debugging: no
> 
> Paths:
>  - Logs in configpath: no
> 
> 
> However, when I try running lxc-create or lxc-start I get the following
> error: lxc-start: symbol lookup error: lxc-start: undefined symbol:
> current_config. Ldd on the lxc-ls binary shows that all libraries are
> resolved. This is on centos 6.7 box with 4.4 kernel. Any ideas?

What is the actual output of ldd? I suspect you're picking up the
system's liblxc with your custom-compiled lxc binaries, which will
cause problems if the system's liblxc is old enough.

ldconfig -v may shed some light.

Tycho

Re: [lxc-users] LXD lxc start

2016-08-01 Thread Tycho Andersen
On Sat, Jul 30, 2016 at 05:12:46PM +0200, Goran Brkuljan wrote:
> Hi,
> 
> I am suddenly missing lxdbr0, and I am getting an error while starting an
> lxc container.

What's the output of `journalctl -u lxd-bridge`?

Tycho

> lxc start app01
> error: Error calling 'lxd forkstart app01 /var/lib/lxd/containers
> /var/log/lxd/app01/lxc.conf': err='exit status 1'
> Try `lxc info --show-log app01` for more info
> 
> Also, when I try 'sudo dpkg-reconfigure -p medium lxd', the lxd bridge is not
> created.
> 
> Lxd log in attachment.
> 
> Regards,
> 
> Goran

> lxc info --show-log app01,
> 
> Name: app01
> Architecture: x86_64
> Created: 2016/07/10 14:13 UTC
> Status: Stopped
> Type: persistent
> Profiles: default
> 
> Log:
> 
> lxc 20160730170007.032 INFO lxc_start - 
> start.c:lxc_check_inherited:251 - closed inherited fd 3
> lxc 20160730170007.032 INFO lxc_start - 
> start.c:lxc_check_inherited:251 - closed inherited fd 7
> lxc 20160730170007.034 INFO lxc_container - 
> lxccontainer.c:do_lxcapi_start:797 - Attempting to set proc title to [lxc 
> monitor] /var/lib/lxd/containers app01
> lxc 20160730170007.034 INFO lxc_start - 
> start.c:lxc_check_inherited:251 - closed inherited fd 7
> lxc 20160730170007.034 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - 
> LSM security driver AppArmor
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .reject_force_umount  # comment 
> this to allow umount -f;  not recommended.
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for reject_force_umount 
> action 0
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force 
> umounts
> 
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for reject_force_umount 
> action 0
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force 
> umounts
> 
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .[all].
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .kexec_load errno 1.
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for kexec_load action 
> 327681
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for kexec_load action 
> 327681
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .open_by_handle_at errno 1.
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for open_by_handle_at 
> action 327681
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for open_by_handle_at 
> action 327681
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .init_module errno 1.
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for init_module action 
> 327681
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for init_module action 
> 327681
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .finit_module errno 1.
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for finit_module action 
> 327681
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for finit_module action 
> 327681
> lxc 20160730170007.034 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .delete_module errno 1.
> lxc 20160730170007.035 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for delete_module action 
> 327681
> lxc 20160730170007.035 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for delete_module action 
> 327681
> lxc 20160730170007.035 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:456 - Merging in the compat seccomp ctx into the 
> main one
> lxc 20160730170007.035 INFO lxc_conf - 
> conf.c:run_script_argv:367 - Executing script '/usr/bin/lxd callhook 
> /var/lib/lxd 165 start' for container 'app01', config section 'lxc'
> lxc 20160730170007.035 INFO lxc_start - 
> start.c:lxc_check_inherited:251 - closed inherited fd 3
> lxc 20160730170007.035 INFO lxc_start - 
> start.c:lxc_check_inherited:251 - closed inherited fd 7
>   

Re: [lxc-users] how to determine if in LXD

2016-08-01 Thread Tycho Andersen
On Mon, Aug 01, 2016 at 04:01:00PM +0200, tapczan wrote:
> Hello
> 
> There is an easy way to determine if I'm in LXC: the content of the file
> /proc/self/cgroup shows a path with /lxc, e.g.:
> 
> 2:cpu:/lxc/host
> 
> However in LXD this rule is no longer valid:
> 
> 2:cpu:/
> 
> It looks like a real host from that point of view.
> 
> So tools like chef OHAI have issues determining the virtualisation role
> and status.
> 
> Is there an easy way to determine if I'm inside an LXD container?

Try `systemd-detect-virt` on systemd-based distros, or
`running-in-container` on upstart-based distros.
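
Both tools essentially look for markers the container runtime leaves behind;
one such marker is the `container` variable in PID 1's environment, which
LXC and LXD set (e.g. to "lxc"). A rough sketch of that check (an
illustration of the idea, not the actual source of either tool):

    /* Look for a "container=" entry in PID 1's environment; LXC/LXD set it
     * for the container's init.  Reading /proc/1/environ usually requires
     * root inside the container.
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[4096];
        size_t n;
        FILE *f = fopen("/proc/1/environ", "r");

        if (!f) {
            perror("/proc/1/environ");
            return 2;
        }

        n = fread(buf, 1, sizeof(buf) - 1, f);
        fclose(f);
        buf[n] = '\0';

        /* entries are NUL-separated "KEY=value" strings */
        for (size_t off = 0; off < n; off += strlen(buf + off) + 1) {
            if (strncmp(buf + off, "container=", 10) == 0) {
                printf("in a container: %s\n", buf + off + 10);
                return 0;
            }
        }

        printf("no container marker found\n");
        return 1;
    }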

Tycho

> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the state of the art for migrating unprivilegied containers ?

2016-07-19 Thread Tycho Andersen
On Mon, Jun 27, 2016 at 09:24:40AM +0200, Pierre Couderc wrote:
> I understand that live migration is not stable.
> I have seen a note to migrate from one user to another user inside a host :
> http://unix.stackexchange.com/questions/156477/migrate-an-unprivileged-lxc-container-between-users
> But is it possible to migrate stopped containers (host under jessie) from
> one host to another one ?

For lxd, it is simply `lxc copy`. For the legacy lxc-* tools, you're
sort of on your own (i.e. you can use filesystem primitives to move
things around, but there's no built-in support for it).

Tycho

> Thanks
> PC
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] [lxd] autofs

2016-07-04 Thread Tycho Andersen
On Mon, Jul 04, 2016 at 12:20:30PM +0200, Rémy Dernat wrote:
> Ok, I will answer myself: my container was not running privileged.
> 
> It is now working fine in a privileged container.
> 
> However, I am quite interested in doing such a thing in an unprivileged
> container. I tried:
> 
> (my profile is 'vlan' because I also need some NAT stuff).
> 
> echo Y | sudo tee /sys/module/fuse/parameters/userns_mounts
> echo Y | sudo tee /sys/module/ext4/parameters/userns_mounts
> lxc profile set vlan raw.lxc lxc.aa_profile=unconfined
> lxc profile device add vlan autofs unix-char path=/dev/autofs
> lxc profile device add vlan fuse unix-char path=/dev/fuse
> lxc profile device add vlan loop0 unix-block path=/dev/loop0
> lxc profile apply my-container vlan
> lxc restart my-container
> 
> 
> My apparmor from host is:
> 
> cat /etc/apparmor.d/lxc/lxc-default-with-mounting
> # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
> # will source all profiles under /etc/apparmor.d/lxc
> 
> profile lxc-container-default-with-mounting
> flags=(attach_disconnected,mediate_deleted) {
>   #include 
> 
> # allow standard blockdevtypes.
> # The concern here is in-kernel superblock parsers bringing down the
> # host with bad data.  However, we continue to disallow proc, sys,
> securityfs,
> # etc to nonstandard locations.
>   mount fstype=ext*,
>   mount fstype=xfs,
>   mount fstype=nfs,
>   mount fstype=nfs4,
>   mount fstype=rpc_pipefs,
>   mount fstype=autofs,
>   mount fstype=btrfs,
>   mount options=(rw, bind),
> }
> 
> 
> Although I think this is not needed as I already wrote:
> lxc profile set vlan raw.lxc lxc.aa_profile=unconfined
> 
> I restarted both lxd and apparmor without success.
> 
> It seems that the only way to do it is a nested container or a privileged
> one.

The kernel refuses to let non-root mount a large majority of
filesystems; ext4 and the proc filesystems and such are special
exceptions, not the rule.
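
A quick way to see this is to call mount(2) directly as an unprivileged
user: the kernel rejects it before apparmor or any LXC configuration is
consulted. A minimal sketch (the target path and fs type are just examples):

    /* As an unprivileged user, mount(2) is refused by the kernel itself
     * for most filesystem types.
     */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("none", "/mnt", "autofs", 0, NULL) != 0)
            perror("mount autofs");   /* typically EPERM when not root */

        return 0;
    }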

> Cheers,
> 
> Rémy
> 
> 2016-07-04 10:28 GMT+02:00 Rémy Dernat <remy...@gmail.com>:
> 
> > Hi Tycho,
> >
> > It is launched as root, so I suppose that my container is
> > privileged. Here is the content of my
> > "/etc/apparmor.d/lxc/lxc-default-with-mounting" :
> >
> >
> >
> > # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers,
> > which
> > # will source all profiles under /etc/apparmor.d/lxc
> >
> > profile lxc-container-default-with-mounting
> > flags=(attach_disconnected,mediate_deleted) {
> >   #include 
> >
> > # allow standard blockdevtypes.
> > # The concern here is in-kernel superblock parsers bringing down the
> > # host with bad data.  However, we continue to disallow proc, sys,
> > securityfs,
> > # etc to nonstandard locations.
> >   mount fstype=ext*,
> >   mount fstype=xfs,
> >   mount fstype=btrfs,
> > }
> >
> >
> > I tried to add "mount fstype=nfs,", then restarted my lxd service and my
> > container, but that did not change anything.
> >
> > In fact, I am not able to mount any nfs share:
> >
> >
> > mount -t nfs nas-0-2:/export/bio /tmp/bio
> > mount.nfs: access denied by server while mounting nas-0-2:/export/bio
> >
> >
> > Although nas-0-2 allows mounts from my client IP.
> >
> >
> > :(
> >
> >
> >
> >
> > 2016-07-01 21:57 GMT+02:00 Tycho Andersen <tycho.ander...@canonical.com>:
> >
> >> On Fri, Jul 01, 2016 at 04:15:57PM +0200, Rémy Dernat wrote:
> >> > Hi,
> >> >
> >> > I tried to install basically autofs in the container and mount
> >> directories
> >> > with automount, but as a newbie, everything failed ;)
> >> >
> >> > automount -f --debug
> >> > automount: test mount forbidden or incorrect kernel protocol version,
> >> > kernel protocol version 5.00 or above required.
> >> >
> >> > I know that in OpenVZ, you need to mount the filesystem on the host and
> >> > then use simfs on the container through a container "mount" file.
> >> > Then, I saw problems with LXC here:
> >> > http://comments.gmane.org/gmane.linux.kernel.containers.lxc.general/894
> >> > And after reading https://github.com/lxc/lxd/issues/714 , I tried:
> >> >
> >> > lxc config device add my-container autofs unix-char path=/dev/autofs
> >> >
> >> > Now on container side:
> >> > #ls

Re: [lxc-users] [lxd] autofs

2016-07-01 Thread Tycho Andersen
On Fri, Jul 01, 2016 at 04:15:57PM +0200, Rémy Dernat wrote:
> Hi,
> 
> I tried to install basically autofs in the container and mount directories
> with automount, but as a newbie, everything failed ;)
> 
> automount -f --debug
> automount: test mount forbidden or incorrect kernel protocol version,
> kernel protocol version 5.00 or above required.
> 
> I know that in OpenVZ, you need to mount the filesystem on the host and
> then use simfs on the container through a container "mount" file.
> Then, I saw problems with LXC here:
> http://comments.gmane.org/gmane.linux.kernel.containers.lxc.general/894
> And after reading https://github.com/lxc/lxd/issues/714 , I tried:
> 
> lxc config device add my-container autofs unix-char path=/dev/autofs
> 
> Now on container side:
> #ls -l /dev/autofs
> crw-rw 1 root root 10, 235 Jul  1 14:06 /dev/autofs
> 
> 
> However, the issue is still here:
> automount -f --debug
> automount: test mount forbidden or incorrect kernel protocol version,
> kernel protocol version 5.00 or above required.
> 
> "autofs4" module is loaded in the kernel.
> 
> I tried to remove/purge autofs and switch to the autofs5 package, and I
> still get the same error.

Is the container privileged? Are you in an apparmor mode which allows
mounts? I don't think unprivileged mounting of autofs is allowed, and
our apparmor profiles by default disallow most kinds of mounts.

> The container, like the host are ubuntu16.04.
> 
> Any help would be useful !
> 
> Best regards,
> Remy

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] Live migration mkdtemp failure

2016-06-23 Thread Tycho Andersen
On Wed, Jun 22, 2016 at 11:09:43AM -0700, jjs - mainphrame wrote:
> Hi Tycho.
> 
> It's been on a to-do list to file a bug for this limit, but I hadn't gotten
> around to it.
> 
> You can see the size indications in the messages below -

Thanks, I've filed,

https://bugs.launchpad.net/ubuntu/+source/criu/+bug/1595596

I'll make sure whatever default size we do set, it'll be larger than
that :)

Tycho

> root@olympia:~# lxc list
> +--------+---------+-----------------------+------+------------+-----------+
> |  NAME  |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
> +--------+---------+-----------------------+------+------------+-----------+
> | akita  | RUNNING | 192.168.111.22 (eth0) |      | PERSISTENT | 0         |
> +--------+---------+-----------------------+------+------------+-----------+
> | kangal | RUNNING | 192.168.111.44 (eth0) |      | PERSISTENT | 0         |
> +--------+---------+-----------------------+------+------------+-----------+
> root@olympia:~# lxc move akita lxd1:
> error: Error transferring container data: checkpoint failed:
> (02.064728) Error (files-reg.c:683): Can't dump ghost file
> /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 of 1566440 size, increase
> limit
> (02.064730) Error (cr-dump.c:1356): Dump mappings (pid: 4685) failed with -1
> (02.068126) Error (cr-dump.c:1600): Dumping FAILED.
> root@olympia:~# lxc move kangal lxd1:
> error: Error transferring container data: checkpoint failed:
> (14.495544) Error (files-reg.c:683): Can't dump ghost file
> /usr/lib/x86_64-linux-gnu/libxml2.so.2.9.1 of 1465592 size, increase limit
> (14.495547) Error (cr-dump.c:1356): Dump mappings (pid: 11956) failed with
> -1
> (14.500840) Error (cr-dump.c:1600): Dumping FAILED.
> root@olympia:~#
> 
> Regards,
> 
> Jake
> 
> 
> On Wed, Jun 22, 2016 at 8:04 AM, Tycho Andersen <
> tycho.ander...@canonical.com> wrote:
> 
> > On Tue, Jun 21, 2016 at 09:27:21AM -0700, jjs - mainphrame wrote:
> > > That particular error was resolved, but the lxc live migration doesn't
> > work
> > > for a different reason now. We now get an error that says "can't dump
> > ghost
> > > file" because of apparent size limitations - a limit less than the size
> > of
> > > any lxc container we have running here.
> >
> > How big is the ghost file that you're running into and what
> > application is it from? Perhaps we should just increase the default
> > limit.
> >
> > Tycho
> >
> > > (In contrast, live migration on all of our Openvz 7 containers works
> > > reliably)
> > >
> > > Jake
> > >
> > >
> > >
> > >
> > > On Tue, Jun 21, 2016 at 4:19 AM, McDonagh, Ed <ed.mcdon...@rmh.nhs.uk>
> > > wrote:
> > >
> > > >
> > > >
> > > > > On Tue, Mar 29, 2016 at 09:30:19AM -0700, jjs - mainphrame wrote:
> > > > > > On Tue, Mar 29, 2016 at 7:18 AM, Tycho Andersen <
> > > > > > tycho.andersen at canonical.com> wrote:
> > > > > >
> > > > > > > On Mon, Mar 28, 2016 at 08:47:24PM -0700, jjs - mainphrame wrote:
> > > > > >>  > I've looked at ct migration between 2 ubuntu 16.04 hosts today,
> > > > and had
> > > > > > > > some interesting problems;  I find that migration of stopped
> > > > containers
> > > > > > > > works fairly reliably; but live migration, well, it transfers a
> > > > lot of
> > > > > > > > data, then exits with a failure message. I can then move the
> > same
> > > > > > > > container, stopped, with no problem.
> > > > > > > >
> > > > > > > > The error is the same every time, a failure of "mkdtemp" -
> > > > > > >
> > > > > > > It looks like your host /tmp isn't writable by the uid map that
> > the
> > > > > > > container is being restored as?
> > > > > > >
> > > > > >
> > > > > > Which is odd, since /tmp has 1777 perms on both hosts, so I don't
> > see
> > > > how
> > > > > > it could be a permissions problem. Surely the default apparmor
> > profile
> > > > is
> > > > > > not the cause? You did give me a new idea though, and I'll set up a
> > > > test
> > > > > > with privileged containers for comparison. Is there a switch to
> > enable
> > > > > > verbose logging?
> >

Re: [lxc-users] Live migration mkdtemp failure

2016-06-22 Thread Tycho Andersen
On Tue, Jun 21, 2016 at 09:27:21AM -0700, jjs - mainphrame wrote:
> That particular error was resolved, but the lxc live migration doesn't work
> for a different reason now. We now get an error that says "can't dump ghost
> file" because of apparent size limitations - a limit less than the size of
> any lxc container we have running here.

How big is the ghost file that you're running into and what
application is it from? Perhaps we should just increase the default
limit.

Tycho

> (In contrast, live migration on all of our Openvz 7 containers works
> reliably)
> 
> Jake
> 
> 
> 
> 
> On Tue, Jun 21, 2016 at 4:19 AM, McDonagh, Ed <ed.mcdon...@rmh.nhs.uk>
> wrote:
> 
> >
> >
> > > On Tue, Mar 29, 2016 at 09:30:19AM -0700, jjs - mainphrame wrote:
> > > > On Tue, Mar 29, 2016 at 7:18 AM, Tycho Andersen <
> > > > tycho.andersen at canonical.com> wrote:
> > > >
> > > > > On Mon, Mar 28, 2016 at 08:47:24PM -0700, jjs - mainphrame wrote:
> > > >>  > I've looked at ct migration between 2 ubuntu 16.04 hosts today,
> > and had
> > > > > > some interesting problems;  I find that migration of stopped
> > containers
> > > > > > works fairly reliably; but live migration, well, it transfers a
> > lot of
> > > > > > data, then exits with a failure message. I can then move the same
> > > > > > container, stopped, with no problem.
> > > > > >
> > > > > > The error is the same every time, a failure of "mkdtemp" -
> > > > >
> > > > > It looks like your host /tmp isn't writable by the uid map that the
> > > > > container is being restored as?
> > > > >
> > > >
> > > > Which is odd, since /tmp has 1777 perms on both hosts, so I don't see
> > how
> > > > it could be a permissions problem. Surely the default apparmor profile
> > is
> > > > not the cause? You did give me a new idea though, and I'll set up a
> > test
> > > > with privileged containers for comparison. Is there a switch to enable
> > > > verbose logging?
> > >
> > > It already is enabled, you can find the full logs in
> > > /var/log/lxd/$container/migration_*
> > >
> > > Perhaps the pwd of the CRIU task is what's broken instead, since CRIU
> > > isn't supplying a full mkdtemp template. I'll have a deeper look in a
> > > bit.
> > >
> > > Tycho
> > >
> > > >
> > > > > >
> > > > > > root at ronnie:~# lxc move third lxd:
> > > > > > error: Error transferring container data: restore failed:
> > > > > > (00.033172)  1: Error (cr-restore.c:1489): mkdtemp failed
> > > > > > crtools-proc.x9p5OH: Permission denied
> > > > > > (00.060072) Error (cr-restore.c:1352): 9188 killed by signal 9
> > > > > > (00.117126) Error (cr-restore.c:2182): Restoring FAILED.
> >
> > I've been getting the same error - was the issue ever resolved for
> > non-privileged containers?
> >
> > Kind regards
> > Ed
> > #
> > Attention:
> > This e-mail and any attachment is for authorised use by the intended
> > recipient(s) only. It may contain proprietary, confidential and/or
> > privileged information and should not be copied, disclosed, distributed,
> > retained or used by any other party. If you are not an intended recipient
> > please notify the sender immediately and delete this e-mail (including
> > attachments and copies).
> >
> > The statements and opinions expressed in this e-mail are those of the
> > author and do not necessarily reflect those of the Royal Marsden NHS
> > Foundation Trust. The Trust does not take any responsibility for the
> > statements and opinions of the author.
> >
> > Website: http://www.royalmarsden.nhs.uk
> > #
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] About checkpoint and restore

2016-06-15 Thread Tycho Andersen
On Wed, Jun 15, 2016 at 09:39:58AM -0400, Xinyang Ge wrote:
> On Tue, Jun 14, 2016 at 11:05 PM, Fajar A. Nugraha  wrote:
> 
> > If you're okay with "the containers contain identical software, and
> > fine with it being in just-booted state", then you might need to look
> > at "lxc copy"  (which can create multiple copies of the container from
> > the same source, each with its own unique MAC address) instead of "lxc
> > snapshot".
> 
> Not only identical software, but also identical process runtime state
> when I take the snapshot.
> 
> Does "lxc copy" maintain the process running state?

No, you'll need to use `lxc snapshot` for that; if the container is
running, the process state will be snapshotted automatically.

Tycho

> Thanks!
> 
> Xinyang
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] About checkpoint and restore

2016-06-14 Thread Tycho Andersen
On Tue, Jun 14, 2016 at 06:32:13PM -0400, Xinyang Ge wrote:
> Dear LXC users,
> 
> I am new to the linux container technology.  I am writing to ask if
> LXC supports checkpointing a whole container and restore it later or
> even on different machines?

Yes, it does, either via lxc-checkpoint or `lxc snapshot`, depending on
whether you want to use the lxc-* tools or LXD.

Tycho

> Thanks,
> Xinyang
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc move fails

2016-06-01 Thread Tycho Andersen
On Wed, Jun 01, 2016 at 02:13:55PM -0700, jjs - mainphrame wrote:
> I've been hoping for a return of live migration functionality as well.
> I tried after today's lxc upgrades, but no joy:
> 
> root@olympia:~# lxc move kangal lxd1:
> error: Error transferring container data: checkpoint failed:
> (00.614250) Error (files-reg.c:683): Can't dump ghost file
> /lib/x86_64-linux-gnu/libc-2.19.so of 1840928 size, increase limit
> (00.614252) Error (cr-dump.c:1356): Dump mappings (pid: 3572) failed with -1
> (00.617439) Error (cr-dump.c:1600): Dumping FAILED.
> root@olympia:~#
> 
> Both hosts are ubuntu 16.04, up to date as of this message.
> 
> Shall I file a bug report?

Sure, we should probably expose some way to increase the ghost file
size limit. I'm not sure if we can make it unlimited, though, so there
will always be some problem like this. Of course, 640k should be
enough for anyone and all that :)

But yes, please file a bug and we can at least make it configurable.

Tycho

> Jake
> 
> 
> On Wed, Jun 1, 2016 at 7:24 AM, Stéphane Graber  wrote:
> > On Wed, Jun 01, 2016 at 11:04:58PM +0900, Tomasz Chmielewski wrote:
> >> Trying to move a running container between two Ubuntu 16.04 servers with 
> >> the
> >> latest updates installed:
> >>
> >> # lxc move local-container remote:
> >> error: Error transferring container data: checkpoint failed:
> >> (00.316028) Error (pie-util-vdso.c:155): vdso: Not all dynamic entries are
> >> present
> >> (00.316211) Error (cr-dump.c:1600): Dumping FAILED.
> >>
> >>
> >> Expected?
> >
> > I don't believe I've seen that one before. Maybe Tycho has.
> >
> > Live migration is considered an experimental feature of LXD right now,
> > mostly because CRIU still needs quite a bit of work to serialize all
> > useful kernel structures.
> >
> > You may want to follow the "Sending bug reports" section of my post here
> > https://www.stgraber.org/2016/04/25/lxd-2-0-live-migration-912/
> >
> > That way we should have all the needed data to look into this.
> >
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com
> >
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc move fails

2016-06-01 Thread Tycho Andersen
On Thu, Jun 02, 2016 at 12:12:04AM +0900, Tomasz Chmielewski wrote:
> On 2016-06-01 23:24, Stéphane Graber wrote:
> 
> >>Expected?
> >
> >I don't believe I've seen that one before. Maybe Tycho has.
> >
> >Live migration is considered an experimental feature of LXD right now,
> >mostly because CRIU still needs quite a bit of work to serialize all
> >useful kernel structures.
> >
> >You may want to follow the "Sending bug reports" section of my post here
> >https://www.stgraber.org/2016/04/25/lxd-2-0-live-migration-912/
> >
> >That way we should have all the needed data to look into this.
> 
> FYI I'm using Linux 4.6.0 (from
> http://kernel.ubuntu.com/~kernel-ppa/mainline/) on both servers, as all
> earlier kernels are not stable with btrfs.
> 
> Could be it's not compatible?
> 
> Do you still want me to report the issue?

Yes, an issue report would be appreciated. Can you include the full
log? Looks like your container is using some program with some sort of
funny ELF headers that criu doesn't quite understand.

Thanks,

Tycho

> 
> Tomasz Chmielewski
> http://wpkg.org
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] New live migration issues

2016-05-16 Thread Tycho Andersen
On Mon, May 16, 2016 at 08:52:34AM +, Jamie Brown wrote:
> Hello again,
> 
> I’m now running LXD 2.0.0 on Ubuntu 16.04 LTS after having some amount of 
> success with a previous BETA on 14.04.
> 
> However, after a fresh install, the first attempt at a live migration has 
> failed;
> 
> lxc launch ubuntu:xenial host2:migrate
> lxc move migrate host3:
> error: Error transferring container data: restore failed:
> (00.499626)  1: Error (mount.c:2406): mnt: Can't mount at 
> ./sys/kernel/debug: Invalid argument
> (00.515123) Error (cr-restore.c:1350): 8965 exited, status=1
> (00.573498) Error (cr-restore.c:2182): Restoring FAILED.
> 
> The end of the migration log on host3;
> (00.499574)  1: mnt: 129:./sys/fs/cgroup/net_cls,net_prio private 1 
> shared 0 slave 0
> (00.499588)  1: mnt:   Mounting securityfs @./sys/kernel/security 
> (0)
> (00.499593)  1: mnt:   Bind /sys/kernel/security/ to 
> ./sys/kernel/security
> (00.499603)  1: mnt: 276:./sys/kernel/security private 0 shared 0 slave 1
> (00.499610)  1: mnt:   Mounting debugfs @./sys/kernel/debug (0)
> (00.499615)  1: mnt:   Bind /sys/kernel/debug/ to 
> ./sys/kernel/debug
> (00.499626)  1: Error (mount.c:2406): mnt: Can't mount at 
> ./sys/kernel/debug: Invalid argument
> (00.515123) Error (cr-restore.c:1350): 8965 exited, status=1
> (00.555143) Switching to new ns to clean ghosts
> (00.572964) uns: calling exit_usernsd (-1, 1)
> (00.573065) uns: daemon calls 0x4523c0 (8960, -1, 1)
> (00.573108) uns: `- daemon exits w/ 0
> (00.573458) uns: daemon stopped
> (00.573498) Error (cr-restore.c:2182): Restoring FAILED.
> 
> Both hosts have criu installed and are running the same kernel version 
> (4.4.0-22-generic). The CPU and hardware profile is absolutely identical.
> 
> Both hosts are using ZFS file storage.
> 
> Stop and copy between hosts works without issue. The criu dump also completes 
> successfully, so the issue appears to be with criu restore.
> 
> Any ideas?

In addition to the bug Stéphane linked to, I've also posted v3 of the
patchset to fix this (which I hope to be the final version) on the
CRIU mailing list:

https://lists.openvz.org/pipermail/criu/2016-May/028432.html

Tycho

> Thanks,
> 
> Jamie

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxc progress and a few questions

2016-04-25 Thread Tycho Andersen
Hi,

On Mon, Apr 18, 2016 at 12:47:31PM -0700, jjs - mainphrame wrote:
> Greetings -
> 
> As the 16.04 release date draws near, there have been ongoing daily
> updates but on my (ext4 based) xenial systems, migration of any lxd
> containers, stopped or running, hangs indefinitely.
> 
> There is an strace output of an unsucessful attempt at
> http://pastebin.com/raw/ge9zNUaX
> 
> Short of the one suggestion to format /var/lib as zfs and see what
> happens, is there anything else I can try?

I've just filed https://github.com/lxc/lxd/issues/1944 and hope to
look at it this week; I've been quite busy with other issues here for
release.

I'm not aware of anything you can do right now other than switching to
ZFS (which is really worth doing, ZFS is sweet).

Tycho

> Regards,
> 
> Jake
> 
> On Mon, Apr 11, 2016 at 8:25 AM, Tycho Andersen
> <tycho.ander...@canonical.com> wrote:
> > hi,
> >
> > On Fri, Apr 08, 2016 at 04:40:43PM -0700, jjs - mainphrame wrote:
> >> Ah, never mind - it doesn't appear to be solely a criu issue - even
> >> migration of stopped containers hangs forever now.
> >
> > Yeah, I think there is some behavior either with rsync or our
> > websocket libraries that's causing this hang. I've looked into it but
> > haven't had any luck. Try the zfs backend; that should work better.
> >
> > Also, there is a bug with xenial kernels -16+ that prevent CRIU from
> > working (you'll get an EBUSY trying to mount sysfs), so if you have
> > -15 or below that should work.
> >
> > Sorry for the delay, been dealing with a family emergency.
> >
> > Tycho
> >
> >> Jake
> >>
> >> On Fri, Apr 8, 2016 at 4:23 PM, jjs - mainphrame <j...@mainphrame.com> 
> >> wrote:
> >> > Ubuntu 16.04, up to date -
> >> >
> >> > After today's updates, including a kernel upgrade to 4.4.0-18, I tried
> >> > live migration again:
> >> >
> >> > root@raskolnikov:~# lxc move third lxd2:
> >> >
> >> > One hour later:
> >> >
> >> > root@raskolnikov:~# lxc move third lxd2:
> >> >
> >> > Still stuck, and the migration file in /var/log/lxd/third has not been 
> >> > created.
> >> >
> >> > Tycho said on Mar 30 that the situation should be sorted soon, but
> >> > mentioned the git repo:
> >> > https://github.com/tych0/criu/tree/cgroup-root-mount
> >> >
> >> > Should live migration work with criu from git?
> >> >
> >> > Feel free to advise me on what information I can supply, not only for
> >> > the ct migration issues, but also for the new dhcp issue
> >> >
> >> > Regards,
> >> >
> >> > Jake
> >> >
> >> >
> >> > On Thu, Apr 7, 2016 at 11:01 PM, jjs - mainphrame <j...@mainphrame.com> 
> >> > wrote:
> >> >> (Bump) -
> >> >>
> >> >> Any thoughts on what to try for the CT migration and dhcp issues?
> >> >> Running up to date ubuntu 16.04 beta -
> >> >>
> >> >> Regards,
> >> >>
> >> >> Jake
> >> >>
> >> >> On Wed, Apr 6, 2016 at 3:18 PM, jjs - mainphrame <j...@mainphrame.com> 
> >> >> wrote:
> >> >>> Greetings -
> >> >>>
> >> >>> I've not yet been able to reproduce that one shining moment from Mar
> >> >>> 29 when live migration of privileged containers was working, under
> >> >>> kernel 4.4.0-15
> >> >>>
> >> >>> To recap. live container migration broke with 4.4.0-16, and is still
> >> >>> broken in 4.4.0-17 - but  now, instead of producing an error message,
> >> >>> an attempt to live migrate a container merely hangs forever. Is that
> >> >>> expected, or should I be seeing something more? BTW - the migration
> >> >>> dump log for that container hasn't been touched for a week. I'll be
> >> >>> glad to supply more info if this is not a known issue.
> >> >>>
> >> >>> Recent updates seem to have created a new problem. the CTs which
> >> >>> configure their own network settings work (aside from migration) but
> >> >>> none of the CTs which depend on dhcp are getting IPs. BTW I'm using a
> >> >>> bridge connected to my local network and dhcp, not the default lxc
> >> >>> dhcp server. I see the pa

Re: [lxc-users] lxc progress and a few questions

2016-04-11 Thread Tycho Andersen
hi,

On Fri, Apr 08, 2016 at 04:40:43PM -0700, jjs - mainphrame wrote:
> Ah, never mind - it doesn't appear to be solely a criu issue - even
> migration of stopped containers hangs forever now.

Yeah, I think there is some behavior either with rsync or our
websocket libraries that's causing this hang. I've looked into it but
haven't had any luck. Try the zfs backend; that should work better.

Also, there is a bug with xenial kernels -16+ that prevent CRIU from
working (you'll get an EBUSY trying to mount sysfs), so if you have
-15 or below that should work.

Sorry for the delay, been dealing with a family emergency.

Tycho

> Jake
> 
> On Fri, Apr 8, 2016 at 4:23 PM, jjs - mainphrame <j...@mainphrame.com> wrote:
> > Ubuntu 16.04, up to date -
> >
> > After today's updates, including a kernel upgrade to 4.4.0-18, I tried
> > live migration again:
> >
> > root@raskolnikov:~# lxc move third lxd2:
> >
> > One hour later:
> >
> > root@raskolnikov:~# lxc move third lxd2:
> >
> > Still stuck, and the migration file in /var/log/lxd/third has not been 
> > created.
> >
> > Tycho said on Mar 30 that the situation should be sorted soon, but
> > mentioned the git repo:
> > https://github.com/tych0/criu/tree/cgroup-root-mount
> >
> > Should live migration work with criu from git?
> >
> > Feel free to advise me on what information I can supply, not only for
> > the ct migration issues, but also for the new dhcp issue
> >
> > Regards,
> >
> > Jake
> >
> >
> > On Thu, Apr 7, 2016 at 11:01 PM, jjs - mainphrame <j...@mainphrame.com> 
> > wrote:
> >> (Bump) -
> >>
> >> Any thoughts on what to try for the CT migration and dhcp issues?
> >> Running up to date ubuntu 16.04 beta -
> >>
> >> Regards,
> >>
> >> Jake
> >>
> >> On Wed, Apr 6, 2016 at 3:18 PM, jjs - mainphrame <j...@mainphrame.com> 
> >> wrote:
> >>> Greetings -
> >>>
> >>> I've not yet been able to reproduce that one shining moment from Mar
> >>> 29 when live migration of privileged containers was working, under
> >>> kernel 4.4.0-15
> >>>
> >>> To recap. live container migration broke with 4.4.0-16, and is still
> >>> broken in 4.4.0-17 - but  now, instead of producing an error message,
> >>> an attempt to live migrate a container merely hangs forever. Is that
> >>> expected, or should I be seeing something more? BTW - the migration
> >>> dump log for that container hasn't been touched for a week. I'll be
> >>> glad to supply more info if this is not a known issue.
> >>>
> >>> Recent updates seem to have created a new problem. the CTs which
> >>> configure their own network settings work (aside from migration) but
> >>> none of the CTs which depend on dhcp are getting IPs. BTW I'm using a
> >>> bridge connected to my local network and dhcp, not the default lxc
> >>> dhcp server. I see the packets on the host bridge, but they don't
> >>> reach the dhcp server. I'd be curious to know if there have been any
> >>> dhcp issues since recent updates. If not, I'll need to troubleshoot
> >>> other causes, but it's odd that dhcp simply stops working for all CTs
> >>> on both lxd hosts after updates.
> >>>
> >>> Jake
> >>>
> >>>
> >>> On Wed, Mar 30, 2016 at 6:27 AM, Tycho Andersen
> >>> <tycho.ander...@canonical.com> wrote:
> >>>> On Tue, Mar 29, 2016 at 11:17:26PM -0700, jjs - mainphrame wrote:
> >>>>> Well, I've found some interesting things here today. I created a couple 
> >>>>> of
> >>>>> privileged xenial containers, and sure enough, I was able to live 
> >>>>> migrate
> >>>>> them back and forth between the 2 lxd hosts.
> >>>>>
> >>>>> So far, so good.
> >>>>>
> >>>>> Then I did an apt upgrade - among the changes was a kernel change from
> >>>>>  4.4.0-15 to 4.4.0-16 - and live migration stopped working.
> >>>>>
> >>>>> Here are the failure messages that resulted from attempting the very 
> >>>>> same
> >>>>> live migrations that worked before the upgrade and reboot into 4.4.0-16:
> >>>>>
> >>>>> root@raskolnikov:~# lxc move akira lxd2:
> >>>>> error: Error transferring container data: checkpoint failed:

Re: [lxc-users] LXD 2 rc5: Cannot download images

2016-04-04 Thread Tycho Andersen
On Fri, Mar 25, 2016 at 02:16:21PM +, Genevski, Pavel wrote:
> Hi,
> 
> Using rc5 I am finding it impossible to launch any images (I am behind a 
> proxy). It used to work fine in v0.8.
> 
> root@XXX:~# lxc image list images:
> error: Get https://images.linuxcontainers.org:8443/1.0/images?recursion=1: 
> x509: certificate signed by unknown authority
> 
> Tried exporting the certificate from Firefox, copying it into 
> /usr/local/share/ca-certificates  and running update-ca-certificates (said 1 
> was added). It did not help.
> 
> For the Ubuntu images repo I am getting a different error
> root@XXX:~# lxc image copy ubuntu:14.04 local:
> error: Get https://cloud-images.ubuntu.com/releases/streams/v1/index.json: 
> Unable to connect to: cloud-images.ubuntu.com:443
> 
> Both of these URL work fine with wget, which is configured to work with our 
> proxy (.wgetrc has use_proxy=yes and http_proxy=our proxy). Also http_proxy 
> and https_proxy env variables are properly configured.
> 
> Is there anything else we poor users-behind-proxy can do about this?

Do you have an old ~/.config/lxc/servercerts/images.crt present? If
you remove that does it work?

Tycho

> Best,
> Pavel

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxc progress and a few questions

2016-03-30 Thread Tycho Andersen
On Tue, Mar 29, 2016 at 11:17:26PM -0700, jjs - mainphrame wrote:
> Well, I've found some interesting things here today. I created a couple of
> privileged xenial containers, and sure enough, I was able to live migrate
> them back and forth between the 2 lxd hosts.
> 
> So far, so good.
> 
> Then I did an apt upgrade - among the changes was a kernel change from
>  4.4.0-15 to 4.4.0-16 - and live migration stopped working.
> 
> Here are the failure messages that resulted from attempting the very same
> live migrations that worked before the upgrade and reboot into 4.4.0-16:
> 
> root@raskolnikov:~# lxc move akira lxd2:
> error: Error transferring container data: checkpoint failed:
> (00.092234) Error (mount.c:740): mnt: 83:./sys/fs/cgroup/devices doesn't
> have a proper root mount
> (00.098187) Error (cr-dump.c:1600): Dumping FAILED.
> 
> 
> root@ronnie:~# lxc move third lxd:
> error: Error transferring container data: checkpoint failed:
> (00.076107) Error (mount.c:740): mnt: 326:./sys/fs/cgroup/perf_event
> doesn't have a proper root mount
> (00.080388) Error (cr-dump.c:1600): Dumping FAILED.

Yep, this is a known issue with -16. We need both a kernel patch and a
patch to CRIU before it will start working again. I have a branch at:

https://github.com/tych0/criu/tree/cgroup-root-mount

which should work if you want to keep playing with it, but hopefully
we'll have the situation sorted out in the next few days.

Tycho

> Jake
> 
> PS - Thanks for the html mail heads-up - I've been using google mail
> services for this domain. I'll have to look into the config options, and
> see if I can do the needful.

> 
> On Tue, Mar 29, 2016 at 12:45 PM, Andrey Repin  wrote:
> 
> > Greetings, jjs - mainphrame!
> >
> > >> On Mon, Mar 28, 2016 at 08:47:24PM -0700, jjs - mainphrame wrote:
> >  >>> I've looked at ct migration between 2 ubuntu 16.04 hosts today, and
> > had
> >  >>> some interesting problems;  I find that migration of stopped
> > containers
> >  >>> works fairly reliably; but live migration, well, it transfers a lot of
> >  >>> data, then exits with a failure message. I can then move the same
> >  >>> container, stopped, with no problem.
> >  >>>
> >  >>> The error is the same every time, a failure of "mkdtemp" -
> > >>
> > >>  It looks like your host /tmp isn't writable by the uid map that the
> > >>  container is being restored as?
> >
> >
> > > Which is odd, since /tmp has 1777 perms on both hosts, so I don't see how
> > > it could be a permissions problem. Surely the default apparmor profile is
> > > not the cause? You did give me a new idea though, and I'll set up a test
> > > with privileged containers for comparison. Is there a switch to enable
> > verbose logging?
> >
> > I've run into the same issue once. I stumbled over it for nearly a month,
> > falsely blaming LXC.
> > Recreating a container's rootfs from scratch resolved the issue.
> > I know not of what caused it to begin with, must've been some kind of
> > glitch.
> >
> > P.S.
> > It would be great if you can configure your mail client to not use HTML
> > format
> > for lists.
> >
> >
> > --
> > With best regards,
> > Andrey Repin
> > Tuesday, March 29, 2016 22:43:04
> >
> > Sorry for my terrible english...
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> >

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


Re: [lxc-users] lxc progress and a few questions

2016-03-29 Thread Tycho Andersen
On Tue, Mar 29, 2016 at 09:30:19AM -0700, jjs - mainphrame wrote:
> On Tue, Mar 29, 2016 at 7:18 AM, Tycho Andersen <
> tycho.ander...@canonical.com> wrote:
> 
> > On Mon, Mar 28, 2016 at 08:47:24PM -0700, jjs - mainphrame wrote:
> > > I've looked at ct migration between 2 ubuntu 16.04 hosts today, and had
> > > some interesting problems;  I find that migration of stopped containers
> > > works fairly reliably; but live migration, well, it transfers a lot of
> > > data, then exits with a failure message. I can then move the same
> > > container, stopped, with no problem.
> > >
> > > The error is the same every time, a failure of "mkdtemp" -
> >
> > It looks like your host /tmp isn't writable by the uid map that the
> > container is being restored as?
> >
> 
> Which is odd, since /tmp has 1777 perms on both hosts, so I don't see how
> it could be a permissions problem. Surely the default apparmor profile is
> not the cause? You did give me a new idea though, and I'll set up a test
> with privileged containers for comparison. Is there a switch to enable
> verbose logging?

It already is enabled, you can find the full logs in
/var/log/lxd/$container/migration_*

Perhaps the pwd of the CRIU task is what's broken instead, since CRIU
isn't supplying a full mkdtemp template. I'll have a deeper look in a
bit.
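
For context, mkdtemp(3) resolves a bare template such as "crtools-proc.XXXXXX"
against the calling process's current working directory, so it is that
directory's writability for the restored uid that matters, not /tmp's. A tiny
illustration (hypothetical example, not CRIU code):

    /* A relative mkdtemp() template is created in the current working
     * directory; if that directory isn't writable by our uid, the call
     * fails with "Permission denied", much like the restore error above.
     * The chdir() target is just an example.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char tmpl[] = "crtools-proc.XXXXXX";   /* relative template, as in the log */

        if (chdir("/") != 0)   /* "/" is typically not writable for non-root */
            perror("chdir");

        if (mkdtemp(tmpl) == NULL) {
            perror("mkdtemp");   /* expect EACCES when run unprivileged */
            return 1;
        }

        printf("created %s\n", tmpl);
        rmdir(tmpl);
        return 0;
    }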

Tycho

> 
> > >
> > > root@ronnie:~# lxc move third lxd:
> > > error: Error transferring container data: restore failed:
> > > (00.033172)  1: Error (cr-restore.c:1489): mkdtemp failed
> > > crtools-proc.x9p5OH: Permission denied
> > > (00.060072) Error (cr-restore.c:1352): 9188 killed by signal 9
> > > (00.117126) Error (cr-restore.c:2182): Restoring FAILED.
> > >
> > > root@raskolnikov:~# lxc move third lxd2:
> > > error: Error transferring container data: restore failed:
> > > (00.039099)  1: Error (cr-restore.c:1489): mkdtemp failed
> > > crtools-proc.a3U2t5: Permission denied
> > > (00.063015) Error (cr-restore.c:1352): 1771 killed by signal 9
> > > (00.115261) Error (cr-restore.c:2182): Restoring FAILED.
> > >
> > > root@ronnie:~# lxc move third lxd:
> > > error: Error transferring container data: restore failed:
> > > (00.034542)  1: Error (cr-restore.c:1489): mkdtemp failed
> > > crtools-proc.gee5YS: Permission denied
> > > (00.059955) Error (cr-restore.c:1352): 9274 killed by signal 9
> > > (00.109272) Error (cr-restore.c:2182): Restoring FAILED.
> > >
> > >
> > > Thanks for any pointers you can provide -
> > >
> > > PS  - on a side note, I'd originally set up the new box with a separate
> > > /var partition on btrfs. As a result, "criu check" would die instantly
> > with
> > > a segmentation error. After putting /var back on / (ext4) criu check
> > > produces the message "Looks good."
> >
> > Hmm. Can you get the backtrace?
> >
> 
> Tycho
> >
> 
> Sorry to say, I moved on quickly and the btrfs filesystem is no more, but I
> can recreate that scenario again later.
> 
> 
> Regards,
> 
> Jake
> 
> 
> 
> >
> > > Jake
> > >
> > >
> > >
> > > On Mon, Mar 28, 2016 at 7:47 AM, Tycho Andersen <
> > > tycho.ander...@canonical.com> wrote:
> > >
> > > > On Sun, Mar 27, 2016 at 09:22:44PM -0700, jjs - mainphrame wrote:
> > > > > You've had some success with live migrations? At any rate, I'm
> > upgrading
> > > > my
> > > > > ubuntu 15.10 test box to 16.04 so that I'll have 2 of them. I'll
> > give it
> > > > a
> > > > > whirl.
> > > >
> > > > Yes, there are still various unsupported kernel features, but it
> > > > should work out of the box for a modern stock xenial image on a xenial
> > > > host. There is one more xenial kernel patch which will eventually be
> > > > released that will break things, but I'm in the process of upstreaming
> > > > some CRIU patches to handle that case, and we'll distro patch those
> > > > when they're ready.
> > > >
> > > > Tycho
> > > >
> > > > > On Sun, Mar 27, 2016 at 9:20 PM, Fajar A. Nugraha <l...@fajar.net>
> > > > wrote:
> > > > >
> > > > > > On Sun, Mar 27, 2016 at 11:31 PM, jjs - mainphrame <
> > j...@mainphrame.com
> > > > >
> > > > > > wrote:
> > > > > > > The 2nd lin

Re: [lxc-users] lxc progress and a few questions

2016-03-29 Thread Tycho Andersen
On Mon, Mar 28, 2016 at 08:47:24PM -0700, jjs - mainphrame wrote:
> I've looked at ct migration between 2 ubuntu 16.04 hosts today, and had
> some interesting problems;  I find that migration of stopped containers
> works fairly reliably; but live migration, well, it transfers a lot of
> data, then exits with a failure message. I can then move the same
> container, stopped, with no problem.
> 
> The error is the same every time, a failure of "mkdtemp" -

It looks like your host /tmp isn't writable by the uid map that the
container is being restored as?

> 
> root@ronnie:~# lxc move third lxd:
> error: Error transferring container data: restore failed:
> (00.033172)  1: Error (cr-restore.c:1489): mkdtemp failed
> crtools-proc.x9p5OH: Permission denied
> (00.060072) Error (cr-restore.c:1352): 9188 killed by signal 9
> (00.117126) Error (cr-restore.c:2182): Restoring FAILED.
> 
> root@raskolnikov:~# lxc move third lxd2:
> error: Error transferring container data: restore failed:
> (00.039099)  1: Error (cr-restore.c:1489): mkdtemp failed
> crtools-proc.a3U2t5: Permission denied
> (00.063015) Error (cr-restore.c:1352): 1771 killed by signal 9
> (00.115261) Error (cr-restore.c:2182): Restoring FAILED.
> 
> root@ronnie:~# lxc move third lxd:
> error: Error transferring container data: restore failed:
> (00.034542)  1: Error (cr-restore.c:1489): mkdtemp failed
> crtools-proc.gee5YS: Permission denied
> (00.059955) Error (cr-restore.c:1352): 9274 killed by signal 9
> (00.109272) Error (cr-restore.c:2182): Restoring FAILED.
> 
> 
> Thanks for any pointers you can provide -
> 
> PS  - on a side note, I'd originally set up the new box with a separate
> /var partition on btrfs. As a result, "criu check" would die instantly with
> a segmentation error. After putting /var back on / (ext4) criu check
> produces the message "Looks good."

Hmm. Can you get the backtrace?

Tycho

> Jake
> 
> 
> 
> On Mon, Mar 28, 2016 at 7:47 AM, Tycho Andersen <
> tycho.ander...@canonical.com> wrote:
> 
> > On Sun, Mar 27, 2016 at 09:22:44PM -0700, jjs - mainphrame wrote:
> > > You've had some success with live migrations? At any rate, I'm upgrading
> > my
> > > ubuntu 15.10 test box to 16.04 so that I'll have 2 of them. I'll give it
> > a
> > > whirl.
> >
> > Yes, there are still various unsupported kernel features, but it
> > should work out of the box for a modern stock xenial image on a xenial
> > host. There is one more xenial kernel patch which will eventually be
> > released that will break things, but I'm in the process of upstreaming
> > some CRIU patches to handle that case, and we'll distro patch those
> > when they're ready.
> >
> > Tycho
> >
> > > On Sun, Mar 27, 2016 at 9:20 PM, Fajar A. Nugraha <l...@fajar.net>
> > wrote:
> > >
> > > > On Sun, Mar 27, 2016 at 11:31 PM, jjs - mainphrame <j...@mainphrame.com
> > >
> > > > wrote:
> > > > > The 2nd link you sent seems to indicate that
> > > > > live migration wants to work, but I haven't been able to find any
> > reports
> > > > > from normal users in the field who've actually succeeded with live
> > > > > migration. if I've missed something, please let me know.
> > > >
> > > > My best advice is to try yourself with latest 16.04 daily.
> > > >
> > > > I've had both success and failure with it.
> > > >
> > > > --
> > > > Fajar
> > > > ___
> > > > lxc-users mailing list
> > > > lxc-users@lists.linuxcontainers.org
> > > > http://lists.linuxcontainers.org/listinfo/lxc-users
> > > >
> >
> > > ___
> > > lxc-users mailing list
> > > lxc-users@lists.linuxcontainers.org
> > > http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> >

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Error getting images behind a proxy

2016-03-28 Thread Tycho Andersen
On Mon, Mar 28, 2016 at 06:09:29AM +, Lago Gonzalez, Diego wrote:
> 
> Same error setting https_proxy as well as http_proxy.
> 
> user@box ~ $ sudo -E lxc launch images:centos/6/amd64 my-centos
> Creating my-centos
> error: Get https://images.linuxcontainers.org/1.0/images/centos/6/amd64: 
> remote error: handshake failure
> 
> Note: now using lxc 2.0.0.rc6 after last update.

Note that the server is the one doing the image downloading, so that's
the thing that needs to have the right environment set.

In any case, there are now also server configuration keys to do this:

lxc config set core.proxy_https ...
lxc config set core.proxy_http ...
lxc config set core.proxy_ignore_hosts ...
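
For illustration (the proxy URL and hostname below are just placeholders), that might look like:

lxc config set core.proxy_http http://proxy.example.com:3128
lxc config set core.proxy_https http://proxy.example.com:3128
lxc config set core.proxy_ignore_hosts images.internal.example.com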

Tycho

> --
> Diego Lago
> 
> From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On behalf
> of Peter Roberts
> Sent: Thursday, 24 March 2016 21:06
> To: LXC users mailing-list
> Subject: Re: [lxc-users] Error getting images behind a proxy
> 
> Have you tried setting https_proxy as well as http_proxy?
> 
> Peter
> 
> On 23 March 2016 at 15:34, Lago Gonzalez, Diego 
> > wrote:
> 
> Hello,
> 
> I've installed lxc/lxd trought its official PPA (ppa:ubuntu-lxc/lxd-stable) 
> and I have version 2.0.0.rc5 in a Ubuntu MATE 15.10 (amd64). When I try to 
> download an image (with command `sudo -E lxc launch images:centos/6/amd64 
> my-centos`) I always get the same error:
> 
> error: Get https://images.linuxcontainers.org/1.0/images/centos/6/amd64: 
> remote error: handshake failure
> 
> Full debug output of the command is:
> 
> user@host ~ $ sudo -E lxc launch --debug images:centos/6/amd64 my-centos
> DBUG[03-23|16:29:16] Raw response: 
> {"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"storage.zfs_pool_name":"lxd-pool"},"environment":{"addresses":[],"architectures":["x86_64","i686"],"certificate":"-BEGIN
>  
> CERTIFICATE-\nMIIFwDCCA6igAwIBAgIRAKHLionIKuqLPMxSzZrE59kwDQYJKoZIhvcNAQELBQAw\nMzEcMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzETMBEGA1UEAwwKcm9vdEBk\ndmJveDAeFw0xNjAzMjExMTUwNTBaFw0yNjAzMTkxMTUwNTBaMDMxHDAaBgNVBAoT\nE2xpbnV4Y29udGFpbmVycy5vcmcxEzARBgNVBAMMCnJvb3RAZHZib3gwggIiMA0G\nCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCnuiQIbMmbQWyLgaokDlcdDX/hzoNJ\nu6xSKlHskMhjoJDgzJZ+s2ooFMlOjinVXQMiTb4thY41z3BaL1JhD83E+YGlMH5c\n1NCzeJT0Zhqrb+kyDKOdLueC3sekLhILdoXBj+t4feyUs5yo3kWqkzKG5ejkpPVo\nhMG9/knXdnz9I2lNk6DuYzxG3OCvGN+8+f7HAJC43lGtsQoee3vUfNr9To/K1CxZ\nFkDBJUyiFcIjIMmPud8O4EAlxaR1hOXSr11Z19b0IE6qYtoKnBGZ2t+Vu76X+s//\nTC5XyVcLnXQCMbbU7GfTmeeNMzQVYpokZclPUO7w4GSHotqv8sUatj+O061KBtCV\nV/tweqrDLvMlkOd40BgKnn8lEuwoxHtSeHquVSYDSXmbHk0xT+X/Jo2bWIzg6jls\nw2s1vS8B71kz78to7GjhcJ4brESjxrClhMZg99O4WO4Bj7mkarvAQwh4CindI0UY\n1TBg0IK6bFm4wm0YhaheJ+2mPn/1PinLu6UrNHD72J9I8O+c92ISK8aC209AWmcH\nuUjHtMdLWMiU/dGcMiiRJSzYIkjNmWKB0VfV9CJFYeAUo7bZUXuxoj28Iw+/JKyc\nDlu0SSfpleHKNaU1JLIsXe/F2cyraxRCQSgzOLsyyJ0oNM+YFvBM6NoiRfcuRzdN\ngttRhnAqMRrcywIDAQABo4HOMIHLMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAK\nBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMIGVBgNVHREEgY0wgYqCBWR2Ym94ggwx\nMC4wLjIuMTUvMjSCG2ZlODA6OmEwMDoyN2ZmOmZlZDY6NjI2OC82NIIOMTcyLjE3\nLjQyLjEvMTaCHGZlODA6OjQwYTE6NDlmZjpmZWJhOjhiODYvNjSCCzEwLjAuMy4x\nLzI0ghtmZTgwOjoxNGRkOjdmZjpmZTUwOjE5ZTkvNjQwDQYJKoZIhvcNAQELBQAD\nggIBAE7u9y1fkHPNluuaO4IZNGvl0NkOUpsVVQPL2L+AtAjukdb2c2DYsiOLir+x\nKmTkhu6jgUz87Ht/LSQVH3gXAgMRZhDSAhs9UA+t5O7MDQaRuvWEmzt8iw6/xQ6i\nXqKFUq44frNxyfLlTjJK6sphxcHT8gVxbsUxx2HFU2BrFCxlG6QoKIyD0Z+GHTkR\neZGPW+g8gQsWA0UNN/pNrN/cBe15q6eQBio8g4fjtbp22b/RQFFU+h/FvJifzuVs\nkpVenN2J9ox1EZzXy8/gyStjAbDWQBGJVlDnw10o/CgWbuMCVovwejOxUbXbgyf0\nKGVJyFHtFOXxRpt7d0ZVsyyknNyDeYiNyMDeTiHuh/Mxv0fFEFmvFwgKwhDic6+d\ntFYc3cv/E81n7diMwm2XpCIC2y94ow4ncQvDTkaWvUrSUjTms3nF6L4DL/qUBLgW\npB5PzeIZcZ9FdeUOJzg07OrkdJdwmwV4mSUGvxM+bhBsr7YfwJ+eOUsnYFSb4OPu\nIITkbQhaduUNFO3N2YJRx26gwbrJ+/IJ1rAn5ombVqMsyDjqoTKS4asKmelIpYO1\nisYfBrFNaB+9JtFyFiBg7Zw66Wic5tdPNn3PK/iVVoMp6w3IT+QDFDc1ZCs5Wm7x\nBt/GNd10M7w3N8K1BC7uEH4vAAwk9+iUsLbQHdLmWUW6gYDl\n-END
>  
> CERTIFICATE-\n","driver":"lxc","driver_version":"2.0.0.rc12","kernel":"Linux","kernel_architecture":"x86_64","kernel_version":"4.2.0-27-generic","server":"lxd","server_pid":6383,"server_version":"2.0.0.rc5","storage":"zfs","storage_version":"5"},"public":false}}
> 
> DBUG[03-23|16:29:16] Raw response: 
> {"type":"sync","status":"Success","status_code":200,"metadata":{"api_extensions":[],"api_status":"development","api_version":"1.0","auth":"trusted","config":{"storage.zfs_pool_name":"lxd-pool"},"environment":{"addresses":[],"architectures":["x86_64","i686"],"certificate":"-BEGIN
>  
> 

Re: [lxc-users] lxc progress and a few questions

2016-03-28 Thread Tycho Andersen
On Sun, Mar 27, 2016 at 09:22:44PM -0700, jjs - mainphrame wrote:
> You've had some success with live migrations? At any rate, I'm upgrading my
> ubuntu 15.10 test box to 16.04 so that I'll have 2 of them. I'll give it a
> whirl.

Yes, there are still various unsupported kernel features, but it
should work out of the box for a modern stock xenial image on a xenial
host. There is one more xenial kernel patch which will eventually be
released that will break things, but I'm in the process of upstreaming
some CRIU patches to handle that case, and we'll distro patch those
when they're ready.

Tycho

> On Sun, Mar 27, 2016 at 9:20 PM, Fajar A. Nugraha  wrote:
> 
> > On Sun, Mar 27, 2016 at 11:31 PM, jjs - mainphrame 
> > wrote:
> > > The 2nd link you sent seems to indicate that
> > > live migration wants to work, but I haven't been able to find any reports
> > > from normal users in the field who've actually succeeded with live
> > > migration. if I've missed something, please let me know.
> >
> > My best advice is to try yourself with latest 16.04 daily.
> >
> > I've had both success and failure with it.
> >
> > --
> > Fajar
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> >

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Newb questions

2016-03-19 Thread Tycho Andersen
On Wed, Mar 16, 2016 at 02:06:17AM +, Will Dennis wrote:
>
> root@xenial-02:~# lxc list all
> +-----------+----------+---------+-------------------+------+------------+-----------+
> |   HOST    |   NAME   |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
> +-----------+----------+---------+-------------------+------+------------+-----------+
> | xenial-01 | u1404-03 | RUNNING | 10.0.3.134 (eth0) |      | PERSISTENT | 0         |
> +-----------+----------+---------+-------------------+------+------------+-----------+
> | xenial-02 | u1404-01 | RUNNING | 10.0.3.221 (eth0) |      | PERSISTENT | 0         |
> +-----------+----------+---------+-------------------+------+------------+-----------+
> | xenial-02 | u1404-02 | RUNNING | 10.0.3.75 (eth0)  |      | PERSISTENT | 0         |
> +-----------+----------+---------+-------------------+------+------------+-----------+
> 
> So do you have to query the hosts one by one, or is there something to give 
> you a holistic view of all your container hosts and containers on them?

No, you need to query the hosts one by one.
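
A small shell loop over your remotes gets you most of the way there, though; for example, assuming the two hosts are added as remotes named xenial-01 and xenial-02:

for r in xenial-01 xenial-02; do echo "== $r =="; lxc list $r: ; done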

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD_LVM_LVSIZE envvar usage

2016-03-14 Thread Tycho Andersen
On Mon, Mar 14, 2016 at 04:55:21PM +0100, Fabrizio Furnari wrote:
> I've also tried to export it into the /etc/init.d/lxd script in the same
> way, but seems not to apply...Is it possible that applies only during the
> LV thinpool creation time (LXDPool)?

Based on my read of the code it should be applied on every `lvcreate`.
You might check:

strings /proc/`pidof lxd`/environ

to make sure it actually got set, though.
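
If it isn't in there, the variable needs to be injected wherever your init system starts the daemon. On a 14.04 host with upstart, one possible sketch (this assumes the PPA package ships an upstart job, i.e. that /etc/init/lxd.conf exists) is:

echo 'env LXD_LVM_LVSIZE=1GiB' | sudo tee /etc/init/lxd.override
sudo service lxd restart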

Tycho

> On Mon, Mar 14, 2016 at 4:51 PM, Tycho Andersen <
> tycho.ander...@canonical.com> wrote:
> 
> > On Mon, Mar 14, 2016 at 03:14:31PM +0100, Fabrizio Furnari wrote:
> > > Hi all,
> > > I've just seen that in the latest RC developers added the possibility to
> > > specify the LV size when creating new containers.
> > > I've updated to 2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1 on my Ubuntu box
> > but
> > > when I try for example:
> > >
> > > $ export LXD_LVM_LVSIZE="1GiB"
> > > $ lxc launch 0d07f11f3f2a testcontainer
> > > $ lvdisplay
> > >
> > /dev/vg0/0d07f11f3f2a0804f501967d278d72a8122b1ec49f01aae4483a41fd9fb546f3 |
> > > grep Size
> > >
> > >   LV Size10.00 GiB
> > >
> > > Still uses the default value. What I have to do to set it?
> >
> > It needs to be set in LXD's environment, not the client's.
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD_LVM_LVSIZE envvar usage

2016-03-14 Thread Tycho Andersen
On Mon, Mar 14, 2016 at 03:14:31PM +0100, Fabrizio Furnari wrote:
> Hi all,
> I've just seen that in the latest RC developers added the possibility to
> specify the LV size when creating new containers.
> I've updated to 2.0.0~rc3-0ubuntu1~ubuntu14.04.1~ppa1 on my Ubuntu box but
> when I try for example:
> 
> $ export LXD_LVM_LVSIZE="1GiB"
> $ lxc launch 0d07f11f3f2a testcontainer
> $ lvdisplay
> /dev/vg0/0d07f11f3f2a0804f501967d278d72a8122b1ec49f01aae4483a41fd9fb546f3 |
> grep Size
> 
>   LV Size10.00 GiB
> 
> Still uses the default value. What I have to do to set it?

It needs to be set in LXD's environment, not the client's.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] DRI in LXD

2016-03-01 Thread Tycho Andersen
On Tue, Mar 01, 2016 at 08:58:16AM -0500, Pete Osborne wrote:
> Hi,
> 
> I've been using LXD for a few months now and really like how it's shaping
> up. I would like to know if it's possible to run opengl applications within
> an LXD container sharing the hosts' X server? The primary function is to
> build and test Qt5 based applications that use open gl.
> 
> I've found several guides on how to set it up in an LXC container but i'd
> prefer to use LXD. I'm using a privielged container, and I attempted
> creating all the device files (/dev/dri/card0, /dev/video0, dev/nvidia*,
> /dev/fb0, dev/tty* ) from within my container using mknod, but I think I
> need some way of using a bind mount.
> 
> Can anyone provide some advice?

You probably want LXD's device configuration, it is detailed here:

https://github.com/lxc/lxd/blob/master/specs/configuration.md#devices-configuration
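
As a rough sketch only (the container name and device names here are made up, and the exact set of nodes depends on your GPU), passing the DRI nodes into a container called qt5 would look something like:

lxc config device add qt5 card0 unix-char path=/dev/dri/card0
lxc config device add qt5 nvidia0 unix-char path=/dev/nvidia0
lxc config device add qt5 nvidiactl unix-char path=/dev/nvidiactl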

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD default NS mappings

2016-01-28 Thread Tycho Andersen
On Wed, Jan 27, 2016 at 05:09:52PM +0100, david.an...@bli.uzh.ch wrote:
> Hi
> 
> I have noticed that LXD uses some UIDs/GIDs by default I haven't set up and 
> which aren't represented in /etc/sub[ug]id files.
> Interestingly, they are different from instance to instance: one one root is 
> mapped on 1'000'000 (not 100'000), on another it's 265'536.
> Now when I copy the rootfs of a container offline between different LXD or 
> LXC instances according to 
> http://stackoverflow.com/questions/33377916/migrating-lxc-to-lxd doesn't the 
> UID/GID mapping need to be the same?

No, the maps can be different.

> If not, why not?

Because LXD uidshifts the filesystems for you when you do `lxc copy`.
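
In other words, assuming both daemons are added as remotes (say lxd1 and lxd2), a plain:

lxc copy lxd1:mycontainer lxd2:mycontainer

arrives with ownership shifted into whatever map the target uses; it's only hand-rolled rootfs copies (rsync, tar) that need the ids adjusted to match the destination's map by hand.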

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] compile lxd

2016-01-12 Thread Tycho Andersen
On Tue, Jan 12, 2016 at 01:25:27PM +, mattias jonsson wrote:
> How to do it?
> There is no configure

It's a go program, so there is no autoconf. See the readme:

https://github.com/lxc/lxd#building-from-source

> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc snapshot ... --stateful - "read-only file system"

2016-01-09 Thread Tycho Andersen
On Sat, Jan 09, 2016 at 06:58:16PM +0900, Tomasz Chmielewski wrote:
> I'm trying to do a stateful snapshot - unfortunately, it fails:
> 
> # lxc snapshot odoo08 "test"  --stateful
> error: mkdir /var/lib/lxd/snapshots/odoo08/test/state: read-only file system

Looks like https://github.com/lxc/lxd/issues/1485, are you on btrfs as
well?
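
A quick way to check, for what it's worth:

df -T /var/lib/lxd/snapshots

which prints the filesystem type backing the snapshots directory.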

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Help, containers gone after upgrade to 0.24

2015-12-17 Thread Tycho Andersen
On Thu, Dec 17, 2015 at 09:00:08AM +, Jamie Brown wrote:
> OK – so I’ve now discovered the database file is simply sqlite – that’s a 
> bonus.
> 
> Running SELECT * FROM containers shows the obvious problem that the 
> “architecture” column for the older containers is set to 0, whereas the ones 
> created in a later release (I believe 0.23) were set as 2.
> 
> I’m not sure if the solution is quite as simple as setting this value to 2, 
> but I guess I can at least delete the entries from the database myself to 
> tidy things up.

Yes, I think you can just set the value to 2.
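
Something along these lines should do it (the path assumes a default install; take a copy of the database first just in case):

sudo cp /var/lib/lxd/lxd.db /var/lib/lxd/lxd.db.bak
sudo sqlite3 /var/lib/lxd/lxd.db "UPDATE containers SET architecture=2 WHERE architecture=0;"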

It looks like this is probably a bug in the upgrade process (hence no
warning in the release notes :); we should do something like set every
architecture that was previously UNKNOWN to whatever the current arch
is in order to allow them to start.

> I also suppose the architecture value matches a value in the following 
> enumeration;
> 
> const (
> ARCH_UNKNOWN = 0
> ARCH_32BIT_INTEL_X86 = 1
> ARCH_64BIT_INTEL_X86 = 2
> ARCH_32BIT_ARMV7_LITTLE_ENDIAN   = 3
> ARCH_64BIT_ARMV8_LITTLE_ENDIAN   = 4
> ARCH_32BIT_POWERPC_BIG_ENDIAN= 5
> ARCH_64BIT_POWERPC_BIG_ENDIAN= 6
> ARCH_64BIT_POWERPC_LITTLE_ENDIAN = 7
> ARCH_64BIT_S390_BIG_ENDIAN   = 8
> )
> 
> Jamie
> 
> From: Jamie Brown >
> Date: Thursday, 17 December 2015 at 08:33
> To: LXC users mailing-list 
> >
> Subject: Re: Help, containers gone after upgrade to 0.24
> 
> I’ve (at least) managed to get my containers back up and running by creating 
> new ones and copying the file stores.
> 
> However, the annoying thing is I can’t even delete the old containers even 
> though they don’t appear in the list. Any command I try to issue just returns 
> the architecture not supported message. But it won’t let me re-use the 
> container name because it (correctly) believes it already exists.
> 
> So for now I’m stuck with all my containers being called 1.
> 
> Whilst I appreciate the frequent releases and that we’re still <1.0, a 
> warning in the release notes about this would have been very helpful!
> 
> Is there a way to somehow force LXC to delete the incompatible containers? 
> This is one of the major issues with the mysterious “lxd.db” file, unless I’m 
> mistaken there is no easy way to open the database and tidy things up when 
> things don’t quite go to plan. I’m guessing if I could read and edit this 
> file, I probably could have manually set the architecture for the containers 
> and got them back up and running?
> 
> Cheers,
> 
> Jamie
> 
> From: Jamie Brown >
> Date: Thursday, 17 December 2015 at 08:05
> To: LXC users mailing-list 
> >
> Subject: Help, containers gone after upgrade to 0.24
> 
> Hello,
> 
> I’ve just upgraded to 0.24 from 0.23 and most of my containers have 
> disappeared from lxc list and are not starting up. The two that remain in the 
> list were those added most recently (probably in 0.23), the others were added 
> in an earlier release.
> 
> I guess there is a problem with backwards compatibility with a configuration.
> 
> If I run lxc config show  on a missing container, it returns;
> 
> error: Architecture isn't supported: 0
> 
> Looking at the lxc.conf in the logs directories, there are some missing 
> properties for these containers verses the newer containers, one of these 
> properties being lxc.arch. However, I don’t know how I can set these 
> properties for the missing containers.
> 
> The following properties are missing from the older lxc.conf files:
> 
>   *   Lxc.arch
>   *   Lxc.hook.pre-start
>   *   Lxc.hook.post-stop
> 
> Please help!
> 
> Many thanks,
> 
> Jamie

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Migrating LXD VMs/containers between nodes

2015-11-27 Thread Tycho Andersen
On Thu, Nov 26, 2015 at 09:50:59AM -0200, Dilvan Moreira wrote:
> On Wed, Nov 25, 2015 at 3:50 PM, Serge Hallyn 
> > The biggest complication I see is that when you ask lxd to move a
> > container,
> > lxd will want to migrate the rootfs.  I'm waiting for clarification from
> > Tycho on whether lxd will be smart enough to DTRT.
> >
> 
>Using Ceph/rbd, there is no need to migrate anything. I am not aiming to
> have live migration, so the plan is to shutdown the container and start it
> agin in another node.

Migrating containers live or non-live uses the same filesystem backing
code to do it, since it needs to migrate the filesystem in both cases.
I have a branch that makes smarter per-backend decisions when you are
using a filesystem like ZFS or BTRFS, but it still has some bugs that
need to be worked out and I haven't gotten much time to work on it
recently.

We don't have a ceph (or block) backend today, so as Serge says there
is no way for LXD to understand that it shouldn't try to move the FS
today. I think we'd be happy to accept such a backend as a patch,
though :)

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-23 Thread Tycho Andersen
Hi Jamie,

On Thu, Nov 05, 2015 at 11:39:43AM +, Jamie Brown wrote:
> Hello again,
> 
> Oddly, I've now re-installed the old server and configured it identically to 
> before (except now using RAID) and tried migrating a container back and I am 
> getting a different failure;
> 
> # lxc move host2:test host1:test
> 
> error: Error transferring container data: restore failed:
> (00.007414)  1: Error (mount.c:2030): Can't mount at ./dev/.lxd-mounts: 
> No such file or directory
> (00.026443) Error (cr-restore.c:1939): Restoring FAILED.

I just hit another related bug, and I believe:

https://github.com/lxc/lxd/pull/1340

should fix them both.

Thanks,

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LinkedIn article about LXD

2015-11-11 Thread Tycho Andersen
On Wed, Nov 11, 2015 at 09:33:30AM +, Jamie Brown wrote:
> Hi,
> 
> I wanted to share an article I posted to LinkedIn this morning about LXD. 
> 
> Being for LinkedIn, I kept it fairly abstract. It is targeted towards 
> planting a seed for decision makers, rather than an in-depth technical review 
> of the technology.
> 
> https://www.linkedin.com/pulse/finally-private-cloud-platform-sme-jamie-brown
> 
> If you any feedback or corrections, please let me know. If you think it’s any 
> good, please share with your network, like and comment!

Cool, thanks! I added it to the website here:

https://github.com/lxc/linuxcontainers.org/pull/120

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Ownership changes after container move

2015-11-10 Thread Tycho Andersen
Hi Jamie,

On Tue, Nov 10, 2015 at 09:10:22AM +, Jamie Brown wrote:
> Hi,
> 
> I’ve discovered that some file ownership changes have occurred after moving 
> stopped containers between hosts.
> 
> Prior to the move there were various user directories (e.g. “/home/jamie”) 
> with ownership set to jamie:jamie. After moving, the ownership was changed to 
> ubuntu:ubuntu.

Sounds like a pretty serious bug. Do you happen to know how to
reproduce the problem?

Thanks,

Tycho

> I discovered the issue when attempting to SSH to the moved host and was 
> prompted to enter my password as I no longer owned my authorized_keys file.
> 
> I will try to repeat this, but I can confirm it has happened on multiple 
> containers.
> 
> — Jamie

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Tycho Andersen
On Fri, Nov 06, 2015 at 08:43:33AM +, Jamie Brown wrote:
> I’ve just discovered a new failure on a different container too;
> 
> # lxc move host2:nexus host1:nexus
> error: Error transferring container data: checkpoint failed:
> (00.355457) Error (files-reg.c:422): Can't dump ghost file 
> /usr/local/sonatype-work/nexus/tmp/jar_cache5838699621686145685.tmp of 
> 1177738 size, increase limit
> (00.355477) Error (cr-dump.c:1255): Dump files (pid: 22072) failed with -1
> (00.357100) Error (cr-dump.c:1617): Dumping FAILED.

So this is actually an error because a default limit in criu is not
high enough. You can set this via the --ghost-limit in criu, but LXC
currently exposes no way to set this, although I'm hoping to add a new
API call to allow people to set stuff like this in the near future.
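
For reference, the knob looks like this in a raw criu invocation (purely illustrative; $PID would be the container's init pid, and LXD won't drive this for you today):

sudo criu dump --tree $PID --images-dir /tmp/ckpt --ghost-limit 10M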

Thanks,

Tycho

> 
> 
> 
> On 06/11/2015, 08:40, "lxc-users on behalf of Jamie Brown" 
> <lxc-users-boun...@lists.linuxcontainers.org on behalf of 
> jamie.br...@mpec.co.uk> wrote:
> 
> >Tycho,
> >
> >Thanks for your help.
> >
> >The kernels were in fact different versions, though I’m not sure how I got 
> >into that state! So they’re now both running 3.19.0.
> >
> >Now, I at least receive the same error when migrating in both directions;
> ># lxc move host2:test host1:test2
> >error: Error transferring container data: restore failed:
> >(00.008103)  1: Error (mount.c:2030): Can't mount at ./dev/.lxd-mounts: 
> >No such file or directory
> >
> ># lxc move host1:test1 host2:test1
> >error: Error transferring container data: restore failed:
> >(00.008103) 1: Error (mount.c:2030): Can't mount at ./dev/.lxd-mounts: No 
> >such file or directory
> >
> >
> >
> >
> >The backing store is the default (directory based). However, on host2 the 
> >/var/lib/lxd/containers directory is a symlink to an ext3 mount. On host1 
> >they’re on ext4, is that likely to cause any issues?
> >
> >The strange thing is, [randomly] the live move DOES succeed. I’ve definitely 
> >migrated a clean [running] container about 3 times from host2 to host1, but 
> >then when I try again with a new container it fails. This even worked before 
> >I updated the kernel. However, I can’t seem to find specific steps to 
> >replicate the successful move. I’ve never succeeded in migrating the same 
> >container back from host1 to host2 without stopping it. This is what is 
> >concerning me the most, I would expect either permanent failure or permanent 
> >success. I keep gaining false hope because the first time I migrated a 
> >container after updating the kernel it worked, so I thought, problem solved! 
> >But then I couldn’t migrate another :(
> >
> >-- Jamie
> >
> >
> >
> >05/11/2015, 16:58, "lxc-users on behalf of Tycho Andersen" 
> ><lxc-users-boun...@lists.linuxcontainers.org on behalf of 
> >tycho.ander...@canonical.com> wrote:
> >
> >>Hi Jamie,
> >>
> >>Thanks for trying it out.
> >>
> >>On Thu, Nov 05, 2015 at 11:39:43AM +, Jamie Brown wrote:
> >>> Hello again,
> >>> 
> >>> Oddly, I've now re-installed the old server and configured it identically 
> >>> to before (except now using RAID) and tried migrating a container back 
> >>> and I am getting a different failure;
> >>> 
> >>> # lxc move host2:test host1:test
> >>> 
> >>> error: Error transferring container data: restore failed:
> >>> (00.007414)  1: Error (mount.c:2030): Can't mount at 
> >>> ./dev/.lxd-mounts: No such file or directory
> >>> (00.026443) Error (cr-restore.c:1939): Restoring FAILED.
> >>> 
> >>> The container appears in the remote container list whilst moving, but 
> >>> then after failure it is deleted and it is in the STOPPED state on the 
> >>> source host.
> >>
> >>Right, the restore failed, so the container had already been stopped
> >>from the dump, so it was stopped on the target. What we should really
> >>do is leave it in a frozen state after the dump, and once the restore
> >>succeeds then we can kill it. Hopefully that's something I can
> >>implement this cycle.
> >>
> >>As for the actual error, sounds like the target LXD didn't have
> >>shmounts but the source one did. Are they using different backing
> >>stores? What version of LXD are they?
> >>
> >>> 
> >>> Here's the output from the log, not sure how much is relevant to the 
> >>> migration attempt.

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Tycho Andersen
On Mon, Nov 09, 2015 at 10:37:42AM -0500, Saint Michael wrote:
> I must assume that LXC is not ready for production yet. Am I wrong?

Yes, LXC has been used in production by many large organizations for
many years.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] 1.1.5 setproctitle bug

2015-11-09 Thread Tycho Andersen
Hello Boštjan,

On Mon, Nov 09, 2015 at 06:47:42PM +0100, Boštjan Škufca @ Teon.si wrote:
> Containers start, but this is what I am getting:
> lxc-start: utils.c: setproctitle: 1461 Invalid argument - setting cmdline
> failed
> 
> Kernel 4.2.5, on Slackware 14.1, no cgmanager or lxcfs. Is there anything
> missing?

No, this is a non-fatal error, so you're just fine. I sent a patch to
lxc-devel to turn it down to an info message at Stéphane's request
because he was worried it might freak people out, and it seems he was
right :)

If you want the fancy proctitles, then you need to enable
CONFIG_CHECKPOINT_RESTORE in your kernel.
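
A quick way to check whether the running kernel already has it is one of:

grep CONFIG_CHECKPOINT_RESTORE /boot/config-$(uname -r)
zgrep CONFIG_CHECKPOINT_RESTORE /proc/config.gz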

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Tycho Andersen
On Mon, Nov 09, 2015 at 11:57:53AM -0500, Saint Michael wrote:
> I meant LXD, not LXC. I do use LXC in production.

LXD will be ready for production in 16.04, the next LTS of Ubuntu.

For live migration, we'll have support for it in 16.04 including
migrating all the security primitives of containers. However, there
will absolutely be bits of migration that simply aren't done, so
success with it will be workload dependent. Of course I'll continue to
work to implement these cases, so the situation will continue to
improve as we go along.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Tycho Andersen
On Mon, Nov 09, 2015 at 04:32:00PM +, Jamie Brown wrote:
> Not sure why my this thread is being hijacked to ask very general questions 
> :) please don't confuse those with me ;)

Definitely not :)

> Tycho,
> 
> Thanks for the response regarding criu, I figured this may be the case from 
> inspecting the criu source, but wasn't sure how this could be configured and 
> whether there were known limitations. Considering criu has some sort of 
> caching of hard/soft links, could this potentially use a lot of RAM during 
> the snapshot phase if this limit were to be heavily increased?

I think it's mostly to avoid putting GBs of files into criu's images,
which seems like a somewhat artificial concern to me :). For LXC 2.0
(planned for 16.04) I'm going to add a new API function that will let
you configure a lot of this stuff and have a more extensible API than
->checkpoint now.

> I also wondered if you had any response to the mounting issue using ext3/ext4 
> I sent the other day?

You're talking about the .lxd-mounts failure? I've thought about it,
but I can't understand how it's happening. I've heard off-list of
several other people with the issue, though. Can you send

cat /proc//mountinfo

of the target LXD?
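
For instance, something like this on the receiving host should capture it, assuming pidof returns a single lxd pid:

sudo cat /proc/$(pidof lxd)/mountinfo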

> I'm finding it odd that I get random successful migrations, but then can't 
> replicate it. I'm always just testing with simple fresh Ubuntu containers. 
> The container that caused the criu error was an exception to this.

By criu error, you mean the ghost file size error?

Tycho

> Many thanks,
> 
> Jamie
> 
> From: lxc-users <lxc-users-boun...@lists.linuxcontainers.org> on behalf of 
> Tycho Andersen <tycho.ander...@canonical.com>
> Sent: 09 November 2015 16:00:01
> To: LXC users mailing-list
> Subject: Re: [lxc-users] LXD Live Migration
> 
> On Mon, Nov 09, 2015 at 10:37:42AM -0500, Saint Michael wrote:
> > I must assume that LXC is not ready for production yet. Am I wrong?
> 
> Yes, LXC has been used in production by many large organizations for
> many years.
> 
> Tycho
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-05 Thread Tycho Andersen
Hi Jamie,

Thanks for trying it out.

On Thu, Nov 05, 2015 at 11:39:43AM +, Jamie Brown wrote:
> Hello again,
> 
> Oddly, I've now re-installed the old server and configured it identically to 
> before (except now using RAID) and tried migrating a container back and I am 
> getting a different failure;
> 
> # lxc move host2:test host1:test
> 
> error: Error transferring container data: restore failed:
> (00.007414)  1: Error (mount.c:2030): Can't mount at ./dev/.lxd-mounts: 
> No such file or directory
> (00.026443) Error (cr-restore.c:1939): Restoring FAILED.
> 
> The container appears in the remote container list whilst moving, but then 
> after failure it is deleted and it is in the STOPPED state on the source host.

Right, the restore failed, so the container had already been stopped
from the dump, so it was stopped on the target. What we should really
do is leave it in a frozen state after the dump, and once the restore
succeeds then we can kill it. Hopefully that's something I can
implement this cycle.

As for the actual error, sounds like the target LXD didn't have
shmounts but the source one did. Are they using different backing
stores? What version of LXD are they?

> 
> Here's the output from the log, not sure how much is relevant to the 
> migration attempt.
> 
> # lxc info --show-log test
> ...
> lxc 1446723150.396 DEBUGlxc_start - start.c:__lxc_start:1210 - unknown 
> exit status for init: 9
> lxc 1446723150.396 DEBUGlxc_start - start.c:__lxc_start:1215 
> - Pushing physical nics back to host namespace
> lxc 1446723150.396 DEBUGlxc_start - start.c:__lxc_start:1218 
> - Tearing down virtual network devices used by container
> lxc 1446723150.396 WARN lxc_conf - 
> conf.c:lxc_delete_network:2939 - failed to remove interface '(null)'
> lxc 1446723150.396 INFO lxc_error - 
> error.c:lxc_error_set_and_log:55 - child <10499> ended on signal (9)
> lxc 1446723150.396 WARN lxc_conf - 
> conf.c:lxc_delete_network:2939 - failed to remove interface '(null)'
> lxc 1446723295.520 WARN lxc_cgmanager - 
> cgmanager.c:cgm_get:993 - do_cgm_get exited with error
> lxc 1446723295.522 WARN lxc_cgmanager - 
> cgmanager.c:cgm_get:993 - do_cgm_get exited with error
> 
> 
> If I try to migrate a container in the reverse direction, I get a similar 
> error;
> 
> # lxc move host1:test1 host2:test1
> error: Error transferring container data: restore failed:
> (00.001093) Error (cgroup.c:1204): cg:Can't mount controller dir 
> .criu.cgyard.aOuQtF/net_cls: No such file or directory

This is probably because the kernel on host1 is newer than the
kernel on host2 and has net_cls cgroup support where as host2's
doesn't.

Tycho

> 
> 
> 
> Any ideas?
> 
> -- Jamie
> 
> 
> 
> On 05/11/2015, 08:05, "lxc-users on behalf of Jamie Brown" 
> <lxc-users-boun...@lists.linuxcontainers.org on behalf of 
> jamie.br...@mpec.co.uk> wrote:
> 
> >Thanks Tycho, installing CRIU solved the problem;
> >
> ># apt-get install criu
> >
> >Should this package not be included as a dependency for LXD, or at least 
> >provide a meaningful warning if the package isn’t available? It seems odd to 
> >advertise out-the-box live migration in LXD, but then have to install 
> >another package to provide it.
> >
> >Is this in the documentation anywhere?
> >
> >Thanks again.
> >
> >-- Jamie
> >
> >
> >
> >
> >On 04/11/2015, 16:47, "lxc-users on behalf of Tycho Andersen" 
> ><lxc-users-boun...@lists.linuxcontainers.org on behalf of 
> >tycho.ander...@canonical.com> wrote:
> >
> >>On Wed, Nov 04, 2015 at 01:48:44PM +, Jamie Brown wrote:
> >>> Greetings all.
> >>> 
> >>> I’ve been using LXD in a development environment for a few weeks and so 
> >>> far very impressed, 
> >>> I can see a really bright future for this technology!
> >>> 
> >>> However, today I thought I’d try out the live migration, based on the 
> >>> following guide;
> >>> https://insights.ubuntu.com/2015/05/06/live-migration-in-lxd/
> >>> 
> >>> I believe I have followed the steps correctly, however when I run the 
> >>> move command, I 
> >>> receive the following output;
> >>> 
> >>> # lxc move host1:test host2:test
> >>> error: Error transferring container data: checkpoint failed:
> >>> Problem accessing CRIU log: open /tmp/lxd_migration_899480871/dump.log: 
> >>> no such file or direc

Re: [lxc-users] LXD Live Migration

2015-11-05 Thread Tycho Andersen
On Thu, Nov 05, 2015 at 08:05:03AM +, Jamie Brown wrote:
> Thanks Tycho, installing CRIU solved the problem;
> 
> # apt-get install criu
> 
> Should this package not be included as a dependency for LXD, or at least 
> provide a meaningful warning if the package isn’t available?

criu is listed in Suggests:, but we can't list it in Recommends:
because it's not also in main (but LXD is).

I did send a branch to render a better error message based exactly on
this thread yesterday, so the next version of LXD will behave a little
nicer:

https://github.com/lxc/lxd/pull/1270

> It seems odd to advertise out-the-box live migration in LXD, but then have to 
> install another package to provide it.
> 
> Is this in the documentation anywhere?

Probably not. I'll see about adding it.

Thanks,
Tycho

> Thanks again.
> 
> -- Jamie
> 
> 
> 
> 
> On 04/11/2015, 16:47, "lxc-users on behalf of Tycho Andersen" 
> <lxc-users-boun...@lists.linuxcontainers.org on behalf of 
> tycho.ander...@canonical.com> wrote:
> 
> >On Wed, Nov 04, 2015 at 01:48:44PM +, Jamie Brown wrote:
> >> Greetings all.
> >> 
> >> I’ve been using LXD in a development environment for a few weeks and so 
> >> far very impressed, 
> >> I can see a really bright future for this technology!
> >> 
> >> However, today I thought I’d try out the live migration, based on the 
> >> following guide;
> >> https://insights.ubuntu.com/2015/05/06/live-migration-in-lxd/
> >> 
> >> I believe I have followed the steps correctly, however when I run the move 
> >> command, I 
> >> receive the following output;
> >> 
> >> # lxc move host1:test host2:test
> >> error: Error transferring container data: checkpoint failed:
> >> Problem accessing CRIU log: open /tmp/lxd_migration_899480871/dump.log: no 
> >> such file or directory
> >> 
> >> The file it is referring to above doesn't exist. However, there are other 
> >> lxd_migration_* 
> >> directories with different numbers appended. Each time I attempt the 
> >> migration a new directory 
> >> is created (e.g. lxd_migration_192965652), but there is no dump.log in 
> >> there.
> >> 
> >> The migration doesn't create a log file as per the guide above in;
> >> /var/log/lxd/test/migration_{dump|restore}_.log
> >> 
> >> Steps I've taken;
> >> 
> >> - Copied all profiles from host1 to host2
> >> - Added the migratable profile to the container
> >> - Removed lxcfs package (on both hosts)
> >> - Added the remote HTTPS hosts for both the local and remote hosts
> >> 
> >> Both hosts are running Ubuntu 14.04.3 LTS (x64) with LXD version 0.21.
> >> 
> >> The only difference I can tell between my hosts and the guide is that the 
> >> 'migratable'
> >> profile (which came out-the-box with my LXD installation) doesn't contain 
> >> the autostart
> >> entries as in the guide above;
> >> 
> >> # lxc profile show migratable
> >> name: migratable
> >> config:
> >>   raw.lxc: |-
> >> lxc.console = none
> >> lxc.cgroup.devices.deny = c 5:1 rwm
> >> lxc.seccomp =
> >>   security.privileged: "true"
> >> devices: {}
> >> 
> >> 
> >> Any help would be much appreciated!
> >
> >Have you installed CRIU? lxc info --show-log test probably has more
> >info about what failed, but my guess is that it can't find CRIU if you
> >haven't installed it.
> >
> >Tycho
> >
> >> Thank you,
> >> 
> >> Jamie
> >> 
> >> ___
> >> lxc-users mailing list
> >> lxc-users@lists.linuxcontainers.org
> >> http://lists.linuxcontainers.org/listinfo/lxc-users
> >___
> >lxc-users mailing list
> >lxc-users@lists.linuxcontainers.org
> >http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-04 Thread Tycho Andersen
On Wed, Nov 04, 2015 at 01:48:44PM +, Jamie Brown wrote:
> Greetings all.
> 
> I’ve been using LXD in a development environment for a few weeks and so far 
> very impressed, 
> I can see a really bright future for this technology!
> 
> However, today I thought I’d try out the live migration, based on the 
> following guide;
> https://insights.ubuntu.com/2015/05/06/live-migration-in-lxd/
> 
> I believe I have followed the steps correctly, however when I run the move 
> command, I 
> receive the following output;
> 
> # lxc move host1:test host2:test
> error: Error transferring container data: checkpoint failed:
> Problem accessing CRIU log: open /tmp/lxd_migration_899480871/dump.log: no 
> such file or directory
> 
> The file it is referring to above doesn't exist. However, there are other 
> lxd_migration_* 
> directories with different numbers appended. Each time I attempt the 
> migration a new directory 
> is created (e.g. lxd_migration_192965652), but there is no dump.log in there.
> 
> The migration doesn't create a log file as per the guide above in;
> /var/log/lxd/test/migration_{dump|restore}_.log
> 
> Steps I've taken;
> 
> - Copied all profiles from host1 to host2
> - Added the migratable profile to the container
> - Removed lxcfs package (on both hosts)
> - Added the remote HTTPS hosts for both the local and remote hosts
> 
> Both hosts are running Ubuntu 14.04.3 LTS (x64) with LXD version 0.21.
> 
> The only difference I can tell between my hosts and the guide is that the 
> 'migratable'
> profile (which came out-the-box with my LXD installation) doesn't contain the 
> autostart
> entries as in the guide above;
> 
> # lxc profile show migratable
> name: migratable
> config:
>   raw.lxc: |-
> lxc.console = none
> lxc.cgroup.devices.deny = c 5:1 rwm
> lxc.seccomp =
>   security.privileged: "true"
> devices: {}
> 
> 
> Any help would be much appreciated!

Have you installed CRIU? lxc info --show-log test probably has more
info about what failed, but my guess is that it can't find CRIU if you
haven't installed it.

Tycho

> Thank you,
> 
> Jamie
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Help with lxc-checkpoint and SECCOMP_MODE_FILTER

2015-10-28 Thread Tycho Andersen
Hi Marcelo,

On Mon, Oct 26, 2015 at 06:34:38PM -0200, Gter Marcelo wrote:
> Hi,
> 
> i am running testing with new version the LXC 1.1.4 in enviroment ubuntu
> 15.10 .
> 
> But i received some errors when i try to use lxc-checkpoint.
> 
> I ran the command :
> lxc-checkpoint -v -D /tmp -n test-container -v
> 
> I received de Log error :
> 
> (00.000620) cg: Set 1 is criu one
> (00.000739) Error (proc_parse.c:827): SECCOMP_MODE_FILTER not currently
> supported

Kernel support for dumping seccomp mode filter was just merged into
net-next yesterday, so I'll send my CRIU patches out shortly to that
list. For now, you have to disable seccomp filtering:

lxc.seccomp =

(with nothing after the equals.) This will be sorted by 16.04,
though.

Tycho

> (00.000743) Error (proc_parse.c:840): Error parsing proc status file
> (00.000772) Unfreezing tasks into 1
> (00.000786) Unseizing 522 into 1
> (00.000793) Error (ptrace.c:43): Unable to detach from 522: No such process
> (00.000797) Unlock network
> (00.000801) Unfreezing tasks into 1
> (00.000803) Unseizing 522 into 1
> (00.000806) Error (ptrace.c:43): Unable to detach from 522: No such process
> (00.000832) Error (cr-dump.c:1617): Dumping FAILED.
> 
> Can someone help me ?
> 
> Thanks a Lot
> Marcelo

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Something changed between 1.1.2 and 1.1.4 for unprivileged containers?

2015-10-15 Thread Tycho Andersen
Hi Dirk,

On Thu, Oct 15, 2015 at 09:22:25PM +0200, Dirk Geschke wrote:
> Hi all,
> 
> I have unprivileged containers running with lxc-1.1.2. They are 
> started by a normal, non-root user and it works. But today I 
> tried to start them with lxc-1.1.4 and it fails:
> 
>WARN: could not reopen tty: Permission denied
>newuidmap: write to uid_map failed: Operation not permitted
>error mapping child
>setgid: Invalid argument
>lxc-start: conf.c: ttys_shift_ids: 3490 Failed to chown /dev/pts/3
>lxc-start: start.c: lxc_init: 450 Failed to shift tty into container
>lxc-start: start.c: __lxc_start: 1131 failed to initialize the
>container
>lxc-start: lxc_start.c: main: 344 The container failed to start.
>lxc-start: lxc_start.c: main: 348 Additional information can be
>obtained by setting the --logfile and --logpriority options.
> 
> That's strange, if I go back to lxc-1.1.2 it works again. So
> something has changed. Does anyone know, what changed or what
> I have to change in order to get it running with 1.1.4, too?

How are you starting these (hand-built lxd?). lxc 1.1.2 => 1.1.3
reverted an ABI break which could cause some of these problems,
perhaps you're hitting that somehow?

Tycho

> Best regards
> 
> Dirk
> -- 
> +--+
> | Dr. Dirk Geschke   / Plankensteinweg 61/ 85435 Erding|
> | Telefon: 08122-559448  / Mobil: 0176-96906350 / Fax: 08122-9818106   |
> | d...@geschke-online.de / d...@lug-erding.de  / kont...@lug-erding.de |
> +--+
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD

2015-08-31 Thread Tycho Andersen
On Mon, Aug 31, 2015 at 11:02:21AM -0500, Luis M. Ibarra wrote:
> Have you checked juju? I don't think there's a driver for LXD only LXC but
> they have plans to add it.

Yep, I've done the initial implementation here,

https://github.com/juju/juju/pull/2913

> It's web based, but it can scale up.

It's not entirely web based, but it does have a web-based component
(juju-gui). You can see a demo of that here:
https://demo.jujucharms.com/

Tycho

> 2015-08-31 10:50 GMT-05:00 Stéphane Graber :
> 
> > On Mon, Aug 31, 2015 at 11:39:42AM -0400, Federico Alves wrote:
> > > Is there GUI for LXD that can give you a single pane of glass with all
> > your
> > > datacenter, considering you may have hundreds of servers and thousands of
> > > containers?
> > > Sort of Vmware Vcenter app, not web interface,but a real windows app?
> > > I would pay any money for that.
> >
> > Currently our large scale management story is to use LXD with OpenStack
> > through the nova-compute-lxd driver.
> >
> > With it, you can then use any openstack management tool, including the
> > web interface to manage all your running containers.
> >
> > There's however nothing which would prevent anyone from writting a GUI
> > management client for LXD, either based on our existing client libraries
> > (Go and python) or directly talking to the REST API.
> >
> > (Note that nova-compute-lxd is still very much a work in progress)
> >
> > --
> > Stéphane Graber
> > Ubuntu developer
> > http://www.ubuntu.com
> >
> > ___
> > lxc-users mailing list
> > lxc-users@lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> 
> 
> 
> -- 
> Luis M. Ibarra

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] error Error calling 'lxd forkstart

2015-08-21 Thread Tycho Andersen
On Fri, Aug 21, 2015 at 10:49:50AM -0300, marcelo fortino wrote:
 Hi Tycho,
 so 'sudo service start cgmanager' its gives me:
 start: unrecognized service

Whoops, sorry, that's `sudo service cgmanager start`, I always get the
arguments in the wrong order.

 Looking at google with this new info I found that there is a package conflict:
 sudo aptitude install cgmanager cgmanager-utils
 The following NEW packages will be installed:
   cgmanager-utils
 The following packages have unmet dependencies:
  cgmanager : Conflicts: cgmanager-utils (< 0.30-1) but 0.24-0ubuntu7.3
 is to be installed.
 
 I also did: dpkg -l|awk '$1 ~ /^ii$/ && /cgmanager/ {print $2 " " $3 " " $4}'
 cgmanager 0.37-1~ubuntu14.04.1~ppa1 amd64
 libcgmanager0:amd64 0.37-1~ubuntu14.04.1~ppa1 amd64
 libcgmanager0:i386 0.37-1~ubuntu14.04.1~ppa1 i386
 
 Any tips to fix this conflict?

Does a,

sudo apt-get update
sudo apt-get install -f

Fix it?

Tycho

 Thanks.
 Marcelo
 
 
 On Thu, Aug 20, 2015 at 02:25:20PM -0300, marcelo fortino wrote:
  Hi Tycho,
  Sorry for the delay, this is the output of `lxc info --show-log 
  documentation`
  Name: documentation
  Status: STOPPED
 
 Looks like cgmanager isn't running or has crashed. If you do a
 `sudo service start cgmanager` and then try and run it?
 
 Tycho
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-users Digest, Vol 88, Issue 4

2015-08-20 Thread Tycho Andersen
 lxc_seccomp -
 seccomp.c:parse_config_v2:410 - Adding native rule for
 open_by_handle_at action 327681
 lxc 1439920109.702 INFO lxc_seccomp -
 seccomp.c:parse_config_v2:413 - Adding compat rule for
 open_by_handle_at action 327681
 lxc 1439920109.702 INFO lxc_seccomp -
 seccomp.c:parse_config_v2:318 - processing: .init_module errno 1.
 lxc 1439920109.702 INFO lxc_seccomp -
 seccomp.c:parse_config_v2:410 - Adding native rule for init_module
 action 327681
 lxc 1439920109.702 INFO lxc_seccomp -
 seccomp.c:parse_config_v2:413 - Adding compat rule for init_module
 action 327681
 lxc 1439928842.344 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type u nsid 0 hostid
 10 range 65536
 lxc 1439928842.374 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type g nsid 0 hostid
 10 range 65536
 lxc 1439931895.964 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type u nsid 0 hostid
 10 range 65536
 lxc 1439931895.964 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type g nsid 0 hostid
 10 range 65536
 lxc 1439931895.966 DEBUGlxc_cgmanager -
 cgmanager.c:cgm_dbus_connect:152 - Failed opening dbus connection:
 org.freedesktop.DBus.Error.NoServer: Failed to connect to socket
 /sys/fs/cgroup/cgmanager/sock: Connection refused
 lxc 1439931895.966 ERRORlxc_cgmanager -
 cgmanager.c:do_cgm_get:876 - Error connecting to cgroup manager
 lxc 1439931895.967 WARN lxc_cgmanager -
 cgmanager.c:cgm_get:993 - do_cgm_get exited with error
 lxc 1440090322.129 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type u nsid 0 hostid
 10 range 65536
 lxc 1440090322.149 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type g nsid 0 hostid
 10 range 65536
 lxc 1440091035.586 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type u nsid 0 hostid
 10 range 65536
 lxc 1440091035.586 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type g nsid 0 hostid
 10 range 65536
 lxc 1440091035.587 WARN lxc_cgmanager -
 cgmanager.c:cgm_get:993 - do_cgm_get exited with error
 lxc 1440091035.589 WARN lxc_cgmanager -
 cgmanager.c:cgm_get:993 - do_cgm_get exited with error
 lxc 1440091054.353 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type u nsid 0 hostid
 10 range 65536
 lxc 1440091054.353 INFO lxc_confile -
 confile.c:config_idmap:1433 - read uid map: type g nsid 0 hostid
 10 range 65536
 lxc 1440091054.355 WARN lxc_cgmanager -
 cgmanager.c:cgm_get:993 - do_cgm_get exited with error
 lxc 1440091054.357 WARN lxc_cgmanager -
 cgmanager.c:cgm_get:993 - do_cgm_get exited with error
 
 
 Any help appreciated
 
 Thanks
 Marcelo
 
 -- Forwarded message --
 From: Tycho Andersen tycho.ander...@canonical.com
 To: LXC users mailing-list lxc-users@lists.linuxcontainers.org
 Cc:
 Date: Tue, 18 Aug 2015 15:59:18 -0700
 Subject: Re: [lxc-users] error Error calling 'lxd forkstart
 documentation /var/lib/lxd/containers
 Hi Marcelo,
 
 On Tue, Aug 18, 2015 at 03:30:04PM -0300, marcelo fortino wrote:
  This morning I did an apt-get update and lxd packages were upgraded.
  since then I can't start any of the container, I had this error:
 
  Error calling 'lxd forkstart documentation /var/lib/lxd/containers.
 
  The lxd.log show this:
  t=2015-08-12T14:10:59-0300 lvl=info msg=LXD is starting.
  t=2015-08-12T14:10:59-0300 lvl=info msg=Default uid/gid map:
  t=2015-08-12T14:10:59-0300 lvl=info msg= - u 0 10 65536
  t=2015-08-12T14:10:59-0300 lvl=info msg= - g 0 10 65536
  t=2015-08-12T14:11:00-0300 lvl=info msg=Init driver=storage/dir
  t=2015-08-12T14:11:02-0300 lvl=info msg=looking for existing
  certificates: cert=/var/lib/lxd/server.crt
  key=/var/lib/lxd/server.key
  t=2015-08-12T14:11:03-0300 lvl=info msg=Init driver=storage/dir
  t=2015-08-12T14:11:03-0300 lvl=info msg=LXD isn't socket activated.
  t=2015-08-12T14:11:03-0300 lvl=info msg= - binding socket
  socket=/var/lib/lxd/unix.socket
  t=2015-08-12T18:15:27-0300 lvl=info msg=Received 'power failure
  signal', shutting down containers.
 
 
  Lxd version 0.15 on Ubuntu 14.04. Any help to fix this?
 
 What's the output of `lxc info --show-log documentation`?
 
 Tycho
 

Re: [lxc-users] error Error calling 'lxd forkstart documentation /var/lib/lxd/containers

2015-08-18 Thread Tycho Andersen
Hi Marcelo,

On Tue, Aug 18, 2015 at 03:30:04PM -0300, marcelo fortino wrote:
 This morning I did an apt-get update and lxd packages were upgraded.
 since then I can't start any of the container, I had this error:
 
 Error calling 'lxd forkstart documentation /var/lib/lxd/containers.
 
 The lxd.log show this:
 t=2015-08-12T14:10:59-0300 lvl=info msg=LXD is starting.
 t=2015-08-12T14:10:59-0300 lvl=info msg=Default uid/gid map:
 t=2015-08-12T14:10:59-0300 lvl=info msg= - u 0 10 65536
 t=2015-08-12T14:10:59-0300 lvl=info msg= - g 0 10 65536
 t=2015-08-12T14:11:00-0300 lvl=info msg=Init driver=storage/dir
 t=2015-08-12T14:11:02-0300 lvl=info msg=looking for existing
 certificates: cert=/var/lib/lxd/server.crt
 key=/var/lib/lxd/server.key
 t=2015-08-12T14:11:03-0300 lvl=info msg=Init driver=storage/dir
 t=2015-08-12T14:11:03-0300 lvl=info msg=LXD isn't socket activated.
 t=2015-08-12T14:11:03-0300 lvl=info msg= - binding socket
 socket=/var/lib/lxd/unix.socket
 t=2015-08-12T18:15:27-0300 lvl=info msg=Received 'power failure
 signal', shutting down containers.
 
 
 Lxd version 0.15 on Ubuntu 14.04. Any help to fix this?

What's the output of `lxc info --show-log documentation`?

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Mount directory with space in the path

2015-08-18 Thread Tycho Andersen
On Tue, Aug 18, 2015 at 02:18:05PM +0200, Valerio Mariani wrote:
 Dear Andrey,
 
thanks for your answer. So, I did this (psana is the software I am
 working on):
 
 lxc config device add centos6-amd64-psana opt-working disk
 source=/data/Data/Psana Tests/ path=/opt/working
 
 Then if I try:
 
 lxc config device show centos6-amd64-psana
 
 I see:
 
 ...
 opt-working
   source: /data/Data/Psana Tests/
   type: disk
   path: /opt/working
 ...
 
 However, when I start the container:
 
 lxc start centos6-amd64-psana
 error Error calling 'lxd forkstart centos6-amd64-psana
 /var/lib/lxd/containers /var/log/lxd/centos6-amd64-psana/lxc.conf':
 err='exit status 1'
 
 The log says:
 
 lxc 1439899856.718 ERROR lxc_conf -
 conf.c:mount_entry:1720 - No such file or directory - failed to mount
 '/data/Data/Psana' on '/usr/lib/x86_64-linux-gnu/lxc/Tests/'
 lxc 1439899856.718 ERROR lxc_conf - conf.c:lxc_setup:3801
 - failed to setup the mount entries for 'centos6-amd64-psana'
 
 So, when it tries to mount the directory, it cuts the path at the first
 space... then takes the second part of the string as the target path for
 the mount.
 
 Should I report this as a bug on the gitHub page?

Thanks, I think this is a bug in LXD, can you try:

https://github.com/tych0/lxd/commit/08f4e8580c42fc38063b9dfa53dc2e6550a0ed6c

and see if that fixes it?

Thanks,

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-13 Thread Tycho Andersen
Hi Dietmar,

On Thu, Aug 13, 2015 at 07:40:27AM +0200, Dietmar Maurer wrote:
   Just found this info: 
   http://permalink.gmane.org/gmane.linux.kernel/1930352
   
   Now when mounting /sys/kernel/debug, tracefs is automatically mounted
   in /sys/kernel/debug/tracing
   
   Would that explain the behavior?
  
  Yep, I think that's it. Does the patch I sent work for you?
 
 Sorry for the delay, but it seems we are in different time zones.

No problem.

 I just tested your patch, and it seems to solve the tracefs 
 problem - thanks.
 
 Now I no get another error:
 
 (00.005871) irmap: Refresh stat for /etc/group-
 (00.005873) irmap: Refresh stat for /etc/init
 (00.005875) irmap:Scanned /etc/init
 (00.005876) fsnotify: Dumping /etc/init as path for handle
 (00.005879) fsnotify: id 0x0005 flags 0x00080800
 (00.005882) fdinfo: type: 0x 8 flags: 02004000/01 pos: 0x   0 fd: 6
 (00.005891) 13939 fdinfo 7: pos: 0x   0 flags:  
 2004002/0x1
 (00.005896)   Searching for socket b17c7f (family 1)
 (00.005900) No filter for socket
 (00.005901) sk unix: Dumping unix socket at 7
 (00.005902) sk unix:  Dumping: ino 0xb17c7f peer_ino 0 family1 type1
 state 10 name 
 (00.005913) sk unix:  Dumped: id 0x6 ino 0xb17c7f peer 0 type 1 state 10 name 
 20
 bytes
 (00.005916) fdinfo: type: 0x 5 flags: 02004002/01 pos: 0x   0 fd: 7
 (00.005924) 13939 fdinfo 10: pos: 0x   0 flags:   12/0
 (00.005927) Error (files-ext.c:91): Can't dump file 10 of that type [60660]
 (unknown (null))

Looks like this is some sort of fd that has a block device open that
criu doesn't support. If you can figure out which block device it is
(or tell me how to reproduce), perhaps we can teach criu easily to
checkpoint it.
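
One way to see what that fd is (just a sketch; 13939 is the pid shown in the
log above) would be something like:

  # look up fd 10 of the task criu was dumping
  ls -l /proc/13939/fd/10
  cat /proc/13939/fdinfo/10

the device numbers in the ls output should tell us which block device it is.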

Also, thanks for your other mail for a reproducer. I don't have time
the next to days to look at it, but I'll play around with it next
week.

Tycho

 (00.005933) 
 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-12 Thread Tycho Andersen
On Wed, Aug 12, 2015 at 08:54:22AM -0600, Tycho Andersen wrote:
 Not quite, can you show /proc/<container init pid>/mountinfo? I think
 this should be autodetected by criu the same way the other mounts are,
 but apparently it is not.

Seeing the host's /proc/self/mountinfo would be useful too, thanks.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-12 Thread Tycho Andersen
Hi Dietmar,

On Wed, Aug 12, 2015 at 08:22:14AM +0200, Dietmar Maurer wrote:
   What container images from https://images.linuxcontainers.org/ are 
   known to work with CRIU?
  
  I've tested with all the Ubuntu releases (not including wily, which
  isn't tagged yet), but not anything else. There are a few known bugs
  related to lxcfs that I haven't had time to track down, but if you
  e.g. use trusty without lxcfs I'm not aware of any bugs at the moment.
  I'm sure they exist, though, so feedback is definitely appreciated!
 
 I still use lxcfs for my tests, and I get the same errors 
 with ubuntu 15.04 (vivid):
 
 ...
 (00.124385) fsnotify: wd: wd 0x0002 s_dev 0x0070 i_ino 0x
1823 mask 0x0800afce
 (00.124387) fsnotify: [fhandle] bytes 0x0008 type 0x0001 
 __handle
 0x186df3261823:0x
 (00.124388) fsnotify: Opening fhandle 70:186df3261823...
 (00.124392) Path `/' resolved to `./' mountpoint
 (00.124397) fsnotify: Handle 0x70:0x1823 is openable
 (00.124399) Warn  (fsnotify.c:188): fsnotify: Handle 0x70:0x1823 
 cannot be
 opened
 (00.124400) irmap: Resolving 700:1823 path
 (00.124407) irmap: Scanning /etc hint
 (00.124423) irmap: Scanning /var/spool hint
 (00.124426) irmap: Scanning /lib/udev hint
 (00.124427) irmap: Scanning /. hint
 (00.124428) irmap: Scanning /no-such-path hint
 (00.124429) irmap: Refresh stat for /no-such-path
 (00.124436) Error (irmap.c:81): irmap: Can't stat /no-such-path: No such file 
 or
 directory
 (00.124438) Error (fsnotify.c:191): fsnotify: Can't dump that handle
 (00.124445) 
 (00.124446) Error (cr-dump.c:1255): Dump files (pid: 3220) failed with -1

Hmm, I've not seen this on vivid. Can you describe how to reproduce it
and I'll take a look?

 with ubuntu 14.04 I get this:
 
 ...
 (00.003608) autodetected external mount /sys/fs/pstore/ for ./sys/fs/pstore
 (00.003610) autodetected external mount /sys/kernel/security/ for
 ./sys/kernel/security
 (00.003611) autodetected external mount /sys/kernel/debug/ for
 ./sys/kernel/debug
 (00.003613) autodetected external mount /sys/fs/fuse/connections/ for
 ./sys/fs/fuse/connections
 (00.003620) Error (mount.c:680): FS mnt ./sys/kernel/debug/tracing dev 0x9 
 root
 / unsupported id 130
 ...
 (00.003719) Error (cr-dump.c:1618): Dumping FAILED.
 
 (using latest ubuntu 4.1 kernel)
 
 I guess this is related to lxcfs?

Not quite, can you show /proc/<container init pid>/mountinfo? I think
this should be autodetected by criu the same way the other mounts are,
but apparently it is not.
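
For example (a sketch, filling in the init pid the same way lxc-info reports
it; substitute your container name for 500):

  cat /proc/$(lxc-info -n 500 -H -p)/mountinfo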

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-12 Thread Tycho Andersen
On Wed, Aug 12, 2015 at 07:42:10PM +0200, Dietmar Maurer wrote:
  On Wed, Aug 12, 2015 at 06:41:55PM +0200, Dietmar Maurer wrote:
..and it's not bind mounted from the host, which is why it's not being
autodetected as a bind mount. When I start both trusty and wily
containers I don't see anything mounting tracefs, do you know what is
mounting it?
   
   No sorry, I have no idea currently. But I will try to find out ...
 
 Just found this info: http://permalink.gmane.org/gmane.linux.kernel/1930352
 
 Now when mounting /sys/kernel/debug, tracefs is automatically mounted
 in /sys/kernel/debug/tracing
 
 Would that explain the behavior?

Yep, I think that's it. Does the patch I sent work for you?

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-12 Thread Tycho Andersen
On Wed, Aug 12, 2015 at 06:41:55PM +0200, Dietmar Maurer wrote:
  ..and it's not bind mounted from the host, which is why it's not being
  autodetected as a bind mount. When I start both trusty and wily
  containers I don't see anything mounting tracefs, do you know what is
  mounting it?
 
 No sorry, I have no idea currently. But I will try to find out ...

Thanks. Are you on a wily host or a vivid host? Just reading up on
tracefs now, I think it shouldn't be too hard to patch criu to handle
it.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-12 Thread Tycho Andersen
Hi Dietmar,

On Wed, Aug 12, 2015 at 05:27:03PM +0200, Dietmar Maurer wrote:
 Here is the requested info:
 
 # cat /proc/22373/mountinfo
 ...
 130 225 0:9 / /sys/kernel/debug/tracing rw,relatime - tracefs tracefs rw

Looks like the container has tracefs, which CRIU doesn't understand
natively,

 # cat /proc/self/mountinfo
 19 24 0:18 / /sys rw,nosuid,nodev,noexec,relatime shared:7 - sysfs sysfs rw
 20 24 0:4 / /proc rw,nosuid,nodev,noexec,relatime shared:12 - proc proc rw
 21 24 0:6 / /dev rw,relatime shared:2 - devtmpfs udev
 rw,size=10240k,nr_inodes=4109801,mode=755
 22 21 0:14 / /dev/pts rw,nosuid,noexec,relatime shared:3 - devpts devpts
 rw,gid=5,mode=620,ptmxmode=000
 23 24 0:19 / /run rw,nosuid,relatime shared:5 - tmpfs tmpfs
 rw,size=6581364k,mode=755
 24 0 8:33 / / rw,relatime shared:1 - ext4 /dev/sdc1
 rw,errors=remount-ro,data=ordered
 25 19 0:12 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:8 -
 securityfs securityfs rw
 26 21 0:20 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw
 27 23 0:21 / /run/lock rw,nosuid,nodev,noexec,relatime shared:6 - tmpfs tmpfs
 rw,size=5120k
 28 19 0:22 / /sys/fs/cgroup rw shared:9 - tmpfs tmpfs rw,mode=755
 29 28 0:23 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:10 
 -
 cgroup cgroup
 rw,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
 30 19 0:24 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:11 - pstore
 pstore rw
 31 28 0:25 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:13 -
 cgroup cgroup rw,cpuset,clone_children
 32 28 0:26 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime
 shared:14 - cgroup cgroup rw,cpu,cpuacct
 33 28 0:27 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:15 -
 cgroup cgroup rw,blkio
 34 28 0:28 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:16 -
 cgroup cgroup rw,memory
 35 28 0:29 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:17 
 -
 cgroup cgroup rw,devices
 36 28 0:30 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:18 
 -
 cgroup cgroup rw,freezer
 37 28 0:31 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime
 shared:19 - cgroup cgroup rw,net_cls,net_prio
 38 28 0:32 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime 
 shared:20
 - cgroup cgroup
 rw,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event
 39 28 0:33 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:21 
 -
 cgroup cgroup
 rw,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb
 40 20 0:34 / /proc/sys/fs/binfmt_misc rw,relatime shared:22 - autofs systemd-1
 rw,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
 41 21 0:17 / /dev/mqueue rw,relatime shared:23 - mqueue mqueue rw
 43 19 0:7 / /sys/kernel/debug rw,relatime shared:24 - debugfs debugfs rw
 42 21 0:35 / /dev/hugepages rw,relatime shared:25 - hugetlbfs hugetlbfs rw
 44 19 0:36 / /sys/fs/fuse/connections rw,relatime shared:26 - fusectl fusectl 
 rw
 103 23 0:45 / /run/rpc_pipefs rw,relatime shared:83 - rpc_pipefs rpc_pipefs rw
 107 23 0:47 / /run/cgmanager/fs rw,relatime shared:87 - tmpfs cgmfs
 rw,size=100k,mode=755
 111 24 0:49 / /var/lib/lxcfs rw,nosuid,nodev,relatime shared:89 - fuse.lxcfs
 lxcfs rw,user_id=0,group_id=0,allow_other
 113 23 0:50 / /run/user/0 rw,nosuid,nodev,relatime shared:91 - tmpfs tmpfs
 rw,size=3290684k,mode=700

..and it's not bind mounted from the host, which is why it's not being
autodetected as a bind mount. When I start both trusty and wily
containers I don't see anything mounting tracefs, do you know what is
mounting it?

Unmounting it should allow you to c/r. We could add c/r to CRIU for
tracefs, although I've never used it, so I'll have to read about it a
bit.
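
As a sketch of that workaround (assuming the same mount point as in your
mountinfo), something like this inside the container before dumping:

  # drop the tracefs mount that criu can't handle yet
  umount /sys/kernel/debug/tracing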

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-12 Thread Tycho Andersen
On Wed, Aug 12, 2015 at 11:06:02AM -0600, Tycho Andersen wrote:
 On Wed, Aug 12, 2015 at 06:41:55PM +0200, Dietmar Maurer wrote:
   ..and it's not bind mounted from the host, which is why it's not being
   autodetected as a bind mount. When I start both trusty and wily
   containers I don't see anything mounting tracefs, do you know what is
   mounting it?
  
  No sorry, I have no idea currently. But I will try to find out ...
 
 Thanks. Are you on a wily host or a vivid host? Just reading up on
 tracefs now, I think it shouldn't be too hard to patch criu to handle
 it.

Ok, based on a quick read I think all we need to restore are the fs
opts. The attach patch should do that; can you try it? I'd still like
to know what's mounting tracefs, but I suspect it is some systemd
thing :)

Tycho
From 6b2a672801950d972474a86fc29d05f9e6ad2fd6 Mon Sep 17 00:00:00 2001
From: Tycho Andersen tycho.ander...@canonical.com
Date: Wed, 12 Aug 2015 11:17:12 -0600
Subject: [PATCH] c/r: enable tracefs

tracefs is a new filesystem that can be mounted by users. Only the options
and fs name need to be passed to restore the state, so we can use criu's
auto fs feature.

Signed-off-by: Tycho Andersen tycho.ander...@canonical.com
---
 src/lxc/criu.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/lxc/criu.c b/src/lxc/criu.c
index e939b37..bd6ecac 100644
--- a/src/lxc/criu.c
+++ b/src/lxc/criu.c
@@ -49,7 +49,7 @@ lxc_log_define(lxc_criu, lxc);
 void exec_criu(struct criu_opts *opts)
 {
 	char **argv, log[PATH_MAX];
-	int static_args = 20, argc = 0, i, ret;
+	int static_args = 22, argc = 0, i, ret;
 	int netnr = 0;
 	struct lxc_list *it;
 
@@ -60,7 +60,7 @@ void exec_criu(struct criu_opts *opts)
 	 * --manage-cgroups action-script foo.sh -D $(directory) \
 	 * -o $(directory)/$(action).log --ext-mount-map auto
 	 * --enable-external-sharing --enable-external-masters
-	 * --enable-fs hugetlbfs
+	 * --enable-fs hugetlbfs --enable-fs tracefs
 	 * +1 for final NULL */
 
 	if (strcmp(opts-action, dump) == 0) {
@@ -122,6 +122,8 @@ void exec_criu(struct criu_opts *opts)
 	DECLARE_ARG(--enable-external-masters);
 	DECLARE_ARG(--enable-fs);
 	DECLARE_ARG(hugetlbfs);
+	DECLARE_ARG(--enable-fs);
+	DECLARE_ARG(tracefs);
 	DECLARE_ARG(-D);
 	DECLARE_ARG(opts-directory);
 	DECLARE_ARG(-o);
-- 
2.1.4

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-11 Thread Tycho Andersen
On Tue, Aug 11, 2015 at 09:04:46AM +0200, Dietmar Maurer wrote:
 I no get another error:
 
 (00.000399) Error (proc_parse.c:826): SECCOMP_MODE_FILTER not currently
 supported
 (00.000401) Error (proc_parse.c:839): Error parsing proc status file
 
 So I have to set:
 
 lxc.seccomp =

Yep, I'm working on adding SECCOMP_MODE_FILTER support right now.

 which is bad, because 'umount -f' can be used to terminate lxcfs...

Yes, and since these have to be privileged containers anyway I don't
think they're very secure. But hopefully we'll fix that.

 After that, I get:
 
 ...
 00.013118) timerfd: Dumping id 0x13 clockid 1 it_value(86392, 143546305)
 it_interval(0, 0)
 (00.013122) fdinfo: type: 0x11 flags: 02004002/01 pos: 0x   0 fd: 24
 (00.013131) 24683 fdinfo 25: pos: 0x   0 flags:  
 2004000/0x1
 (00.013143) fsnotify: wd: wd 0x0003 s_dev 0x0070 i_ino 0x
 517 mask 0x0800ad84
 (00.013145) fsnotify:   [fhandle] bytes 0x0008 type 0x0001 __handle
 0x004f3d2c0517:0x0
 000
 (00.013147) fsnotify: Opening fhandle 70:4f3d2c0517...
 (00.013150) Path `/' resolved to `./' mountpoint
 (00.013153) fsnotify:   Handle 0x70:0x517 is openable
 (00.013154) Warn  (fsnotify.c:188): fsnotify:   Handle 0x70:0x517 cannot 
 be
 opened
 (00.013156) irmap: Resolving 700:517 path
 (00.013157) irmap: Scanning /etc hint
 (00.013162) irmap: Scanning /var/spool hint
 (00.013163) irmap: Scanning /lib/udev hint
 (00.013165) irmap: Scanning /. hint
 (00.013166) irmap: Scanning /no-such-path hint
 (00.013167) irmap: Refresh stat for /no-such-path
 (00.013174) Error (irmap.c:81): irmap: Can't stat /no-such-path: No such file 
 or
 directory
 (00.013176) Error (fsnotify.c:191): fsnotify:   Can't dump that handle
 (00.013183) 
 (00.013185) Error (cr-dump.c:1255): Dump files (pid: 24683) failed with -1
 
 This is a simple centos7 container - nothing running (only systemd).
 Any idea what's wrong?

Not sure, this is likely something that systemd is doing to confuse
CRIU. The CRIU list may have a better idea, I've not seen it before.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-11 Thread Tycho Andersen
On Tue, Aug 11, 2015 at 05:30:12PM +0200, Dietmar Maurer wrote:
   This is a simple centos7 container - nothing running (only systemd).
   Any idea what's wrong?
  
  Not sure, this is likely something that systemd is doing to confuse
  CRIU. The CRIU list may have a better idea, I've not seen it before.
 
 What container images from https://images.linuxcontainers.org/ are 
 known to work with CRIU?

I've tested with all the Ubuntu releases (not including wily, which
isn't tagged yet), but not anything else. There are a few known bugs
related to lxcfs that I haven't had time to track down, but if you
e.g. use trusty without lxcfs I'm not aware of any bugs at the moment.
I'm sure they exist, though, so feedback is definitely appreciated!

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] CRIU with lxc.network.type empty fails

2015-08-10 Thread Tycho Andersen
Hi Dietmar,

On Sun, Aug 09, 2015 at 12:35:03PM +0200, Dietmar Maurer wrote:
 Hi all,
 
 I am testing criu with latest lxc/criu code from git, and the
 ubuntu 4.1 kernel. I use the following lxc config:
 
 # cat /var/lib/lxc/500/config
 lxc.arch = amd64
 lxc.include = /usr/share/lxc/config/centos.common.conf
 lxc.tty = 0
 lxc.utsname = CT500
 lxc.cgroup.memory.limit_in_bytes = 536870912
 lxc.cgroup.memory.memsw.limit_in_bytes = 1073741824
 lxc.cgroup.cpu.shares = 1024
 lxc.rootfs = loop:/var/lib/vz/images/500/vm-500-rootfs.raw
 lxc.console = none
 lxc.cgroup.devices.deny = c 5:1 rwm
 lxc.network.type = empty
 
 but criu fails:
 
 # strace lxc-checkpoint -s -v -D /tmp/checkpoint -n 500
 
 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
 child_tidptr=0x7fcf28886a50) = 1591
 close(4)= 0
 wait4(1591, [{WIFEXITED(s)  WEXITSTATUS(s) == 0}], 0, NULL) = 1591
 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=1591, si_uid=0,
 si_status=0, si_utime=0, si_stime=0} ---
 fcntl(3, F_GETFL)   = 0 (flags O_RDONLY)
 fstat(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
 0x7fcf288a3000
 lseek(3, 0, SEEK_CUR)   = -1 ESPIPE (Illegal seek)
 read(3, Version: 1.6\nGitID: v1.6-175-g4a..., 4096) = 38
 close(3)= 0
 munmap(0x7fcf288a3000, 4096)= 0
 geteuid()   = 0
 write(2, lxc-checkpoint: , 16lxc-checkpoint: )= 16
 write(2, criu.c: criu_ok: 308 , 21criu.c: criu_ok: 308 )   = 21
 write(2, Found network that is not VETH o..., 39Found network that is not 
 VETH
 or NONE
 ) = 39
 write(2, \n, 1
 )   = 1
 write(2, Checkpointing 500 failed.\n, 26Checkpointing 500 failed.
 ) = 26
 exit_group(1)   = ?
 
 
 Is that expected?

No, it's just an oversight. Unfortunately it's not as simple as adding
empty networks to the whitelist for LXC, but I'll work on a patch and
try and send it today.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc move - error: checkpoint failed

2015-08-06 Thread Tycho Andersen
Hi Tomasz,

On Wed, Aug 05, 2015 at 06:46:02PM +0900, Tomasz Chmielewski wrote:
 Trying to move a running container between hosts, unfortunately, it fails:
 
 # lxc move dp02:nominatim dp03:nominatim
 error: checkpoint failed
 
 Both hosts are running Ubuntu 14.04, package versions are:
 
 ii  criu1.6-2~ubuntu14.04.1~ppa1
 amd64checkpoint and restore in userspace
 ii  liblxc1 1.1.2-0ubuntu5~ubuntu14.04.1~ppa1
 amd64Linux Containers userspace tools (library)
 ii  lxc 1.1.2-0ubuntu5~ubuntu14.04.1~ppa1
 amd64Linux Containers userspace tools
 ii  lxc-templates   1.1.2-0ubuntu5~ubuntu14.04.1~ppa1
 amd64Linux Containers userspace tools (templates)
 ii  lxcfs   0.9-0ubuntu1~ubuntu14.04.1~ppa1
 amd64FUSE based filesystem for LXC
 ii  lxd 0.14-0ubuntu3~ubuntu14.04.1~ppa1
 amd64Container hypervisor based on LXC - daemon
 ii  lxd-client  0.14-0ubuntu3~ubuntu14.04.1~ppa1
 amd64Container hypervisor based on LXC - client
 ii  python3-lxc 1.1.2-0ubuntu5~ubuntu14.04.1~ppa1
 amd64Linux Containers userspace tools (Python 3.x bindings)
 
 
 Kernel is 3.13.0-55-generic #94-Ubuntu.
 
 
 Log entries created on dp02:
 
   lxc_container 1438767790.340 WARN lxc_log - log.c:lxc_log_init:316 -
 lxc_log_init called with log already initialized
   lxc_container 1438767790.340 WARN lxc_confile -
 confile.c:config_pivotdir:1768 - lxc.pivotdir is ignored.  It will soon
 become an error.
   lxc_container 1438767790.342 INFO lxc_confile -
 confile.c:config_idmap:1376 - read uid map: type u nsid 0 hostid 10
 range 65536
   lxc_container 1438767790.342 INFO lxc_confile -
 confile.c:config_idmap:1376 - read uid map: type g nsid 0 hostid 10
 range 65536
   lxc_container 1438767790.343 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.344 DEBUGlxc_commands -
 commands.c:lxc_cmd_get_state:574 - 'nominatim' is in 'RUNNING' state
   lxc_container 1438767790.344 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.344 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.344 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.345 DEBUGlxc_commands -
 commands.c:lxc_cmd_get_state:574 - 'nominatim' is in 'RUNNING' state
   lxc_container 1438767790.345 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.346 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.347 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.348 DEBUGlxc_commands -
 commands.c:lxc_cmd_get_state:574 - 'nominatim' is in 'RUNNING' state
   lxc_container 1438767790.348 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.348 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.349 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.349 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.350 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.350 DEBUGlxc_commands -
 commands.c:lxc_cmd_get_state:574 - 'nominatim' is in 'RUNNING' state
   lxc_container 1438767790.350 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767790.351 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767791.333 WARN lxc_log - log.c:lxc_log_init:316 -
 lxc_log_init called with log already initialized
   lxc_container 1438767791.333 WARN lxc_confile -
 confile.c:config_pivotdir:1768 - lxc.pivotdir is ignored.  It will soon
 become an error.
   lxc_container 1438767791.334 INFO lxc_confile -
 confile.c:config_idmap:1376 - read uid map: type u nsid 0 hostid 10
 range 65536
   lxc_container 1438767791.334 INFO lxc_confile -
 confile.c:config_idmap:1376 - read uid map: type g nsid 0 hostid 10
 range 65536
   lxc_container 1438767791.335 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767791.336 DEBUGlxc_commands -
 commands.c:lxc_cmd_handler:888 - peer has disconnected
   lxc_container 1438767791.336 DEBUGlxc_commands -
 commands.c:lxc_cmd_get_state:574 - 'nominatim' is in 'RUNNING' state
   lxc_container 1438767798.810 DEBUGlxc_commands -
 

Re: [lxc-users] Problem Launching new containers

2015-08-05 Thread Tycho Andersen
Hi Matthew,

On Wed, Aug 05, 2015 at 11:40:12AM +0100, Matthew Williams wrote:
 Hey Folks,
 
 I just installed lxd using the instructions here:
 https://linuxcontainers.org/lxd/getting-started-cli/
 
 But I can't launch new containers. I get the following:
 
 http://paste.ubuntu.com/12006031/
 
 What should I do to help debug/ fix this?

It looks like it's failing to save the config file, which is
(attempting) to be created by go's ioutil.TempFile(); do you have a
/tmp that is writable by the user LXD is running as?
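
As a quick sanity check (assuming a stock setup where the daemon runs as
root), /tmp should look like this:

  ls -ld /tmp
  # expect: drwxrwxrwt ... /tmp  (world-writable, sticky bit set)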

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Problem Launching new containers

2015-08-05 Thread Tycho Andersen
On Wed, Aug 05, 2015 at 03:06:04PM +0100, Matthew Williams wrote:
 Hi Tycho,
 
 It's probably doing something like this then
 http://play.golang.org/p/mybQSejDo8.
 
 That works when I run it on the same machine. Would lxd be running this
 command as the lxd user though is that right?

Right.

 The user I was running this command as was in the lxd group though

Yep, but the lxd user may not have permission to write to /tmp
(although if you're on a stock ubuntu system, this seems unlikely).

One thing that might be helpful is an strace of lxd when it's failing
this. Unfortunately it's not very easy to get errors out of this part
of the code, so an strace might be our best bet. The most obvious
thing that could cause problems is the perms issue, but looking
through the code there may be a few others.
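
Something along these lines should capture it (a rough sketch; adjust the
process name if your daemon runs under a different one):

  # attach to the running daemon, follow forks, log syscalls to a file
  sudo strace -f -o /tmp/lxd.strace -p $(pgrep -x lxd)
  # then re-run the failing `lxc launch ...` from another shell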

Thanks,

Tycho

 Matty
 
 On Wed, Aug 5, 2015 at 2:25 PM, Tycho Andersen tycho.ander...@canonical.com
  wrote:
 
  Hi Matthew,
 
  On Wed, Aug 05, 2015 at 11:40:12AM +0100, Matthew Williams wrote:
   Hey Folks,
  
   I just installed lxd using the instructions here:
   https://linuxcontainers.org/lxd/getting-started-cli/
  
   But I can't launch new containers. I get the following:
  
   http://paste.ubuntu.com/12006031/
  
   What should I do to help debug/ fix this?
 
  It looks like it's failing to save the config file, which is
  (attempting) to be created by go's ioutil.TempFile(); do you have a
  /tmp that is writable by the user LXD is running as?
 
  Tycho
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD + OS X

2015-07-31 Thread Tycho Andersen
On Thu, Jul 30, 2015 at 03:27:44PM -0700, Kevin LaTona wrote:
 Thanks Bill, that got me closer but it still is not working.
 
 
 When I do a make I get back
 
 
 lxd kevin$ make
 go get -v -d ./...
 go install -v ./...
 github.com/lxc/lxd/lxd/migration
 # github.com/lxc/lxd/lxd/migration
 lxd/migration/migrate.go:38: undefined: lxc.Container
 make: *** [default] Error 2
 
 
 If anyone is running the client on OS X and has it working…… any insights 
 would be helpful.

Here you're trying to build the daemon; don't do that :). Try,

go install ./lxc

Tycho

 
 Thanks
 -Kevin
 
 
 
 On Jul 30, 2015, at 12:57 PM, Bill Anderson bill.ander...@rackspace.com 
 wrote:
 
  
  On Jul 30, 2015, at 2:23 PM, Kevin LaTona li...@studiosola.com wrote:
  
  
  Looking for any Go people on the list who might be able to help me
  decipher what this error means while trying to install the LXD CLI onto an
  OS X machine.
  
  If I check my current GO path on OS X it's at /usr/local/go
  
  Which is where GO installed it at.
  
  
  I got the current LXD tar ball
  CD to it's top folder and called make
  
  From here it gives a GOPATH not found error… throughout the whole make
  script
  
  lxd-0.14 kevin$ make
  go get -v -d ./...
  package github.com/chai2010/gettext-go/gettext: cannot download, $GOPATH 
  not set.
  
  You need to set your GOPATH environment variable. This is where it will put 
  the repo which ‘go get’ will get. Personally, I use $HOME/.go but it can be 
  wherever you want it to be. See https://github.com/golang/go/wiki/GOPATH 
  and/or http://www.ryanday.net/2012/10/01/installing-go-and-gopath/  for 
  more details.
 

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] tap interface in unprivileged container?

2015-07-21 Thread Tycho Andersen
On Tue, Jul 21, 2015 at 09:43:44PM +0200, Dirk Geschke wrote:
 Hi LXC-Users,
 
  is there an easy way to create/move a tap interface to an unprivileged
  container?
 
 I think, I found a solution:
 
# ip tuntap add mode tap tap0
# ip link set tap0 netns 16077
 
 This creates a tap interface with name tap0, 16077 is the PID of
 the init process in the container. If the container is started
 with this config line
 
   lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
 
 it seems to work. At least, I get a tap0 interface and get no
 further errors, so far...
 
 Is there an easy way to find the PID of the init in the container
 or something else to move the interface to the correct container?

You can get the init pid with $(lxc-info -n $container -H -p).
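
Putting the two together, something like this should do it (an untested
sketch, using the same tap0 name as above):

  # create the tap device on the host, then hand it to the container's netns
  ip tuntap add mode tap tap0
  ip link set tap0 netns $(lxc-info -n $container -H -p)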

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD REST API file upload error

2015-07-15 Thread Tycho Andersen
On Wed, Jul 15, 2015 at 08:53:32AM -0600, Tycho Andersen wrote:
 On Wed, Jul 15, 2015 at 05:36:27PM +0300, Jony Opipo wrote:
  Hi
  
  I am trying to upload file to container using REST API and I get the
  following error:
  
  curl -s -k -X POST -d @test --cert client.crt --key client.key
  https://192.168.10.155:8443/1.0/containers/test/files?path=/tmp/test
  
  015/07/15 17:17:31 handling POST /1.0/containers/test/files?path=/tmp/test
  2015/07/15 17:17:31
  {
  error: strconv.ParseInt: parsing \\: invalid syntax,
  error_code: 400,
  type: error
  }
  
  
  Does anyone know if I have syntax error or is this a bug?
 
 Looks like a bug. You have to set the gid/uid/mode headers as
 specified here:
 
 https://github.com/lxc/lxd/blob/master/specs/rest-api.md#post-2
 
 although the spec says may, the code seems to require them right now.
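
For example, something like this (a sketch; the X-LXD-uid/gid/mode header
names are taken from the spec linked above, so double-check them there):

  curl -s -k -X POST --cert client.crt --key client.key \
    -H "X-LXD-uid: 0" -H "X-LXD-gid: 0" -H "X-LXD-mode: 0644" \
    -d @test \
    "https://192.168.10.155:8443/1.0/containers/test/files?path=/tmp/test"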

I just filed https://github.com/lxc/lxd/issues/853 so we can track it,
thanks!

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Find out if one is inside a container

2015-07-14 Thread Tycho Andersen
On Mon, Jul 13, 2015 at 10:53:10AM +0200, Christoph Mathys wrote:
 Is there an unhacky way of knowing if a script runs inside a
 container? In my case, a sysV initscript that tries to load some
 kernel modules needs to know if it runs inside the container, because
 it must not load the modules in that case.
 
 The hacky way I came across so far was checking the control group of PID 1.
 
 Checking /proc/1/environ is considered OK? Or are there better ways?

root@precise:/# cat /var/run/container_type
lxc

may be a better option for you.
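
In the initscript that could look something like this (a sketch, assuming
the file is absent on a bare-metal host, as it is on stock installs):

  # skip loading kernel modules when running inside a container
  if [ -f /var/run/container_type ]; then
      exit 0
  fi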

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Moving/copying containers to a different machine with LXC

2015-07-14 Thread Tycho Andersen
On Thu, Jul 09, 2015 at 01:24:50PM +0200, Valerio Mariani wrote:
 Dear All,
 
please forgive me if my question is too basic, but I am relatively
 new to the world of lxc containers.
 
 With LXD/LXC I downloaded a centos6 amd64 image on my laptop, I made a
 container out of it, installing additional software within the
 container. All works perfectly. I would like now to use this container
 also on my desktop, where I am also running LXD/LXC, without spending
 time to create it again.
 
 Initially I looked for an export/import command for containers, but
 could not find any. I then thought of  copying the container between lxd
 instances using 'lxc copy'.
 
 I added my desktop lxd server to my laptop remotes with 'lxc remote
 add', then I added a localhost remote with 'lxc remote add localhost
 localhost'
 
 I finally tried to launch 'lxc copy localhost:centos6-psana
 IntelNUC:centos6-psana' where IntelNUC is my desktop and centos6-psana
 is the container I am trying to copy.
 However, I got an error, saying:
 error: websocket: bad handshake

You're probably hitting:

https://github.com/lxc/lxd/issues/598

if you re-add the remote as 'http://$the_ip' instead of
'http://localhost' it should work.
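
Concretely, something like this (a sketch; 'laptop' and the address are
placeholders, since your laptop's own IP isn't shown here):

  # re-add the source daemon under its real address instead of 'localhost'
  lxc remote add laptop <laptop-ip>
  lxc copy laptop:centos6-psana IntelNUC:centos6-psana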

Tycho

 Both lxd instances are version 0.13 and they both run on Ubuntu 15.04
 systems kept up to date, and on both of them I am using the lxd stable ppa.
 
 Launching the command with the debug option gave me the following output:
 
 2015/07/08 00:54:51 fingering the daemon
 2015/07/08 00:54:51 raw response:
 {type:sync,status:Success,status_code:200,metadata:{api_compat:1,auth:trusted,config:{core.trust_password:true},environment:{backing_fs:btrfs,driver:lxc,kernel_version:3.19.0-21-generic,lxc_version:1.1.2,lxd_version:0.12}}}
 2015/07/08 00:54:51 pong received
 2015/07/08 00:54:51 raw response:
 {type:sync,status:Success,status_code:200,metadata:{name:centos6-psana,profiles:[default],config:{volatile.baseImage:afae698680fcf11b915cc3b6f06b3af6c085a59efa4b368b80ddcf9d66b73e3d,volatile.eth0.hwaddr:00:16:3e:a9:ae:26},expanded_config:{volatile.baseImage:afae698680fcf11b915cc3b6f06b3af6c085a59efa4b368b80ddcf9d66b73e3d,volatile.eth0.hwaddr:00:16:3e:a9:ae:26},userdata:,status:{status:STOPPED,status_code:1,init:0,ips:null},devices:{},expanded_devices:{eth0:{hwaddr:00:16:3e:a9:ae:26,nictype:bridged,parent:lxcbr0,type:nic}},ephemeral:false,log:}}
 2015/07/08 00:54:53 fingering the daemon
 2015/07/08 00:54:53 raw response:
 {type:sync,status:Success,status_code:200,metadata:{api_compat:1,auth:trusted,config:{core.trust_password:true},environment:{backing_fs:btrfs,driver:lxc,kernel_version:3.19.0-21-generic,lxc_version:1.1.2,lxd_version:0.12}}}
 2015/07/08 00:54:53 pong received
 2015/07/08 00:54:53 raw response:
 {type:sync,status:Success,status_code:200,metadata:[/1.0/profiles/default]}
 2015/07/08 00:54:53 posting {migration:true}
  to https://localhost:8443/1.0/containers/centos6-psana
 2015/07/08 00:54:53 raw response:
 {type:async,status:OK,status_code:100,operation:/1.0/operations/a2fa7579-f692-491f-8570-b0d47b510ffb,resources:null,metadata:{control:5e2c8e9ef3bc64d665e660fc48b0b381e3dc929f073336cc6a4e77a6ad8734e7,fs:a703f1fe7570d30dd2da486370a42bc61fc7f47e40b4a4f93c4f2e69ab01ba2d}}
 2015/07/08 00:54:53 posting
 {config:{},name:centos6-psana,profiles:[default],source:{base-image:afae698680fcf11b915cc3b6f06b3af6c085a59efa4b368b80ddcf9d66b73e3d,mode:pull,operation:wss://localhost:8443/1.0/operations/a2fa7579-f692-491f-8570-b0d47b510ffb/websocket,secrets:{control:5e2c8e9ef3bc64d665e660fc48b0b381e3dc929f073336cc6a4e77a6ad8734e7,fs:a703f1fe7570d30dd2da486370a42bc61fc7f47e40b4a4f93c4f2e69ab01ba2d},type:migration}}
  to https://192.168.1.101:8443/1.0/containers
 2015/07/08 00:55:03 raw response:
 {type:async,status:OK,status_code:100,operation:/1.0/operations/45c6a173-457e-4e04-a8d5-eea6a3324995,resources:{containers:[/1.0/containers/centos6-psana]},metadata:null}
 2015/07/08 00:55:03 1.0/operations/45c6a173-457e-4e04-a8d5-eea6a3324995/wait
 2015/07/08 00:55:04 raw response:
 {type:sync,status:Success,status_code:200,metadata:{created_at:2015-07-08T00:55:03.272451527+02:00,updated_at:2015-07-08T00:55:04.147281945+02:00,status:Failure,status_code:400,resources:{containers:[/1.0/containers/centos6-psana]},metadata:websocket:
 bad handshake,may_cancel:false}}
 2015/07/08 00:55:04 Error caught in main: *errors.errorString
 error: websocket: bad handshake
 
 Am I doing something wrong? Or is 'lxc copy' the wrong way of making a
 copy of a container on a different machine?
 
 Thank you for your help
 
   Valerio
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-checkpoint of a cloned container

2015-06-16 Thread Tycho Andersen
On Sat, Jun 13, 2015 at 02:12:49PM +0100, Thouraya TH wrote:
 Hi all,
 
 Please, i'd like to have an idea about the difference between
 
 - lxc-checkpoint   container
 
  AND
 
 - lxc-checkpoint  clone-of-container

There should be no difference here.

 ?
 
  lxc-checkpoint of a cloned container, requires the original container
 is running
 or not ?

No, this is not required.

Tycho

 Thanks a lot.
 Best Regards.

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc copy error: websocket: bad handshake

2015-06-15 Thread Tycho Andersen
On Mon, Jun 15, 2015 at 04:18:36PM -0400, Wajih Ahmed wrote:
 $ lxc copy lxd:rp server1:rp
 error: websocket: bad handshake
 
 Both sides are version 0.11.
 
 How can i further troubleshoot?

Debug logs for both would be a start, although I'm not entirely sure
they'll help in this case. Are they both package builds, or builds you
did by hand?

Thanks,

Tycho

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] fetching application update out of container's space

2015-06-10 Thread Tycho Andersen
On Thu, Jun 11, 2015 at 12:07:21AM +0200, Genco Yilmaz wrote:
 On Wed, Jun 10, 2015 at 11:06 PM, Tycho Andersen 
 tycho.ander...@canonical.com wrote:
 
  On Wed, Jun 10, 2015 at 09:22:10PM +0200, Genco Yilmaz wrote:
   Hi,
  I have been playing with containers for a few days only and deployed
   several to test some networking features. I have searched on the net to
  get
   an answer but couldn't find any post/page yet. Issue is that I set up a
   small LAB
   containers of which has no internet access. If I need to install an
   application e.g apache2
   I attach to the container like;
  
   #lxc-attach -n container1
   container1#apt-get install apache2
  
   but for this to work, I add a veth peer to let the container access
  outside
   network (This isn't something I prefer to do as I need to isolate these
   containers)
 
  Why not just run an apt mirror on the isolated network?
 
   I wonder if there is any way to install this app from the master host i.e
   by using master
   host's network space but install the app on the container something like
   this imaginary command;
  
   #*lxc-run* -n container -c apt-get install apache2
  
   i.e pulling the application from repository on the master space but
  pushing
   it onto the container.
  
   There is lxc-execute, lxc-attach but they all run inside container's
  space
   which doesn't work for me.
 
  You might like the -s option to lxc-attach.
 
  Tycho
 
   Thanks,
  
   Genco.
 
 
 Hi Tycho,
  Thanks for the reply. I have tried this one now. Apparently I didn't
 notice this option:) but there seems to be an issue with name resolution.
 Not sure what I am doing wrong, though: although I am not attaching to the
 container's network namespace, the system still checks the resolv.conf file
 inside the container instead of the host's resolv.conf. As you can see, if I
 add the nameserver to the container's resolv.conf, name resolution works. Is
 this expected, or is there a missing/incorrect option in my command?

This is expected, because you're using the container's mount
namespace, and thus the tools look at the container's
/etc/resolv.conf.

Tycho

 or is
 it because of the MOUNT namespace. Because of this name resolution issue,
 apt-get also fails
 
 
 
 root@vhost3:~# lsb_release -a
 No LSB modules are available.
 Distributor ID: Ubuntu
 Description:Ubuntu 14.04.2 LTS
 Release:14.04
 Codename:   trusty
 
 root@vhost3:~# ping archive.ubuntu.com -c 1
 PING archive.ubuntu.com (91.189.91.15) 56(84) bytes of data.
 64 bytes from likho.canonical.com (91.189.91.15): icmp_seq=1 ttl=51
 time=81.6 ms
 
 --- archive.ubuntu.com ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 81.607/81.607/81.607/0.000 ms
 
 root@vhost3:~# lxc-attach -n LAB1016-co -e -s 'UTSNAME|MOUNT|PID|IPC' --
 ping archive.ubuntu.com -c 1
 ping: unknown host archive.ubuntu.com
 
 root@vhost3:~# lxc-attach -n LAB1016-co -e -s 'UTSNAME|MOUNT|PID|IPC' --
 ping 91.189.91.15 -c 1
 PING 91.189.91.15 (91.189.91.15) 56(84) bytes of data.
 64 bytes from 91.189.91.15: icmp_seq=1 ttl=51 time=80.7 ms
 
 --- 91.189.91.15 ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 80.739/80.739/80.739/0.000 ms
 
 root@vhost3:~# cat /etc/resolv.conf
 # Dynamic resolv.conf(5) file for glibc resolver(3) generated by
 resolvconf(8)
 # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
 nameserver 8.8.8.8
 search example.com
 
 root@vhost3:~# lxc-attach -n LAB1016-co
 root@LAB1016-co:~# echo nameserver 8.8.8.8  /etc/resolv.conf
 root@LAB1016-co:~# exit
 exit
 root@vhost3:~# lxc-attach -n LAB1016-co -e -s 'UTSNAME|MOUNT|PID|IPC' --
 ping archive.ubuntu.com -c 1
 PING archive.ubuntu.com (91.189.91.14) 56(84) bytes of data.
 64 bytes from orobas.canonical.com (91.189.91.14): icmp_seq=1 ttl=52
 time=87.2 ms
 
 --- archive.ubuntu.com ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 87.250/87.250/87.250/0.000 ms

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] DHCP Status on Linux Containers Images

2015-06-08 Thread Tycho Andersen
On Mon, Jun 08, 2015 at 10:19:38AM -0600, Neil Jubinville wrote:
 Hi All,  so I discovered LXD yesterday, I am very happy I did.  I love what
 I see so far.
 
 I am testing it on a DellR420 , 192 GB ram and 32 cores.   ( looking
 forward to an LVM backing store)
 
 I am still new to containers and have primarily been building Openstack and
 using KVM so I have a few questions.
 
 Qualifier:  I am running  0.7  because the Master GIT PPA throws error:
 system exit at 1.  when launching.
 
 Question:
 
 I noticed that the certain images when downloaded from linuxcontainers
 remote do not seem to grab an IP when launched.   The one in the demo of
 getting started does but others like vivid do not.
 
 If I get a shell to the container I can : root@LXC_NAME:~# ifup eth0   and
 it will get an IP through DHCP
 
 Note below test2 I ran from a vivid 64 image.
 
 +---+-+---+--+---+
 |   NAME|  STATE  |   IPV4| IPV6 | EPHEMERAL |
 +---+-+---+--+---+
 | ubuntu-32 | RUNNING | 10.0.3.54 |  | NO|
 | test  | RUNNING | 10.0.3.64 |  | NO|
 | test2 | RUNNING |   |  | NO|
 | test3 | RUNNING | 10.0.3.73 |  | NO|
 | test4 | RUNNING | 10.0.3.77 |  | NO|
 | test5 | RUNNING | 10.0.3.83 |  | NO|
 +---+-+---+--+---+
 
 Any idea why ?   Is this an init bug on some of the images?

Yes,

https://github.com/lxc/lxd/issues/581

 Also, I would like today to switch to the TRUNK but may need some help
 troubleshooting since the logs were very sparse about the error.

Likely you're hitting,

https://github.com/lxc/lxd/issues/739#issuecomment-109997079

which should be fixed tomorrow when we tag 0.11.

Tycho

 thx!
 
 Neil

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: -B backingstore equivalent?

2015-06-05 Thread Tycho Andersen
Hi Tomasz,

On Fri, Jun 05, 2015 at 07:22:25PM +0900, Tomasz Chmielewski wrote:
 Is there a -B btrfs equivalent in lxd?

Yes, if you mount /var/lib/lxd as a btrfs subvolume, it should Just
Work.
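
Roughly (a sketch, assuming the /srv btrfs filesystem on /dev/sda4 mentioned
elsewhere in this thread, so adjust the device and paths to your setup):

  # create a subvolume and mount it at /var/lib/lxd before first use
  btrfs subvolume create /srv/lxd
  mount -o subvol=lxd /dev/sda4 /var/lib/lxd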

 For example, with lxc, I would use:
 
 # lxc-create --template download --name test-container -B btrfs
 
-B backingstore
   'backingstore'  is  one  of  'dir',  'lvm', 'loop', 'btrfs',
 'zfs', or 'best'. The default is 'dir', meaning that the container root
 filesystem will be a directory under /var/lib/lxc/container/rootfs.
 
 
 How can I do the same with lxd (lxc command)? It seems to default to dir.

LVM support is Coming Soon, and making it fast and stable will likely
be a primary focus.

Tycho

 # lxc launch images:ubuntu/trusty/amd64 test-container
 
 
 -- 
 Tomasz Chmielewski
 http://wpkg.org
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: -B backingstore equivalent?

2015-06-05 Thread Tycho Andersen
On Sat, Jun 06, 2015 at 12:32:07AM +0900, Tomasz Chmielewski wrote:
 On 2015-06-06 00:19, Tycho Andersen wrote:
 
 # ls -l /var/lib/lxd
 lrwxrwxrwx 1 root root 8 Jun  5 10:15 /var/lib/lxd -> /srv/lxd
 
 Ah, my best guess is that lxd doesn't follow the symlink correctly
 when detecting filesystems. Whatever the cause, if you file a bug
 we'll fix it, thanks.
 
 Can you point me to the bug filing system for linuxcontainers.org?

We use github for lxc/lxd, so https://github.com/lxc/lxd/issues/new

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: -B backingstore equivalent?

2015-06-05 Thread Tycho Andersen
On Sat, Jun 06, 2015 at 12:11:27AM +0900, Tomasz Chmielewski wrote:
 On 2015-06-06 00:00, Tycho Andersen wrote:
 
 As I've checked, this is not the case (the container is created in a
 directory, not in btrfs subvolume; lxc-create -B btrfs creates it in a
 subvolume).
 
 Can you file a bug with info to reproduce? It should work as of 0.8.
 
 Before I file a bug report - that's how it works for me - /var/lib/lxd/ is a
 symbolic link to /srv/lxd, placed on a btrfs filesystem:
 
 # ls -l /var/lib/lxd
 lrwxrwxrwx 1 root root 8 Jun  5 10:15 /var/lib/lxd -> /srv/lxd

Ah, my best guess is that lxd doesn't follow the symlink correctly
when detecting filesystems. Whatever the cause, if you file a bug
we'll fix it, thanks.

 # mount|grep /srv
 /dev/sda4 on /srv type btrfs
 (rw,noatime,device=/dev/sda4,device=/dev/sdb4,compress=zlib)
 
 
 # lxc launch images:ubuntu/trusty/amd64 test-image
 Creating container...done
 Starting container...done
 error: exit status 1
 
 Note that it errored when trying to start the container - I have to add
 lxc.aa_allow_incomplete = 1; otherwise, it won't start (is there some
 /etc/lxc/default.conf equivalent for lxd, where this could be set?).

Yes, there is a default profile that is applied if you don't specify
one, you can edit it with:

lxc profile edit default
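
In that editor you can pass raw LXC config down to every container; assuming
LXD's raw.lxc pass-through key (a sketch, so double-check the key name):

  config:
    raw.lxc: lxc.aa_allow_incomplete = 1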

Tycho

 However, the container is already created in a directory, so I don't think
 the above error matters:
 
 # btrfs sub list /srv|grep lxd
 # btrfs sub list /srv|grep test-image
 #
 
 
 -- 
 Tomasz Chmielewski
 http://wpkg.org
 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] how to get veth interface(s) id in LXD?

2015-06-05 Thread Tycho Andersen
Hi Jonathan,

On Fri, Jun 05, 2015 at 11:23:05AM -0400, Gregoire, Jonathan (520851) wrote:
 Hi,
 
 Does anybody knows how to get the veth interface(s) linked to an LXD 
 container. I'm able to get it in LXC but not in LXD.

It turns out this isn't possible right now. However, it seems like a
reasonable thing to want to do, so I implemented it:

https://github.com/lxc/lxd/pull/738

Tycho

 In LXC:
 
 jonathan@lxd01:~$ sudo lxc-info -n container1
 Name:   container1
 State:  RUNNING
 PID:7160
 IP: 10.0.3.142
 CPU use:0.47 seconds
 BlkIO use:  4.00 KiB
 Memory use: 17.15 MiB
 KMem use:   0 bytes
 Link:   veth32PUI2
 TX bytes:  1.57 KiB
 RX bytes:  1.69 KiB
 Total bytes:   3.26 KiB
 
 
 Jonathan
 

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd: -B backingstore equivalent?

2015-06-05 Thread Tycho Andersen
On Fri, Jun 05, 2015 at 11:36:37PM +0900, Tomasz Chmielewski wrote:
 On 2015-06-05 22:58, Tycho Andersen wrote:
 Hi Tomasz,
 
 On Fri, Jun 05, 2015 at 07:22:25PM +0900, Tomasz Chmielewski wrote:
 Is there a -B btrfs equivalent in lxd?
 
 Yes, if you mount /var/lib/lxd as a btrfs subvolume, it should Just
 Work.
 
 As I've checked, this is not the case (the container is created in a
 directory, not in btrfs subvolume; lxc-create -B btrfs creates it in a
 subvolume).

Can you file a bug with info to reproduce? It should work as of 0.8.

Thanks,

Tycho

 lxd  0.9-0ubuntu2~ubuntu14.04.1~ppa1
 
 
 -- 
 Tomasz Chmielewski
 http://wpkg.org
 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the best way to report bug issues with LXD rest server?

2015-05-27 Thread Tycho Andersen
On Tue, May 26, 2015 at 10:12:37PM -0700, Kevin LaTona wrote:
 
 On May 26, 2015, at 4:37 PM, Tycho Andersen tycho.ander...@canonical.com 
 wrote:
 
  I just wrote http://tycho.ws/blog/2015/05/lxd-python.html which works
  fine for me on Ubuntu.
 
 In Tycho's blog post he was connecting to the LXD server locally.
 
 When one is logging in via a remote client to a LXD rest server what files 
 would be used by the remote client software for the SSL connection given this 
 is a self signed cert?

I'm not sure I understand the question. The procedure should be
exactly the same over any TCP connection, whether to localhost or not.

 -Kevin
 
 
 
 

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Trying out migration, getting error: checkpoint failed

2015-05-27 Thread Tycho Andersen
On Wed, May 27, 2015 at 06:26:56PM +0100, Giles Thomas wrote:
 Hi Tycho,
 
 Sorry again for the slow turnaround!
 
 On 15/05/15 18:59, Tycho Andersen wrote:
 I suspect it still can't find criu and it just isn't finding the binary.
 Can you symlink it into /bin just to be sure?
 
 That didn't help.   However, installing criu from their github repo seems to
 have moved things on a bit.   Now, instead of getting error: checkpoint
 failed, I get error: restore failed.   Inside /var/log/lxd/migratee,
 there is a file called migration_dump_2015-05-27T17:18:45Z.log, about 536K
 long.  I've not attached it, as I figure that would be pretty annoying for
 everyone else on the list, but I can sent it directly to you if it would be
 useful.   There is also a 78K lxc.log.
 
 On the destination machine, there's also a /var/log/lxd/migratee/lxc.log,
 which is significantly shorter; here are the contents:
 
 lxc 1432744458.034 INFO lxc_confile -
 confile.c:config_idmap:1390 - read uid map: type u nsid 0 hostid 10
 range 65536
 lxc 1432744458.034 INFO lxc_confile -
 confile.c:config_idmap:1390 - read uid map: type g nsid 0 hostid 10
 range 65536
 lxc 1432744544.115 INFO lxc_confile -
 confile.c:config_idmap:1390 - read uid map: type u nsid 0 hostid 10
 range 65536
 lxc 1432744544.115 INFO lxc_confile -
 confile.c:config_idmap:1390 - read uid map: type g nsid 0 hostid 10
 range 65536
 lxc 1432744765.397 INFO lxc_confile -
 confile.c:config_idmap:1390 - read uid map: type u nsid 0 hostid 10
 range 65536
 lxc 1432744765.398 INFO lxc_confile -
 confile.c:config_idmap:1390 - read uid map: type g nsid 0 hostid 10
 range 65536
 lxc 1432747103.562 INFO lxc_confile -
 confile.c:config_idmap:1390 - read uid map: type u nsid 0 hostid 10
 range 65536
 lxc 1432747103.563 INFO lxc_confile -
 confile.c:config_idmap:1390 - read uid map: type g nsid 0 hostid 10
 range 65536
 lxc 1432747128.877 ERROR lxc_criu - criu.c:criu_ok:333 -
 couldn't find devices.deny = c 5:1 rwm

Ah, this is a sanity check to make sure that various container config
properties are set. It looks like things aren't set on the destination
host correctly; I think there was a bug with this in the 0.9 client,
fixed by 6b5595d03dff7d360f05fa48ee6198d71e7f1ef4, so you may want to
upgrade to 0.10.

Tycho

 
 All the best,
 
 Giles
 
 -- 
 Giles Thomas gi...@pythonanywhere.com
 
 PythonAnywhere: Develop and host Python from your browser
 https://www.pythonanywhere.com/
 
 A product from PythonAnywhere LLP
 17a Clerkenwell Road, London EC1M 5RD, UK
 VAT No.: GB 893 5643 79
 Registered in England and Wales as company number OC378414.
 Registered address: 28 Ely Place, 3rd Floor, London EC1N 6TD, UK
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Trying out migration, getting error: checkpoint failed

2015-05-27 Thread Tycho Andersen
On Wed, May 27, 2015 at 06:37:02PM +0100, Giles Thomas wrote:
 On 27/05/15 18:33, Tycho Andersen wrote:
 On Wed, May 27, 2015 at 06:26:56PM +0100, Giles Thomas wrote:
  lxc 1432747128.877 ERROR lxc_criu - criu.c:criu_ok:333 -
 couldn't find devices.deny = c 5:1 rwm
 Ah, this is a sanity check to make sure that various container config
 properties are set. It looks like things aren't set on the destination
 host correctly; I think there was a bug with this in the 0.9 client,
 fixed by 6b5595d03dff7d360f05fa48ee6198d71e7f1ef4, so you may want to
 upgrade to 0.10.
 
 Do I have to build from source for that?  I had to rebuild the machines to
 run the test, and used the ppa:ubuntu-lxc/lxd-git-master, but that's 0.9 and
 it looks like it's not been updated for a few weeks.

Yes, unfortunately we still don't have automated builds for the ppas. I
think stgraber will be uploading 0.10 later today when the official
release announcement goes out, so you can just wait until then if you
want.

Tycho

 
 All the best,
 
 Giles
 
 -- 
 Giles Thomas gi...@pythonanywhere.com
 
 PythonAnywhere: Develop and host Python from your browser
 https://www.pythonanywhere.com/
 
 A product from PythonAnywhere LLP
 17a Clerkenwell Road, London EC1M 5RD, UK
 VAT No.: GB 893 5643 79
 Registered in England and Wales as company number OC378414.
 Registered address: 28 Ely Place, 3rd Floor, London EC1N 6TD, UK
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the best way to report bug issues with LXD rest server?

2015-05-26 Thread Tycho Andersen
On Tue, May 26, 2015 at 06:37:59PM -0700, Kevin LaTona wrote:
 
 On May 26, 2015, at 4:37 PM, Tycho Andersen tycho.ander...@canonical.com 
 wrote:
 
  Hi Kevin,
  
  On Mon, May 25, 2015 at 07:38:12PM -0700, Kevin LaTona wrote:
  
  On May 25, 2015, at 12:16 PM, Kevin LaTona li...@studiosola.com wrote:
  
  The simplest way I found so far to connect from a Mac running 10.8.5 to 
   the LXD 0.9 rest server is using a Python Subprocess call via SSH into 
  the host machine which runs a Curl call to the LXD server which then 
  returns the JSON/Dict object.
  
  While it sounds like a round about way to get there, it's the only way I 
  have found so far to bypass the surrounding issue of getting TLS1_2 to 
  run on OS X  10.8.5 and or Python 2.7.9.
  
  
  
  Well that was one really short lived idea. 
  
  Making those ssh based subprocess calls to the host is just not cutting it 
   for me after all, even if it does work, the overhead cost to do them kind 
  of kills the idea for all but simple use.
  
  I was really wanting to stick by and use the LXD Rest server and not have 
  to re-invent the wheel here.
  
  
  Guess it's not going to happen, so instead I've decided to create a Python 
  based Tornado Rest server running on the host and calling the LXD Cli 
  calls.
  
  This way I can back the SSL library down from the TLS1_2 idea. I guess 
  some need that level of security, for now I can live without it.
  
  
  Plus Tornado opens up some other areas to look at doing some container 
  management like ideas.
  
  So this may turn out better over the long haul until LXD matures and 
  becomes a bit more solid.
  
  
  
  
  
  If there is any Python users on this list using the Requests module and 
  has it working with both TLS1_2 and the LXD rest server, please share 
  your process.
  
  
  Again if there is any Pythonista on this LXC mailing list who has been 
  able to get TLS1_2 wrapped and working with Requests.
  
  I just wrote http://tycho.ws/blog/2015/05/lxd-python.html which works
  fine for me on Ubuntu.
 
 
 Looks good should help folks with correct machine setups to see how easy it 
 can be.
 
 
 
  
  I do have an old OSX system laying around so I tried it there and got
  an SSL error. It looks like the version of SSL it has only has TLS 1.0
  built in. I don't really know anything about OSX, but the obvious
  solution seems to be to use the above program and a version of openssl
  that has TLS 1.2 compiled in. Perhaps upgrading OSX or using some
   package manager to give you a new libssl would work.
 
 
 It appears the big road block here right now is Apple's use of an outdated 
 OpenSSL library that makes using TLS1_2 impossible without access to a newer 
 version of OpenSSL.
 
 Maybe that is possible with 10.10 or even 10.9, but right now I need to keep 
 this machine frozen at 10.8.5.
 
 
 The pylxd app mentioned in your blog looks interesting since it's using unix 
 domain sockets.
 
 If that ends up getting access to lxc calls without having to make any kind of 
 a subprocess call to command line, it may turn out to be a tad bit faster 
 when interfacing with this Tornado rest server I am working on.
 
 
 It's pretty clear to me now that if anyone has any client that can not use 
 TLS1_2 that the only way to efficiently access a LXD server will be by running 
 their own server on the host as well.
 
 Or totally bypassing LXD and go back to using legacy LXC calls.
 
 
 If there is any Mac users on the list that know of a way that allows OS X 
 10.8.5 and Python 2.7.10 to use newer versions of OpenSSL, let me know how 
 you did it, if you care to share.
 
 
 Tycho ….thanks for looking into this and sharing what you found out.

Another option would be to use socat:

https://github.com/raharper/lxd_tools/blob/master/setup.sh#L19
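
For code running on the host itself, the unix socket is also reachable
directly without any TLS; a rough Python 3 sketch (not the pylxd API; it
assumes the caller may open /var/lib/lxd/unix.socket, i.e. root or a
member of the group that owns the socket):

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # Plain HTTP over a unix domain socket instead of TCP.
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/var/lib/lxd/unix.socket")
conn.request("GET", "/1.0")
print(conn.getresponse().read())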
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the best way to report bug issues with LXD rest server?

2015-05-26 Thread Tycho Andersen
Hi Kevin,

On Mon, May 25, 2015 at 07:38:12PM -0700, Kevin LaTona wrote:
 
 On May 25, 2015, at 12:16 PM, Kevin LaTona li...@studiosola.com wrote:
 
  The simplest way I found so far to connect from a Mac running 10.8.5 to the 
  LXD 0.9 rest server is using a Python Subprocess call via SSH into the host 
  machine which runs a Curl call to the LXD server which then returns the 
  JSON/Dict object.
  
  While it sounds like a round about way to get there, it's the only way I 
  have found so far to bypass the surrounding issue of getting TLS1_2 to run 
  on OS X  10.8.5 and or Python 2.7.9.
  
 
 
 Well that was one really short lived idea. 
 
 Making those ssh based subprocess calls to the host is just not cutting it 
 for me after all, even if it does work, the overhead cost to do them kind of 
 kills the idea for all but simple use.
 
 I was really wanting to stick by and use the LXD Rest server and not have to 
 re-invent the wheel here.
 
 
 Guess it's not going to happen, so instead I've decided to create a Python 
 based Tornado Rest server running on the host and calling the LXD Cli calls.
 
 This way I can back the SSL library down from the TLS1_2 idea. I guess some 
 need that level of security, for now I can live without it.
 
 
 Plus Tornado opens up some other areas to look at doing some container 
 management like ideas.
 
 So this may turn out better over the long haul until LXD matures and becomes 
 a bit more solid.
 
 
 
 
  
  If there is any Python users on this list using the Requests module and has 
  it working with both TLS1_2 and the LXD rest server, please share your 
  process.
 
 
 Again if there is any Pythonista on this LXC mailing list who has been able 
 to get TLS1_2 wrapped and working with Requests.

I just wrote http://tycho.ws/blog/2015/05/lxd-python.html which works
fine for me on Ubuntu.

I do have an old OSX system laying around so I tried it there and got
an SSL error. It looks like the version of SSL it has only has TLS 1.0
built in. I don't really know anything about OSX, but the obvious
solution seems to be to use the above program and a version of openssl
that has TLS 1.2 compiled in. Perhaps upgrading OSX or using some
package manager to give you a new libssl would work.
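
For reference, the approach in that post boils down to something like the
following sketch (the address and file names are placeholders; it assumes
you've generated a client cert/key pair, added the cert to the daemon's
trust store, e.g. with "lxc config trust add", and that the local OpenSSL
actually speaks TLS 1.2):

import requests

# Placeholder host and paths -- adjust to your setup.
LXD = "https://192.168.0.50:8443"
CLIENT_CERT = ("client.crt", "client.key")

# verify=False because LXD's server certificate is self-signed;
# the client cert/key pair is what authenticates us to the daemon.
r = requests.get(LXD + "/1.0", cert=CLIENT_CERT, verify=False)
print(r.json())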

Tycho

 It would really be great if you could share a blog link or even a bit code as 
 it's one messy thing to get all those parts working. 
 
 
 So in the end LXD rest server is working, but sure is one tough nut to crack 
 right now… hopefully some of these TLS like setup issues will smooth out over 
 time.
 
 -Kevin
 
 
 
 
 
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the best way to report bug issues with LXD rest server?

2015-05-22 Thread Tycho Andersen
On Fri, May 22, 2015 at 09:32:05PM -0700, Kevin LaTona wrote:
 
 On May 22, 2015, at 9:13 PM, Tycho Andersen tycho.ander...@canonical.com 
 wrote:
 
  On Fri, May 22, 2015 at 05:14:06PM -0700, Kevin LaTona wrote:
  
  This past week or so I ran into an issue of not being able to connect a 
  test LXD rest server on my local network.
  
  I've tested this problem out from pretty much every angle I can think of.
  
  Every thing from fresh OS, server, SSL lib installs to upgrades of current 
  running apps on my machines.
  
  
  Pretty much unless I am missing some small fundamental piece that is 
  preventing current shipping vivid server to allow connections to the LXD 
  rest server.
  
  My take is there is a bug .
  
  If this true, what is the best way to let the LXC team know about this to 
  see how to get to next step?
  
  
  To sum it up I am able to connect to a public LXD rest server.
  
  # from vivid container -- public LXD server ( 
  container to public )
  curl -k https://images.linuxcontainers.org/1.0/images
  # {status: Success, metadata: [/1.0/images/e7ae410ee8abeb6
  
  
  No matter how and from what angle I try connecting to a local test LXD 
  rest server it is having connections issues.
  
  # vivid container 10.0.3.5 -- 192.168.0.50:8443 ( container to host 
  machine )
  # this container can ping 192.168.0.50 
  curl -k https://192.168.0.50:8443/1.0/images
  # curl: (35) error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad 
  certificate
  
  You probably need to pass --cert and --key to curl as well; you can
  see examples of this in the /tests directory.
 
 
 I'll look into that to see if that helps.
 
 
 Yet I am able to hit the images.linuxcontainers.org server from all ….

Yes, images.linuxcontainers.org is not a real LXD server, it just
implements parts of the rest API (the public bits).

 Using OS X, Ubuntu host and from Container and all with the same Curl command 
 calls.
 
 Which has me wondering why that server and not my local LXD rest server?
 
 So far makes zero sense to me and the Rest server should make things simpler 
 in the end.
 
 
 
 Unless I am missing something in configs or settings somewhere else… or 
 there is a bug. 
 
 
 I've chased enough code problems to know when you hammer on it from all 
 possible ways.
 
 And it's working part of the time….. something is off as it's just not 
 making sense.
 
 Plus I am not seeing any mention in LXD docs about need for cert and keys for 
 this kind of call.

I suppose there's no reason we couldn't allow requests without a
client cert to work for unauthenticated requests; I don't anticipate
it being a hugely common use case, though, as most people should be
using a client or API to access LXD.

 
 If I need them for the local server I would need them for the public server 
 as well since Linuxcontainers is using self signed cert on that site.

images.linuxcontainers.org shouldn't be using a self signed cert; LXD
does, though.

Tycho

 
 
 -Kevin
 
 
 
 
 
 
 
 
 
  Tycho
  
  
  
  # OS X term window -- vivid server(same 192.168.x.x 
  network)
  curl -k https://192.168.0.50:8443/1.0/images
  # curl: (35) error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 
  alert protocol version
  
  
  
  If any one has any ideas or suggestions please send them along.
  
  -Kevin
  
  
  
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the best way to report bug issues with LXD rest server?

2015-05-22 Thread Tycho Andersen
On Fri, May 22, 2015 at 05:14:06PM -0700, Kevin LaTona wrote:
 
 This past week or so I ran into an issue of not being able to connect a test 
 LXD rest server on my local network.
 
 I've tested this problem out from pretty much every angle I can think of.
 
 Every thing from fresh OS, server, SSL lib installs to upgrades of current 
 running apps on my machines.
 
 
 Pretty much unless I am missing some small fundamental piece that is 
 preventing the current shipping vivid server from allowing connections to the LXD rest 
 server.
 
 My take is there is a bug .
 
 If this true, what is the best way to let the LXC team know about this to see 
 how to get to next step?
 
 
 To sum it up I am able to connect to a public LXD rest server.
 
 # from vivid container -- public LXD server ( container 
 to public )
 curl -k https://images.linuxcontainers.org/1.0/images
 # {status: Success, metadata: [/1.0/images/e7ae410ee8abeb6
 
 
 No matter how and from what angle I try connecting to a local test LXD rest 
 server it is having connections issues.
 
 # vivid container 10.0.3.5 -- 192.168.0.50:8443 ( container to host 
 machine )
 # this container can ping 192.168.0.50 
 curl -k https://192.168.0.50:8443/1.0/images
 # curl: (35) error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad 
 certificate

You probably need to pass --cert and --key to curl as well; you can
see examples of this in the /tests directory.
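
For the requests users following the thread, the same idea looks roughly
like this (a sketch; the cert/key file names are placeholders for whatever
pair you generated and added to LXD's trust store):

import requests

# These two files play the role of curl's --cert/--key.
r = requests.get("https://192.168.0.50:8443/1.0/images",
                 cert=("client.crt", "client.key"),
                 verify=False)  # LXD's server cert is self-signed
print(r.text)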

Tycho

 
 
 # OS X term window -- vivid server(same 192.168.x.x network)
 curl -k https://192.168.0.50:8443/1.0/images
 # curl: (35) error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert 
 protocol version
 
 
 
 If any one has any ideas or suggestions please send them along.
 
 -Kevin
 
 
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] local lxc commands stuck

2015-05-21 Thread Tycho Andersen
On Wed, May 20, 2015 at 11:59:49AM -0700, Tycho Andersen wrote:

 It seems like we should validate that raw.lxc values are actually
 valid lxc config values before we allow you to insert them; I'll work
 on a patch for this today.

https://github.com/lxc/lxd/pull/689 should be a better UX here, but
I'd still like to see what the config was in your db, just to confirm
that's what is actually causing the problem.

Tycho

 Thanks,
 
 Tycho
 
  Tim
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Getting the LXD rest api to play nice?

2015-05-20 Thread Tycho Andersen
Hi Kevin,

On Wed, May 20, 2015 at 09:59:33AM -0700, Kevin LaTona wrote:
 New twist: I would have assumed that using a browser would get some kind of 
 response from LXD rest server.
 
 Safari would not connect.
 
 Mozilla's Firefox did not like the self signed cert and made me approve it, 
 which I need.
 
 Then when I tried to hit LXD rest server  and I got this error message back
 
 An error occurred during a connection to 192.168.0.50:8443. SSL peer cannot 
 verify your certificate. (Error code: ssl_error_bad_cert_alert)
 
 The page you are trying to view cannot be shown because the authenticity 
 of the received data could not be verified.
 Please contact the website owners to inform them of this problem.
 
 
 
 Which seems to be in keeping with all the other issues I have been having 
 going direct using other methods.
 
 
 
 Which leads me back to: is anyone getting in to the LXD rest server?
 
 If so, how are you doing it?
 
 
 As it seems to me like the SSL cert for the LXD rest server is having issues 
 right now.

The SSL cert LXD uses is generated and not signed by any CA, so your
browser won't respect it (of course, you can click past all the auth
warnings in your browser and actually do a GET if you want).

Tycho

 From all I've read it seems more like a server problem and less of a client 
 problem happening here.
 
 But by no means am I SSL expert on the finer points of SSL issues deep under 
 the hood.
 
 
 Thanks
 -Kevin
 
 
 
 
 
 
 
 
 On May 20, 2015, at 8:50 AM, Kevin LaTona li...@studiosola.com wrote:
 
  
  Can I ask is any one else on this list using the LXD rest api  calls yet?
  
  If yes, is it working for you?
  
  If yes, what OS and App are you using to do this with?
  
  Thanks
  -Kevin
  
  
  
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd git master - lxc stop hangs for 15.04 guest in 14.04 host

2015-05-20 Thread Tycho Andersen
Hi Sean,

On Wed, May 20, 2015 at 03:14:43PM -0400, Sean McNamara wrote:
 I am using an Ubuntu 14.04 host (upstart init) running linux-generic amd64
 kernel, with lxd-spawned unprivileged lxc container running 15.04 (systemd
 init). Latest 15.04 image with no custom software inside the container, and
 nothing installed but the barebones.
 
 I applied a profile specifying a custom bridge device and static IP; this
 config works fine for multiple 14.04 containers. Each of these 14.04
 containers can be successfully started and stopped at will.
 
 But when I create a 15.04 amd64 guest, it will start successfully and is
 usable, but it won't *stop*. Even trying to kill the container processes
 from the host doesn't seem to get rid of it. The container just keeps
 running unless I reboot the system. `lxc stop name` just hangs forever.
 There are only systemd processes and /bin/bash running in the container.
 
 Should the lxc command have a '-f' flag to send SIGKILL to all the child
 processes of a container in case one of them is hung?

You can do lxc stop $container --force, which does effectively this.

 There is nothing useful in the lxc or lxd logs. All it says is that the
 container changed to RUNNING state. Nothing about it changing to any
 stop/stopping state.
 
 P.S. - I looked in dmesg, and none of the stuck processes have generated
 any kernel oops/warn/etc.

I've seen this in some cases too, but not been able to reproduce it. I
think stgraber looked at it a while ago, cc-ing him here to see what
he knows.

Tycho

 Thanks,
 
 Sean

 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Getting the LXD rest api to play nice?

2015-05-19 Thread Tycho Andersen
Hi Kevin,

On Tue, May 19, 2015 at 04:04:11PM -0700, Kevin LaTona wrote:
 
 Here are the last bits of my test….. as best I know from the requests docs this should 
 work to get some kind of a response.
 
 So far no matter how I try to connect to my LXD rest server I can't get past 
 a ping… so at least it's running at some level, which is a start.
 
 
 Any thoughts or ideas much appreciated from anyone.
 
 
 import requests
 
 # r = requests.get('https://192.168.0.50:8443/')
 # requests.exceptions.SSLError: [Errno 1] _ssl.c:503: error:1407742E:SSL 
 routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
 
 # r = requests.get('https://192.168.0.50:8443/1.0/', verify=True)
 # requests.exceptions.SSLError: [Errno 1] _ssl.c:503: error:1407742E:SSL 
 routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
 
 
 # r = requests.get('https://192.168.0.50:8443/1.0/',verify=False)
 # requests.exceptions.SSLError: [Errno 1] _ssl.c:503: error:1407742E:SSL 
 routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version

LXD requires TLS 1.2; it looks like perhaps the build of OpenSSL your
python-requests is linked against doesn't provide it.
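
A quick way to check is to see what the ssl module itself was built
against; if PROTOCOL_TLSv1_2 is missing there, no requests option will get
you TLS 1.2 from that build (a small sketch, works on Python 2 or 3):

import ssl

# Which OpenSSL this Python is linked against, and whether it
# exposes TLS 1.2 at all (older builds simply lack the attribute).
print(ssl.OPENSSL_VERSION)
print(hasattr(ssl, "PROTOCOL_TLSv1_2"))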

Tycho

 print(r.text)
 
 
 
 Thanks
 -Kevin
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Getting the LXD rest api to play nice?

2015-05-19 Thread Tycho Andersen
On Tue, May 19, 2015 at 01:05:08PM -0700, Kevin LaTona wrote:
 
 
 
 Right now when I try sending in a test request call to the LXD rest api 
 using the Python Requests library, it blows up.
 
 At this point no idea if what is going on is a Request library error or the 
 LXD api webserver is choking here.
 
 Any one on this list using the LXD rest API yet?
 
 
 If so, any chance you might share how you have working?
 
 Better yet, is anyone on the list doing this using the Python Requests module?
 
 Hate to waste time drilling down into requests, if it's a LXD api issue so 
 early in the release cycle.

There is http://github.com/zulcss/pylxd and the in tree
/scripts/lxd-images, although neither use the requests module.

If you can paste your code and error, perhaps we can provide some
insight.

Tycho

 
 Thanks
 -Kevin
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] local lxc commands stuck

2015-05-14 Thread Tycho Andersen
Hi Tim,

On Fri, May 15, 2015 at 03:43:40PM +1200, Tim Penhey wrote:
 Hi All,
 
 I was messing around with lxd/lxc the other day, and seem to have wedged
 my system somewhat.
 
 Going blindly from documentation in a blog post, I did the following:
 
 $ lxc launch images:ubuntu/trusty/amd64 trusty-lxd-test
 Creating container...done
 Starting container...error: setting config item for the container failed

That's no good. If you're running lxd 0.9 (our latest release as of
Tuesday), can you paste the output of:

lxc info trusty-lxd-test --show-log

 After seeing the error, I tried with sudo:
 
 $ sudo lxc launch images:ubuntu/trusty/amd64 trusty-lxd-test
 Creating container...error: Container exists

sudo shouldn't make a difference here, since the client doesn't do
anything based on what uid it is run as.

 I then went looking for the list:
 
 $ sudo lxc list
 error: setting config item for the container failed
 
 I get the same error without the sudo on the 'lxc list'
 
 
 I can see the container directory in /var/lib/lxd/lxc, but not sure why
 the lxc command has gotten itself into this state.
 
 Q) Should the default install be able to create unprivileged containers?

Yes, all containers are unprivileged by default.

 Q) How can I fix my local lxc command so it doesn't error?

Good question; at this point it looks like your db is probably wedged
and your best bet is to delete /var/lib/lxd (or at least the db) and
try again.

However, it would be useful to have a copy of the db so I could poke
around and see what happened. Also the log output above would be
useful.

Thanks,

Tycho

 Cheers,
 Tim
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Juju use of lxd

2015-05-14 Thread Tycho Andersen
On Fri, May 15, 2015 at 04:03:41PM +1200, Tim Penhey wrote:
 Me again,
 
 Juju uses lxc by default for the local provider. I'm intending to drive
 a piece of work along with a few other core developers to write a lxd
 provider.
 
 To get things started, I need to work out how to do the following:
 
 1) Create a template container as the basis for the juju machines.
This needs to pass a cloud-init file in.

Although we vaguely mention userdata in the spec, it doesn't look like
there is any actual API for setting it (and no backend that actually
puts it in the right place). I've filed
https://github.com/lxc/lxd/issues/665 to track it.

 2) Create a clone of the template passing in another cloud-init file.

We have a templating mechanism described here, that could do something
like this:

https://github.com/lxc/lxd/blob/master/specs/image-handling.md

...although I think it too is not yet implemented.

 We'd like to use the Go library that wraps the REST api already.
 
 
 With the old local provider, we were able to catch the output during the
 creation of the template container in order to make sure the container
 creation finished successfully (we added a simple upstart script to
 shutdown the container at the end of cloud init) as the container had to
 be stopped in order for the clone that follows to work.
 
 Q) is this possible in user space?  Ideally we want to avoid asking the
 user for their password to create the containers as root (as we
 currently do).

Yes, LXD runs as root so when you talk to the daemon as a user, LXD
itself can do root things.

Tycho

 Thanks in advance for suggestions or pointers.
 
 Tim
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Trying out migration, getting error: checkpoint failed

2015-05-08 Thread Tycho Andersen
Hi Giles,

Sorry for the delay.

On Thu, May 07, 2015 at 12:32:58PM +0100, Giles Thomas wrote:
 Hi Tycho,
 
 On 06/05/15 16:29, Tycho Andersen wrote:
 Sorry, I did just find one notable exception with the current git master:
 liblxc doesn't complain when execing criu fails. Do you have criu installed
 in a place where liblxc can find it? I posted a patch to fix this
 particular case, but it seems likely that's where your problem is. Tycho
 
 It's installed in /usr/sbin/criu -- the lxc monitor is running as root, so
 that should be OK, right?

I think so, but obviously something is wrong :). If you cat
/proc/`pidof lxd`/environ, is /usr/sbin in its path? It may be worth
upgrading to the lxd/lxd-client from git master; I wrote a patch a few
days ago so you can do:

lxc info migratee --show-log

and get the lxc log output, which should have the error you're
experiencing.

Tycho

 
 All the best,
 
 Giles
 
 -- 
 Giles Thomas gi...@pythonanywhere.com
 
 PythonAnywhere: Develop and host Python from your browser
 https://www.pythonanywhere.com/
 
 A product from PythonAnywhere LLP
 17a Clerkenwell Road, London EC1M 5RD, UK
 VAT No.: GB 893 5643 79
 Registered in England and Wales as company number OC378414.
 Registered address: 28 Ely Place, 3rd Floor, London EC1N 6TD, UK
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Do you have LXD working with Ubuntu 15.04 server?

2015-05-08 Thread Tycho Andersen
On Fri, May 08, 2015 at 12:35:54PM -0700, Kevin LaTona wrote:
 
 So far LXD is not working for me, is it working for you?
 
 In a nutshell I downloaded new Ubuntu 15.04 server.
 
 Installed this on fresh clean server, nothing else is on this machine.
 
 I followed Stephane's blog post here line by line
 
 https://www.stgraber.org/2015/04/21/lxd-getting-started/
 
 After several days of attempts I get nothing but errors.
 
 
 This is an example of the last attempt.
 
 lxc launch images:ubuntu/trusty/i386 ubuntu-32
 Creating container...error: Get http://unix.socket/1.0: dial unix 
 /var/lib/lxd/unix.socket: connection refused

Looks like LXD isn't running. What if you start it and try again?

sudo service lxd start

Tycho

 
 
 So my question is not that I am looking for direct answers to the problem.
 
 Rather what I am looking for what now is.
 
 Has anyone else taken a clean server and installed 15.04 and LXD on it using 
 the current apt-get calls?
 
 
 If you are getting it to work under this kind of install, can you point me to 
 any web based docs that shows how you did that?
 
 
 Thanks
 -Kevin
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD

2015-05-08 Thread Tycho Andersen
Hi Jason,

[cc-ing lxc-users]

On Fri, May 08, 2015 at 06:18:11PM -0400, Jason Rotella wrote:
 Hello Tycho,
 
 I saw your LXD talk:
 
 https://www.youtube.com/watch?v=5MNy9OoiJ70 
 https://www.youtube.com/watch?v=5MNy9OoiJ70
 
 and the following site:
 
 https://linuxcontainers.org/lxd/introduction 
 https://linuxcontainers.org/lxd/introduction
 
 I understand participation in the LXD project is welcome and I’d like to know 
 more about the roadmap for LXD and the features that will be made available, 
 as well as information about the development community, etc.

Definitely! I'm not sure that we have any real roadmap document, but
we work via the github issues, which stgraber keeps nicely organized
into milestones:

https://github.com/lxc/lxd/milestones

Our release cadence for the foreseeable future will be every two weeks
(0.9 will come out on the 12th), so that we can hopefully get features
of our rapidly evolving codebase into the hands of interested users.

r.e. community, hallyn, stgraber, and I (tych0) all idle in
#lxcontainers on freenode and we read the lxc-{users,devel} mailing
lists, so those are probably the best ways to get in touch.

Development for the underlying migration code takes place on the CRIU
list: https://lists.openvz.org/mailman/listinfo/criu

 
 Are you the person to speak with about the above topics?

Sure, or any of the lxd team. I've cc-d lxc-users on this mail, which
will let any of us respond in case I am incapacitated by an angry
wombat.

Tycho

 
 Thanks,
 Jason Rotella
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Trying out migration, getting error: checkpoint failed

2015-05-06 Thread Tycho Andersen
Hi Giles,

On Tue, May 05, 2015 at 05:10:56PM +0100, Giles Thomas wrote:
 Hi Tycho,
 
 On 05/05/15 16:50, Tycho Andersen wrote:
 Can you check the lxd stderr by chance (probably lives in /var/log
 somewhere depending on what init system you're using)? I suspect that
 liblxc is rejecting dumping the container in its internal predump checks,
 but the above log doesn't say why unfortunately. Sorry for all the
 confusion, the logging stuff here is still a bit of a mess, although a bit
 better on the current lxd master.
 
 Oddly, there didn't appear to be one; find /var/log -name \*lxd\* just
 found /var/log/lxd.  Nothing relevant-looking in /var/log/upstart/ apart
 from lxc-net.log, which has an Address already in use error:
 
 dnsmasq: failed to create listening socket for 10.0.3.1: Address already
 in use
 Failed to setup lxc-net.
 
 Doubly-oddly, there's a /etc/init/lxd.conf *and* a /etc/init.d/lxd,
 which confuses me a little.  Does that not mean that both init and upstart
 will try to start it?  (My knowledge of the workings of init systems is not
 as in-depth as I would like.) Should I remove one of them then change the
 remaining one to write stdout/err somewhere sensible?

You could, but it may be easier to just stop the lxd service and run
it manually so that it writes stderr to the terminal you're using.

Looking at the code path, it looks like there are a few (really
unlikely) ways it could fail without writing anything to the log, such
as OOM or not being able to make a temporary directory; but it's root,
so as long as you have enough disk/ram it /should/ die with some error
message. If you can't find anything, it may be worth building a
liblxc from source and trying to debug things that way.

 I can also see that there are still init and upstart scripts for lxcfs,
 which is a bit messy -- the apt-get remove lxcfs should presumably have
 deleted them -- but they depend on /usr/bin/lxcfs, which definitely
 doesn't exist, so I guess that's not the problem.

`remove` doesn't always remove config files, `purge` is supposed to
though.

Tycho

 
 All the best,
 
 Giles
 
 -- 
 Giles Thomas gi...@pythonanywhere.com
 
 PythonAnywhere: Develop and host Python from your browser
 https://www.pythonanywhere.com/
 
 A product from PythonAnywhere LLP
 17a Clerkenwell Road, London EC1M 5RD, UK
 VAT No.: GB 893 5643 79
 Registered in England and Wales as company number OC378414.
 Registered address: 28 Ely Place, 3rd Floor, London EC1N 6TD, UK
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Trying out migration, getting error: checkpoint failed

2015-05-06 Thread Tycho Andersen
On Wed, May 06, 2015 at 08:40:52AM -0600, Tycho Andersen wrote:
 Hi Giles,
 
 On Tue, May 05, 2015 at 05:10:56PM +0100, Giles Thomas wrote:
  Hi Tycho,
  
  On 05/05/15 16:50, Tycho Andersen wrote:
  Can you check the lxd stderr by chance (probably lives in /var/log
  somewhere depending on what init system you're using)? I suspect that
  liblxc is rejecting dumping the container in its internal predump checks,
  but the above log doesn't say way unfortunately. Sorry for all the
  confusion, the logging stuff here is still a bit of a mess, although a bit
  better on the current lxd master.
  
  Oddly, there didn't appear to be one; find /var/log -name \*lxd\* just
  found /var/log/lxd.  Nothing relevant-looking in /var/log/upstart/ apart
  from lxc-net.log, which has an Address already in use error:
  
  dnsmasq: failed to create listening socket for 10.0.3.1: Address already
  in use
  Failed to setup lxc-net.
  
  Doubly-oddly, there's a /etc/init/lxd.conf *and* a /etc/init.d/lxd,
  which confuses me a little.  Does that not mean that both init and upstart
  will try to start it? (My knowledge of the workings of init systems is not
  as in-depth as I would like.) Should I remove one of them then change the
  remaining one to write stdout/err somewhere sensible?
 
 You could, but it may be easier to just stop the lxd service and run
 it manually so that it writes stderr to the terminal you're using.
 
 Looking at the code path, it looks like there are a few (really
 unlikely) ways it could fail without writing anything to the log (such
 as OOM or not being able to make a temporary directory, but it's root
 so as long as you have enough disk/ram it /should/ die with some error
 message). If you can't find anything, it may be worth building a
 liblxc from source and trying to debug things that way.

Sorry, I did just find one notable exception with the current git
master: liblxc doesn't complain when execing criu fails. Do you have
criu installed in a place where liblxc can find it?

I posted a patch to fix this particular case, but it seems likely
that's where your problem is.

Tycho

  I can also see that there are still init and upstart scripts for lxcfs,
  which is a bit messy -- the apt-get remove lxcfs should presumably have
  deleted them -- but they depend on /usr/bin/lxcfs, which definitely
  doesn't exist, so I guess that's not the problem.
 
 `remove` doesn't always remove config files, `purge` is supposed to
 though.
 
 Tycho
 
  
  All the best,
  
  Giles
  
  -- 
  Giles Thomas gi...@pythonanywhere.com
  
  PythonAnywhere: Develop and host Python from your browser
  https://www.pythonanywhere.com/
  
  A product from PythonAnywhere LLP
  17a Clerkenwell Road, London EC1M 5RD, UK
  VAT No.: GB 893 5643 79
  Registered in England and Wales as company number OC378414.
  Registered address: 28 Ely Place, 3rd Floor, London EC1N 6TD, UK
  
  ___
  lxc-users mailing list
  lxc-users@lists.linuxcontainers.org
  http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Trying out migration, getting error: checkpoint failed

2015-05-05 Thread Tycho Andersen
On Tue, May 05, 2015 at 02:18:23PM +0100, Giles Thomas wrote:
 Hi Tycho,
 
 Thanks for the reply!
 
 On 04/05/15 14:15, Tycho Andersen wrote:
 What versions of criu and liblxc do you have installed? Can you look in
 /var/lib/lxd/lxc/container/lxc.log (or /var/log/lxd/container/lxc.log;
 there was a bug fixed recently that moved it to the latter location).
 
 Heh, I hadn't realised that criu wasn't automatically installed as a
 dependency for lxd (might be worth adding to the blog post?), so I didn't
 have it installed at all.

Ah yeah. It seems I forgot to mention anything about criu at all :)

 However, installing it doesn't fix the problem.  I currently have criu
 1.4-1~ubuntu14.04.1~ppa1 and liblxc1
 1.1.2+master~20150428-0938-0ubuntu1~trusty.   The former was installed just
 with an apt-get install criu, and the latter via following your blog post
 pretty much robotically...
 
 /var/log/lxd/migratee/lxc.log is present, but empty.

Does /var/lib/lxd/lxc/migratee/lxc.log exist?

You might try building criu from their git, I'm not sure the version
in our PPA is new enough to actually work with liblxc 1.1.2.

Tycho

 
 All the best,
 
 Giles
 
 -- 
 Giles Thomas gi...@pythonanywhere.com
 
 PythonAnywhere: Develop and host Python from your browser
 https://www.pythonanywhere.com/
 
 A product from PythonAnywhere LLP
 17a Clerkenwell Road, London EC1M 5RD, UK
 VAT No.: GB 893 5643 79
 Registered in England and Wales as company number OC378414.
 Registered address: 28 Ely Place, 3rd Floor, London EC1N 6TD, UK
 
 ___
 lxc-users mailing list
 lxc-users@lists.linuxcontainers.org
 http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
