[lxc-users] [BUG] lxc-destroy destroying wrong containers

2015-11-09 Thread Tomasz Chmielewski

lxc-destroy may be destroying the wrong containers!

To reproduce:

1) have a container you want to clone - here, testvm012d:

# lxc-ls -f
NAME        STATE    IPV4  IPV6  GROUPS  AUTOSTART
---------------------------------------------------
testvm012d  STOPPED  -     -     -       NO



2) clone it - but before the command returns, press ctrl+c (say, you 
realize you used the wrong name and want to interrupt):


# lxc-clone -B dir testvm012d testvm13d
[ctrl+c]


3) lxc-ls will now show two containers:

# lxc-ls -f
NAME        STATE    IPV4  IPV6  GROUPS  AUTOSTART
---------------------------------------------------
testvm012d  STOPPED  -     -     -       NO
testvm13d   STOPPED  -     -     -       NO



4) we can see that the "interrupted" container was not fully copied - 
so let's remove it with lxc-destroy:


# du -sh testvm012d testvm13d
462M    testvm012d
11M     testvm13d

# lxc-destroy -n testvm13d

# echo $?
0


5) as expected, lxc-ls only lists the original container now:

# lxc-ls -f
NAME        STATE    IPV4  IPV6  GROUPS  AUTOSTART
---------------------------------------------------
testvm012d  STOPPED  -     -     -       NO



6) unfortunately, the rootfs of the original container is gone:

# du -sh testvm012d
4.0K    testvm012d

# ls testvm012d/
config



If it matters, my containers are in /srv/lxc/, symlinked from 
/var/lib/lxc/
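
(A minimal sanity check before step 5, assuming the containers live under
the standard /var/lib/lxc path - adjust for the /srv/lxc symlink - would be
to look at what the half-cloned container's config actually points at before
destroying it:)

# which rootfs does the interrupted clone reference?
grep lxc.rootfs /var/lib/lxc/testvm13d/config

# only destroy if the clone's config does not mention the original container
grep -q testvm012d /var/lib/lxc/testvm13d/config || lxc-destroy -n testvm13d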



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] network capabilities in LXD, now and future

2015-11-09 Thread Luis Michael Ibarra
Yonsi,

I think this might be useful[1]

[1]
http://containerops.org/2013/11/19/lxc-networking/?utm_source=Docker+News&utm_campaign=3faedf3ccf-Docker_0_5_0_7_18_2013&utm_medium=email&utm_term=0_c0995b6e8f-3faedf3ccf-235708077
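
If the goal is an address on the host's own subnet, a macvlan setup like the
ones covered there is one option. A minimal, untested sketch for a plain LXC
container config (the eth0 interface name, the gateway and the addresses from
the question are assumptions; note that with macvlan the container typically
cannot talk to the host itself directly):

lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = eth0
lxc.network.flags = up
lxc.network.ipv4 = 10.46.50.101/24
lxc.network.ipv4.gateway = 10.46.50.1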



2015-11-09 18:03 GMT-05:00 Yonsy Solis :

> Hi
>
> Now i am using LXD in some production servers (no migration involved for
> obvious reasons) and the network services are LXD default, lxcbr0 bridge
> and published services/ports with DNAT.
>
> Well, i would like to know which other different alternatives i will have
> in LXD. now, for example, can i assign a new IP from the same network that
> the host (host == 10.46.50.100, lxd == 10.40.50.101)? or all the
> alternatives will be bridge oriented ?
>
>
> Yonsy Solis
>
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users




-- 
Luis Michael Ibarra
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about LXD

2015-11-09 Thread ge...@riseup.net
On 15-11-09 23:05:26, Bostjan Skufca wrote:
> if one wants to have container host cluster with HA feature (restarting
> containers on non-failed hosts), is this something that is planned for LXD,
> or is going the OpenStack route the way to go for the foreseeable future?
> Am I missing some other already existing product (Proxmox I know of)?

Can fully recommend Ganeti [1].

Best regards,
Georg


[1] http://www.ganeti.org/


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] network capabilities in LXD, now and future

2015-11-09 Thread Yonsy Solis

Hi

Now I am using LXD on some production servers (no migration involved, for 
obvious reasons) and the network setup is the LXD default: the lxcbr0 
bridge with services/ports published via DNAT.


Well, I would like to know which other alternatives I will have in LXD. 
For example, can I assign a new IP from the same network as the host 
(host == 10.46.50.100, lxd == 10.40.50.101)? Or will all the 
alternatives be bridge-oriented?



Yonsy Solis

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXC 1.0.8 has been released!

2015-11-09 Thread Stéphane Graber
Hello everyone,

The eighth LXC 1.0 bugfix release is now out!

This includes all bugfixes committed to master since the release of
LXC 1.0.7 almost a year ago!

As usual, the full announcement and changelog may be found at:
https://linuxcontainers.org/lxc/news/

And our tarballs can be downloaded from:
https://linuxcontainers.org/lxc/downloads/


LXC 1.0 is the current long term support release of LXC which comes with
extended support for security and bugfixes up until April 2019.
This is our recommended version for stable production environments.


Stéphane Graber
On behalf of the LXC development team


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about LXD

2015-11-09 Thread Stéphane Graber
On Mon, Nov 09, 2015 at 11:05:26PM +0100, Bostjan Skufca wrote:
> Hello,
> 
> if one wants to have container host cluster with HA feature (restarting
> containers on non-failed hosts), is this something that is planned for LXD,
> or is going the OpenStack route the way to go for the foreseeable future?
> Am I missing some other already existing product (Proxmox I know of)?
> 
> Tnx,
> b.
> 
> PS: I liked your recent presentation/demo, Stephane :)

Hi,

We don't have it on our roadmap right now, but it's something we've
thought about a bit and which shouldn't be terribly difficult to
implement in the future.

The very rough idea so far is that there would be a property on the
container, setting which host the container must be mirrored with. If not
on shared storage, LXD would then do frequent snapshots and send those
across; if CRIU is enabled, the container runtime state could also be
synced.
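
(For illustration only, a very rough manual approximation of that idea with
today's commands - the container name c1, the remote name host2 and the
snapshot name are all made up, there is no orchestration or fencing, and
re-running it requires deleting the previous copy first:)

lxc snapshot c1 ha-latest
lxc copy c1/ha-latest host2:c1-standby
# if the primary host dies, start the copy on the standby:
lxc start host2:c1-standby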

Our focus right now is on stability and releasing a long term support
release of LXD with Ubuntu 16.04 LTS.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Question about LXD

2015-11-09 Thread Bostjan Skufca
Hello,

if one wants to have container host cluster with HA feature (restarting
containers on non-failed hosts), is this something that is planned for LXD,
or is going the OpenStack route the way to go for the foreseeable future?
Am I missing some other already existing product (Proxmox I know of)?

Tnx,
b.

PS: I liked your recent presentation/demo, Stephane :)
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] 1.1.5 setproctitle bug

2015-11-09 Thread Bostjan Skufca
Hello Tycho

On 9 November 2015 at 21:30, Tycho Andersen 
wrote:

> Hello Boštjan,
>
> On Mon, Nov 09, 2015 at 06:47:42PM +0100, Boštjan Škufca @ Teon.si wrote:
> > Containers start, but this is what I am getting:
> > lxc-start: utils.c: setproctitle: 1461 Invalid argument - setting cmdline
> > failed
> >
> > Kernel 4.2.5, on Slackware 14.1, no cgmanager or lxcfs. Is there anything
> > missing?
>
> No, this is a non-fatal error, so you're just fine. I sent a patch to
> lxc-devel to turn it down to an info message at Stéphane's request
> because he was worried it might freak people out, and it seems he was
> right :)
>

I freaked at the beginning, when it seemed like an error. Then, when I
tried to start the container again after some tweaking, I was positively
surprised that it was already running.





> If you want the fancy proctitles, then you need to enable
> CONFIG_CHECKPOINT_RESTORE in your kernel.
>

I can confirm that enabling CONFIG_CHECKPOINT_RESTORE resolves the issue.
It was hiding behind CONFIG_EXPERT, which is why I hadn't noticed it before
(that, coupled with having no use for it at the moment).

Tnx for fast response,
b.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] 1.1.5 setproctitle bug

2015-11-09 Thread Tycho Andersen
Hello Boštjan,

On Mon, Nov 09, 2015 at 06:47:42PM +0100, Boštjan Škufca @ Teon.si wrote:
> Containers start, but this is what I am getting:
> lxc-start: utils.c: setproctitle: 1461 Invalid argument - setting cmdline
> failed
> 
> Kernel 4.2.5, on Slackware 14.1, no cgmanager or lxcfs. Is there anything
> missing?

No, this is a non-fatal error, so you're just fine. I sent a patch to
lxc-devel to turn it down to an info message at Stéphane's request
because he was worried it might freak people out, and it seems he was
right :)

If you want the fancy proctitles, then you need to enable
CONFIG_CHECKPOINT_RESTORE in your kernel.
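
A quick way to check whether the option is already enabled (config file
locations differ per distro, so both paths below are guesses):

# if the running kernel exposes its config:
zgrep CONFIG_CHECKPOINT_RESTORE /proc/config.gz
# or, on distros that install the config under /boot:
grep CONFIG_CHECKPOINT_RESTORE /boot/config-$(uname -r)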

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] 1.1.5 setproctitle bug

2015-11-09 Thread Stéphane Graber
Hi,

Tycho just sent a patch to make the message go away.

Unfortunately you'll have to wait for 1.1.6 to see it go away.

Note that in theory kernels >= 3.19 should have the needed feature to
make this work, so maybe you're missing some kernel configuration too.

Tycho?

On Mon, Nov 09, 2015 at 06:47:42PM +0100, Boštjan Škufca @ Teon.si wrote:
> Containers start, but this is what I am getting:
> lxc-start: utils.c: setproctitle: 1461 Invalid argument - setting cmdline
> failed
> 
> Kernel 4.2.5, on Slackware 14.1, no cgmanager or lxcfs. Is there anything
> missing?
> Before trying 1.1.5, version 1.1.2 was used, no hassles.
> 
> b.


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] 1.1.5 setproctitle bug

2015-11-09 Thread Boštjan Škufca
Containers start, but this is what I am getting:
lxc-start: utils.c: setproctitle: 1461 Invalid argument - setting cmdline
failed

Kernel 4.2.5, on Slackware 14.1, no cgmanager or lxcfs. Is there anything
missing?
Before trying 1.1.5, version 1.1.2 was used, no hassles.

b.


--
Boštjan Škufca
Teon d.o.o. | http://teon.si
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] docker in lxc

2015-11-09 Thread Maxim Patlasov

On 11/06/2015 02:52 PM, Serge Hallyn wrote:

> Quoting Maxim Patlasov (mpatla...@parallels.com):
>
>> Hi Serge,
>>
>> I had been working for a while on porting proxy-graphdriver-daemon
>> to extpoint feature, but then switched to another task. I hope to
>> switch back in a week. It would be great if we all together come to
>> agreement about universal way to mount something from host to
>> container namespace. The simplest way would be to specify the pid of
>> container "init" as a command-line arg of proxy-daemon, so it could
>> use the pid for setns(2) directly. Is such an approach safe enough
>> and will work for all of us?
>
> Surely safe enough.  It's too bad that it requires a graphdriver
> per docker-using container, and that it breaks up the container
> start (I can't start the graphdriver first and just pass the unix
> socket into the container), but I think it's ok.
>
> In general, what do we need exactly?
>
> 1. Some way to identify target pid.  We can
>    a. pass pid to graphdriver on cmdline
>    b. we can get pid from peercred
> 2. A MS_SLAVE directory to allow the mount to be passed from the
>    host to the container (whereupon it can be moved to its final
>    destination).  This path location needs to be passed to the
>    graphdriver somehow.  In what you suggest we can just pass the
>    absolute paths (both on the host and in the container) on the
>    command line as well.
> 3. An actual request, presumably sent as
>       some-host-dev-id destination-path
>    over a unix socket.


After more thinking, it came to me that maybe we can avoid all this 
hassle altogether. When I started to work on proxy-graphdriver, I tried 
to follow the behavior of the devicemapper graphdriver as closely as possible. 
In particular, I thought it was important to create mount-points where the 
client (docker inside the container) would expect to find them (under 
/var/lib/docker by default). But now, having a look at 
integration-cli/docker_cli_external_graphdriver_unix_test.go from 
https://github.com/docker/docker/pull/13777/files, it seems that 
mount-points may be placed anywhere. If that's true we could place them in 
a directory visible both from the host system and inside the container -- no 
need for setns(2) at all!


In fact, we need a place visible both from host and container anyway -- 
to have unix socket accessible both by proxy-daemon (running on host) 
and docker-daemon (running inside container). In case of docker 
containers, such a common directory may be specified by "-v" option, like:


$  docker run --privileged --rm -ti -v 
`pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash

(see http://docs.docker.com/opensource/project/set-up-dev-env/)

In case of OpenVZ containers, the per-container /tmp is visible as 
/vz/root//tmp from the host, so the proxy-daemon might mount the requested 
dm-thin device to /vz/root//tmp/tempXXX and pass "/tmp/tempXXX" back 
to the container as the result of the Get() request.


What about lxc -- does it have an option similar to docker's "-v"?

Thanks,
Maxim
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] docker in lxc

2015-11-09 Thread Maxim Patlasov

Hi Serge,

I had been working for a while on porting proxy-graphdriver-daemon to 
extpoint feature, but then switched to another task. I hope to switch 
back in a week. It would be great if we all together come to agreement 
about universal way to mount something from host to container namespace. 
The simplest way would be to specify the pid of container "init" as a 
command-line arg of proxy-daemon, so it could use the pid for setns(2) 
directly. Is such an approach safe enough and will work for all of us?


Thanks,
Maxim

On 11/06/2015 12:13 PM, Serge Hallyn wrote:

Hey guys,

sorry I tined out for a bit, but now I may have some time.  Have you guys
been working at all together off-list?

-serge

Quoting Tamas Papp (tom...@martos.bme.hu):

Whooo. Thanks in advance, guys!

I'm not a programmer, cannot work by myself on this, but look
forward the feature.
Please keep the list posted, I'm sure many of us are interested and
also willing to test the code.

Cheers,
tamas

On 10/16/2015 07:08 PM, Serge Hallyn wrote:

Absolutely!  I've not actually started working on that.  (I hadn't noticed
that the docker PR was merged)  Maxim (cc:d) is the one who is working on
this at Odin - I think it'd be best if we can all work together.

-serge

Quoting Akshay Karle (akshay.a.ka...@gmail.com):

Hey Serge,

This is something I'm interested in as well. Anyway I could help with the
implementation of the graphdriver proxy?

On Fri, Oct 16, 2015 at 12:10 PM Serge Hallyn 
wrote:


Quoting Tamas Papp (tom...@martos.bme.hu):

On 08/31/2015 03:59 PM, Serge Hallyn wrote:

Quoting Tamas Papp (tom...@martos.bme.hu):

On 08/28/2015 03:48 PM, Serge Hallyn wrote:

Quoting Tamas Papp (tom...@martos.bme.hu):

hi,

I would like to achieve, what is in subject.


However, I cannot get over on this apparmor issue:

[7690496.246952] type=1400 audit(1440757904.938:1130):
apparmor="DENIED" operation="mount" info="failed flags match"
error=-13 profile="lxc-docker" name="/var/lib/docker/aufs/"
pid=32534 comm="docker" flags="rw, private"


I read some post on various forums, that I need to run the lxc
container with unconfined profile.
Is still the case?

Excellent, I've been wanting to bring this up here :)

Maxim at Odin has been working on a proxy graphdriver for
docker.  The PR is at

https://github.com/docker/docker/pull/15594

I'm hoping to test that today and see what else is still
needed.  I would assume a custom apparmor policy will still
be needed, but since the host is doing most of the mounting
you should be able to avoid just being unconfined.

hi,

For the first look it seems to be a big change, that requires a more
qualified one for testing.
Did you take a look?

I've taken a look at the code but haven't built it yet.  (having
some toolchain issues)

https://github.com/docker/docker/pull/13777

This was merged, does it mean, that docker should be usable in LXC
from this point?
Not exactly.  As you can see from the final comment in

https://github.com/docker/docker/pull/15924

it now means that we can write a graphdriver proxy.  The original
openvz pull request would have been almost all we needed - allowing
the graphdriver to talk over a unix socket to the host where the
requested actions could be done.  The pull request which was accepted
does less - only allowing you to implement your own proxy to talk to
a service on the host.  (that service *also* needs to be written)
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] docker in lxc

2015-11-09 Thread Serge Hallyn
Quoting Maxim Patlasov (mpatla...@parallels.com):
> On 11/06/2015 02:52 PM, Serge Hallyn wrote:
> >Quoting Maxim Patlasov (mpatla...@parallels.com):
> >>Hi Serge,
> >>
> >>I had been working for a while on porting proxy-graphdriver-daemon
> >>to extpoint feature, but then switched to another task. I hope to
> >>switch back in a week. It would be great if we all together come to
> >>agreement about universal way to mount something from host to
> >>container namespace. The simplest way would be to specify the pid of
> >>container "init" as a command-line arg of proxy-daemon, so it could
> >>use the pid for setns(2) directly. Is such an approach safe enough
> >>and will work for all of us?
> >Surely safe enough.  It's too bad that it requires a graphdriver
> >per docker-using container, and that it breaks up the container
> >start (I can't start the graphdriver first and just pass the unix
> >socket into the container), but I think it's ok.
> >
> >In general, what do we need exactly?
> >
> >1. Some way to identify target pid.  We can
> >a. pass pid to graphdriver on cmdline
> >b. we can get pid from peercred
> >2. A MS_SLAVE directory to allow the mount to be passed from the
> >host to the container (whereupon it can be moved to its final
> >destination.  This path location needs to be passed to the
> >graphdriver somehow.  In what you suggest we can just pass the
> >absolute paths (both on the host and in the container) on the
> >command line as well.
> >3. An actual request, presumably sent as
> >   some-host-dev-id destination-path
> >over a unix socket.
> 
> After more thinking it came to me that may be we can avoid all this
> hassle altogether. When I started to work on proxy-grpahdriver, I
> tried to follow the behavior of devicemapper graphdriver as close as
> possible. In particular, I thought it's important to create
> mount-points where the client (docker inside container) would expect
> to find them (under /var/lib/docker by default). But now, having a
> look at integration-cli/docker_cli_external_graphdriver_unix_test.go
> from https://github.com/docker/docker/pull/13777/files, it seems
> that mount-points may be placed anywhere. If it's true we could
> place them in a directory visible both from host system and inside
> container -- no need for setns(2) at all!
> 
> In fact, we need a place visible both from host and container anyway
> -- to have unix socket accessible both by proxy-daemon (running on
> host) and docker-daemon (running inside container). In case of
> docker containers, such a common directory may be specified by "-v"
> option, like:
> 
> $  docker run --privileged --rm -ti -v
> `pwd`:/go/src/github.com/docker/docker dry-run-test /bin/bash
> (see http://docs.docker.com/opensource/project/set-up-dev-env/)
> 
> In case of OpenVZ containers, per-container /tmp is visible as
> /vz/root//tmp from the host, so proxy-daemon might mount
> requested dm-thin thin to /vz/root//tmp/tempXXX and pass
> "/tmp/tempXXX" back to container as result of Get() request.
> 
> What about lxc -- does it have an option similar to docker' "-v"?

If I read it right, it just bind mounts a host dir to a container dir?  In lxc
that's a lxc.mount.entry = /hostdir containerdir none bind.  But if it needs to
be ms_slave or ms_shared then lxc doesn't do it all for you.  (This is how lxd
does insertion of 'disks' from host into container, it always has a ms_slave
directory from /var/lib/lxd/shmounts/$container to /dev/shmounts, somewhat akin
to OpenVZ's /tmp I guess)
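
(Spelled out in full fstab form - the paths here are placeholders, the target
is relative to the container rootfs, and create=dir just creates the target
directory if it is missing:)

lxc.mount.entry = /hostdir containerdir none bind,create=dir 0 0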
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Tycho Andersen
On Mon, Nov 09, 2015 at 11:57:53AM -0500, Saint Michael wrote:
> I meant LXD, not LXC. I do use LXC in production.

LXD will be ready for production in 16.04, the next LTS of Ubuntu.

For live migration, we'll have support for it in 16.04 including
migrating all the security primitives of containers. However, there
will absolutely be bits of migration that simply aren't done, so
success with it will be workload dependent. Of course I'll continue to
work to implement these cases, so the situation will continue to
improve as we go along.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Tycho Andersen
On Mon, Nov 09, 2015 at 04:32:00PM +, Jamie Brown wrote:
> Not sure why my this thread is being hijacked to ask very general questions 
> :) please don't confuse those with me ;)

Definitely not :)

> Tycho,
> 
> Thanks for the response regarding criu, I figured this may be the case from 
> inspecting the criu source, but wasn't sure how this could be configured and 
> whether there were known limitations. Considering criu has some sort of 
> caching of hard/soft links, could this potentially use a lot of RAM during 
> the snapshot phase if this limit were to be heavily increased?

I think it's mostly to avoid putting GBs of files into criu's images,
which seems like a somewhat artificial concern to me :). For LXC 2.0
(planned for 16.04) I'm going to add a new API function that will let
you configure a lot of this stuff and have a more extensible API than
->checkpoint now.

> I also wondered if you had any response to the mounting issue using ext3/ext4 
> I sent the other day?

You're talking about the .lxd-mounts failure? I've thought about it,
but I can't understand how it's happening. I've heard off-list of
several other people with the issue, though. Can you send

cat /proc//mountinfo

of the target LXD?
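
(That is, the mountinfo of the target LXD daemon's pid; assuming a single
lxd process on the target host, something like:)

cat /proc/$(pidof lxd)/mountinfo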

> I'm finding it odd that I get random successful migrations, but then can't 
> replicate it. I'm always just testing with simple fresh Ubuntu containers. 
> The container that caused the criu error was an exception to this.

By criu error, you mean the ghost file size error?

Tycho

> Many thanks,
> 
> Jamie
> 
> From: lxc-users  on behalf of 
> Tycho Andersen 
> Sent: 09 November 2015 16:00:01
> To: LXC users mailing-list
> Subject: Re: [lxc-users] LXD Live Migration
> 
> On Mon, Nov 09, 2015 at 10:37:42AM -0500, Saint Michael wrote:
> > I must assume that LXC is not ready for production yet. Am I wrong?
> 
> Yes, LXC has been used in production by many large organizations for
> many years.
> 
> Tycho
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXC 1.1.5 has been released!

2015-11-09 Thread Stéphane Graber
Hello everyone,

The fifth LXC 1.1 bugfix release is now out!

This includes all bugfixes committed to master since the release of
LXC 1.1.4 last month.

As usual, the full announcement and changelog may be found at:
https://linuxcontainers.org/lxc/news/

And our tarballs can be downloaded from:
https://linuxcontainers.org/lxc/downloads/


LXC 1.1 is the latest stable release of LXC. Note that this isn't a long
term support release and it will only be supported for a year.

For production environments, we still recommend using LXC 1.0 which we
will be supporting until April 2019.


Stéphane Graber
On behalf of the LXC development team


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Saint Michael
I meant LXD, not LXC. I do use LXC in production.

On Mon, Nov 9, 2015 at 11:32 AM, Jamie Brown  wrote:

> Not sure why my this thread is being hijacked to ask very general
> questions :) please don't confuse those with me ;)
>
> Tycho,
>
> Thanks for the response regarding criu, I figured this may be the case
> from inspecting the criu source, but wasn't sure how this could be
> configured and whether there were known limitations. Considering criu has
> some sort of caching of hard/soft links, could this potentially use a lot
> of RAM during the snapshot phase if this limit were to be heavily increased?
>
> I also wondered if you had any response to the mounting issue using
> ext3/ext4 I sent the other day?
>
> I'm finding it odd that I get random successful migrations, but then can't
> replicate it. I'm always just testing with simple fresh Ubuntu containers.
> The container that caused the criu error was an exception to this.
>
> Many thanks,
>
> Jamie
> 
> From: lxc-users  on behalf
> of Tycho Andersen 
> Sent: 09 November 2015 16:00:01
> To: LXC users mailing-list
> Subject: Re: [lxc-users] LXD Live Migration
>
> On Mon, Nov 09, 2015 at 10:37:42AM -0500, Saint Michael wrote:
> > I must assume that LXC is not ready for production yet. Am I wrong?
>
> Yes, LXC has been used in production by many large organizations for
> many years.
>
> Tycho
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] iptables-save not working in unprivileged containers?

2015-11-09 Thread Fiedler Roman
> From: Tomasz Chmielewski [mailto:man...@wpkg.org]
> 
> On 2015-11-10 01:22, Fiedler Roman wrote:
> 
> >> # iptables -A INPUT -p tcp --dport 22 -j ACCEPT
> >
> > Yes, also here.
> >
> > Compare
> >
> > iptables-save
> >
> > with
> >
> > iptables-save -t filter
> >
> > Later should work. I think, that some special tables cannot be read in
> > unpiv
> > (mangle perhaps).
> 
> It seems to behave just like "iptables-save" executed by non-root user
> (in non-container).

Not on this side:

* Normal user:

$ iptables-save -t filter
iptables-save v1.4.21: Cannot initialize: Permission denied (you must be
root)

* As root in unpriv container: 

# iptables-save -t filter
# Generated by iptables-save v1.4.21 on Mon Nov  9 16:55:27 2015
*filter
:INPUT DROP [0:0]


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] iptables-save not working in unprivileged containers?

2015-11-09 Thread Tomasz Chmielewski

On 2015-11-10 01:22, Fiedler Roman wrote:


>> # iptables -A INPUT -p tcp --dport 22 -j ACCEPT
>
> Yes, also here.
>
> Compare
>
> iptables-save
>
> with
>
> iptables-save -t filter
>
> Later should work. I think, that some special tables cannot be read in
> unpiv (mangle perhaps).


It seems to behave just like "iptables-save" executed by non-root user 
(in non-container).



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Jamie Brown
Not sure why this thread is being hijacked to ask very general questions :) 
please don't confuse those with me ;)

Tycho,

Thanks for the response regarding criu, I figured this may be the case from 
inspecting the criu source, but wasn't sure how this could be configured and 
whether there were known limitations. Considering criu has some sort of caching 
of hard/soft links, could this potentially use a lot of RAM during the snapshot 
phase if this limit were to be heavily increased?

I also wondered if you had any response to the mounting issue using ext3/ext4 I 
sent the other day?

I'm finding it odd that I get random successful migrations, but then can't 
replicate it. I'm always just testing with simple fresh Ubuntu containers. The 
container that caused the criu error was an exception to this.

Many thanks,

Jamie

From: lxc-users  on behalf of 
Tycho Andersen 
Sent: 09 November 2015 16:00:01
To: LXC users mailing-list
Subject: Re: [lxc-users] LXD Live Migration

On Mon, Nov 09, 2015 at 10:37:42AM -0500, Saint Michael wrote:
> I must assume that LXC is not ready for production yet. Am I wrong?

Yes, LXC has been used in production by many large organizations for
many years.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] iptables-save not working in unprivileged containers?

2015-11-09 Thread Fiedler Roman
> From: lxc-users [mailto:lxc-users-boun...@lists.linuxcontainers.org] On 
> behalf of
>
> For some, reason, iptables-save does not seem to be working in
> unprivileged containers.
>
> To reproduce:
>
> - this adds a sample iptables rule:
>
> # iptables -A INPUT -p tcp --dport 22 -j ACCEPT

Yes, also here.

Compare

iptables-save

with

iptables-save -t filter

The latter should work. I think that some special tables cannot be read 
unprivileged (mangle, perhaps).
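
A hedged workaround along those lines: dump each table explicitly and simply
skip the ones the kernel refuses to expose inside the unprivileged namespace:

for t in filter nat mangle raw security; do
    iptables-save -t $t 2>/dev/null
done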

> [Snip]

LG R


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Tycho Andersen
On Mon, Nov 09, 2015 at 10:37:42AM -0500, Saint Michael wrote:
> I must assume that LXC is not ready for production yet. Am I wrong?

Yes, LXC has been used in production by many large organizations for
many years.

Tycho
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] iptables-save not working in unprivileged containers?

2015-11-09 Thread Tomasz Chmielewski
For some reason, iptables-save does not seem to be working in 
unprivileged containers.


To reproduce:

- this adds a sample iptables rule:

# iptables -A INPUT -p tcp --dport 22 -j ACCEPT


- this lists the rule:

# iptables -L -v -n
Chain INPUT (policy ACCEPT 13166 packets, 5194K bytes)
 pkts bytes target  prot opt in  out  source     destination
    0     0 ACCEPT  tcp  --   *   *   0.0.0.0/0  0.0.0.0/0    tcp dpt:22

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out  source     destination

Chain OUTPUT (policy ACCEPT 12620 packets, 656K bytes)
 pkts bytes target  prot opt in  out  source     destination



- this is supposed to dump iptables rules to stdout - but it doesn't:

# iptables-save
#


Any idea how to make "iptables-save" work in unprivileged lxc 
containers?



Tomasz Chmielewski
http://wpkg.org

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Bostjan Skufca
Depends on what your requirements for "production" are. Live migration? I
guess not. Environment isolation for more-or-less trusted containers? Yes,
using it here for quite a while (since 1.0.6 - lxc, not lxd), if possible
as unprivileged containers, as it removes A LOT of attack surface for host.
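
(For reference, a minimal sketch of what makes a container unprivileged: a
subordinate id range in /etc/subuid and /etc/subgid plus an id map in the
container config. The 100000/65536 range below is just the common default,
not a requirement:)

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536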


b.


On 9 November 2015 at 16:37, Saint Michael  wrote:

> I must assume that LXC is not ready for production yet. Am I wrong?
>
> On Mon, Nov 9, 2015 at 9:54 AM, Tycho Andersen <
> tycho.ander...@canonical.com> wrote:
>
>> On Fri, Nov 06, 2015 at 08:43:33AM +, Jamie Brown wrote:
>> > I’ve just discovered a new failure on a different container too;
>> >
>> > # lxc move host2:nexus host1:nexus
>> > error: Error transferring container data: checkpoint failed:
>> > (00.355457) Error (files-reg.c:422): Can't dump ghost file
>> /usr/local/sonatype-work/nexus/tmp/jar_cache5838699621686145685.tmp of
>> 1177738 size, increase limit
>> > (00.355477) Error (cr-dump.c:1255): Dump files (pid: 22072) failed with
>> -1
>> > (00.357100) Error (cr-dump.c:1617): Dumping FAILED.
>>
>> So this is actually an error because a default limit in criu is not
>> high enough. You can set this via the --ghost-limit in criu, but LXC
>> currently exposes no way to set this, although I'm hoping to add a new
>> API call to allow people to set stuff like this in the near future.
>>
>> Thanks,
>>
>> Tycho
>>
>> >
>> >
>> >
>> > On 06/11/2015, 08:40, "lxc-users on behalf of Jamie Brown" <
>> lxc-users-boun...@lists.linuxcontainers.org on behalf of
>> jamie.br...@mpec.co.uk> wrote:
>> >
>> > >Tycho,
>> > >
>> > >Thanks for your help.
>> > >
>> > >The kernels were in fact different versions, though I’m not sure how I
>> got into that state! So they’re now both running 3.19.0.
>> > >
>> > >Now, I at least receive the same error when migrating in both
>> directions;
>> > ># lxc move host2:test host1:test2
>> > >error: Error transferring container data: restore failed:
>> > >(00.008103)  1: Error (mount.c:2030): Can't mount at
>> ./dev/.lxd-mounts: No such file or directory
>> > >
>> > ># lxc move host1:test1 host2:test1
>> > >error: Error transferring container data: restore failed:
>> > >(00.008103) 1: Error (mount.c:2030): Can't mount at ./dev/.lxd-mounts:
>> No such file or directory
>> > >
>> > >
>> > >
>> > >
>> > >The backing store is the default (directory based). However, on host2
>> the /var/lib/lxd/containers directory is a symlink to an ext3 mount. On
>> host1 they’re on ext4, is that likely to cause any issues?
>> > >
>> > >The strange thing is, [randomly] the live move DOES succeed. I’ve
>> definitely migrated a clean [running] container about 3 times from host2 to
>> host1, but then when I try again with a new container it fails. This even
>> worked before I updated the kernel. However, I can’t seem to find specific
>> steps to replicate the successful move. I’ve never succeeded in migrating
>> the same container back from host1 to host2 without stopping it. This is
>> what is concerning me the most, I would expect either permanent failure or
>> permanent success. I keep gaining false hope because the first time I
>> migrated a container after updating the kernel it worked, so I thought,
>> problem solved! But then I couldn’t migrate another :(
>> > >
>> > >-- Jamie
>> > >
>> > >
>> > >
>> > >05/11/2015, 16:58, "lxc-users on behalf of Tycho Andersen" <
>> lxc-users-boun...@lists.linuxcontainers.org on behalf of
>> tycho.ander...@canonical.com> wrote:
>> > >
>> > >>Hi Jamie,
>> > >>
>> > >>Thanks for trying it out.
>> > >>
>> > >>On Thu, Nov 05, 2015 at 11:39:43AM +, Jamie Brown wrote:
>> > >>> Hello again,
>> > >>>
>> > >>> Oddly, I've now re-installed the old server and configured it
>> identically to before (except now using RAID) and tried migrating a
>> container back and I am getting a different failure;
>> > >>>
>> > >>> # lxc move host2:test host1:test
>> > >>>
>> > >>> error: Error transferring container data: restore failed:
>> > >>> (00.007414)  1: Error (mount.c:2030): Can't mount at
>> ./dev/.lxd-mounts: No such file or directory
>> > >>> (00.026443) Error (cr-restore.c:1939): Restoring FAILED.
>> > >>>
>> > >>> The container appears in the remote container list whilst moving,
>> but then after failure it is deleted and it is in the STOPPED state on the
>> source host.
>> > >>
>> > >>Right, the restore failed, so the container had already been stopped
>> > >>from the dump, so it was stopped on the target. What we should really
>> > >>do is leave it in a frozen state after the dump, and once the restore
>> > >>succeeds then we can kill it. Hopefully that's something I can
>> > >>implement this cycle.
>> > >>
>> > >>As for the actual error, sounds like the target LXD didn't have
>> > >>shmounts but the source one did. Are they using different backing
>> > >>stores? What version of LXD are they?
>> > >>
>> > >>>
>> > >>> Here's the output from the log, not sure how much is relevant to
>> the migration attempt.

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Saint Michael
I must assume that LXC is not ready for production yet. Am I wrong?

On Mon, Nov 9, 2015 at 9:54 AM, Tycho Andersen  wrote:

> On Fri, Nov 06, 2015 at 08:43:33AM +, Jamie Brown wrote:
> > I’ve just discovered a new failure on a different container too;
> >
> > # lxc move host2:nexus host1:nexus
> > error: Error transferring container data: checkpoint failed:
> > (00.355457) Error (files-reg.c:422): Can't dump ghost file
> /usr/local/sonatype-work/nexus/tmp/jar_cache5838699621686145685.tmp of
> 1177738 size, increase limit
> > (00.355477) Error (cr-dump.c:1255): Dump files (pid: 22072) failed with
> -1
> > (00.357100) Error (cr-dump.c:1617): Dumping FAILED.
>
> So this is actually an error because a default limit in criu is not
> high enough. You can set this via the --ghost-limit in criu, but LXC
> currently exposes no way to set this, although I'm hoping to add a new
> API call to allow people to set stuff like this in the near future.
>
> Thanks,
>
> Tycho
>
> >
> >
> >
> > On 06/11/2015, 08:40, "lxc-users on behalf of Jamie Brown" <
> lxc-users-boun...@lists.linuxcontainers.org on behalf of
> jamie.br...@mpec.co.uk> wrote:
> >
> > >Tycho,
> > >
> > >Thanks for your help.
> > >
> > >The kernels were in fact different versions, though I’m not sure how I
> got into that state! So they’re now both running 3.19.0.
> > >
> > >Now, I at least receive the same error when migrating in both
> directions;
> > ># lxc move host2:test host1:test2
> > >error: Error transferring container data: restore failed:
> > >(00.008103)  1: Error (mount.c:2030): Can't mount at
> ./dev/.lxd-mounts: No such file or directory
> > >
> > ># lxc move host1:test1 host2:test1
> > >error: Error transferring container data: restore failed:
> > >(00.008103) 1: Error (mount.c:2030): Can't mount at ./dev/.lxd-mounts:
> No such file or directory
> > >
> > >
> > >
> > >
> > >The backing store is the default (directory based). However, on host2
> the /var/lib/lxd/containers directory is a symlink to an ext3 mount. On
> host1 they’re on ext4, is that likely to cause any issues?
> > >
> > >The strange thing is, [randomly] the live move DOES succeed. I’ve
> definitely migrated a clean [running] container about 3 times from host2 to
> host1, but then when I try again with a new container it fails. This even
> worked before I updated the kernel. However, I can’t seem to find specific
> steps to replicate the successful move. I’ve never succeeded in migrating
> the same container back from host1 to host2 without stopping it. This is
> what is concerning me the most, I would expect either permanent failure or
> permanent success. I keep gaining false hope because the first time I
> migrated a container after updating the kernel it worked, so I thought,
> problem solved! But then I couldn’t migrate another :(
> > >
> > >-- Jamie
> > >
> > >
> > >
> > >05/11/2015, 16:58, "lxc-users on behalf of Tycho Andersen" <
> lxc-users-boun...@lists.linuxcontainers.org on behalf of
> tycho.ander...@canonical.com> wrote:
> > >
> > >>Hi Jamie,
> > >>
> > >>Thanks for trying it out.
> > >>
> > >>On Thu, Nov 05, 2015 at 11:39:43AM +, Jamie Brown wrote:
> > >>> Hello again,
> > >>>
> > >>> Oddly, I've now re-installed the old server and configured it
> identically to before (except now using RAID) and tried migrating a
> container back and I am getting a different failure;
> > >>>
> > >>> # lxc move host2:test host1:test
> > >>>
> > >>> error: Error transferring container data: restore failed:
> > >>> (00.007414)  1: Error (mount.c:2030): Can't mount at
> ./dev/.lxd-mounts: No such file or directory
> > >>> (00.026443) Error (cr-restore.c:1939): Restoring FAILED.
> > >>>
> > >>> The container appears in the remote container list whilst moving,
> but then after failure it is deleted and it is in the STOPPED state on the
> source host.
> > >>
> > >>Right, the restore failed, so the container had already been stopped
> > >>from the dump, so it was stopped on the target. What we should really
> > >>do is leave it in a frozen state after the dump, and once the restore
> > >>succeeds then we can kill it. Hopefully that's something I can
> > >>implement this cycle.
> > >>
> > >>As for the actual error, sounds like the target LXD didn't have
> > >>shmounts but the source one did. Are they using different backing
> > >>stores? What version of LXD are they?
> > >>
> > >>>
> > >>> Here's the output from the log, not sure how much is relevant to the
> migration attempt.
> > >>>
> > >>> # lxc info --show-log test
> > >>> ...
> > >>> lxc 1446723150.396 DEBUGlxc_start - start.c:__lxc_start:1210 -
> unknown exit status for init: 9
> > >>> lxc 1446723150.396 DEBUGlxc_start -
> start.c:__lxc_start:1215 - Pushing physical nics back to host namespace
> > >>> lxc 1446723150.396 DEBUGlxc_start -
> start.c:__lxc_start:1218 - Tearing down virtual network devices used by
> container
> > >>> lxc 1446723150.396 WARN lxc_conf -
> 

Re: [lxc-users] LXC 1.1.4 on Fedora 23

2015-11-09 Thread Stéphane Graber
On Mon, Nov 09, 2015 at 11:44:41AM +0100, Király, István wrote:
> Ooops, ...  .)
> 
> Hello list,
> 
> I have some minor issues with lxc 1.1.14 on Fedora 23. Error messages are:
> 
> DNF when udating cash for container creation:
> 
> warning:
> /var/cache/dnf/updates-e042e478e0621ea6/packages/crypto-policies-20151104-1.gitf1cba5f.fc23.noarch.rpm:
> Header V3 RSA/SHA256 Signature, key ID 34ec9cba: NOKEY
> The downloaded packages were saved in cache till the next successful
> transaction.
> You can remove cached packages by executing 'dnf clean packages'.
> Traceback (most recent call last):
>   File "/usr/bin/dnf", line 35, in 
> main.user_main(sys.argv[1:], exit_code=True)
>   File "/usr/lib/python3.4/site-packages/dnf/cli/main.py", line 198, in
> user_main
> errcode = main(args)
>   File "/usr/lib/python3.4/site-packages/dnf/cli/main.py", line 84, in main
> return _main(base, args)
>   File "/usr/lib/python3.4/site-packages/dnf/cli/main.py", line 144, in
> _main
> ret = resolving(cli, base)
>   File "/usr/lib/python3.4/site-packages/dnf/cli/main.py", line 173, in
> resolving
> base.do_transaction(display=displays)
>   File "/usr/lib/python3.4/site-packages/dnf/cli/cli.py", line 220, in
> do_transaction
> self.gpgsigcheck(downloadpkgs)
>   File "/usr/lib/python3.4/site-packages/dnf/cli/cli.py", line 258, in
> gpgsigcheck
> self.getKeyForPackage(po, fn)
>   File "/usr/lib/python3.4/site-packages/dnf/base.py", line 1818, in
> getKeyForPackage
> keys = dnf.crypto.retrieve(keyurl, repo)
>   File "/usr/lib/python3.4/site-packages/dnf/crypto.py", line 124, in
> retrieve
> keyinfos = rawkey2infos(handle)
>   File "/usr/lib/python3.4/site-packages/dnf/crypto.py", line 107, in
> rawkey2infos
> ctx.import_(key_fo)
> gpgme.GpgmeError: (7, 32870, 'Inappropriate ioctl for device')
> Update finished

Looks like a dnf issue. It might be related to fs caps or something like
that if inside an unprivileged container, or to some interaction with
dropped capabilities; kinda hard to know as the error is pretty generic.

> And on container start
> 
> lxc-start -o /srv/default-host.local/lxc.log -n default-host.local -d
>   lxc-start 1447065516.088 ERRORlxc_utils -
> utils.c:open_without_symlink:1575 - No such file or directory - Error
> examining fuse in /usr/local/lib/lxc/rootfs/sys/fs/fuse/connections

That should probably get demoted to a warning or just be silenced
altogether; it's not an actual problem. LXC will bind-mount a bunch of
stuff if available and not fail when it's missing (entries marked
optional), and that message is the result of one of those.
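
(The entry in question probably looks roughly like the following, as found in
the usual common config - exact paths may differ on Fedora. The "optional"
flag is what keeps a missing /sys/fs/fuse/connections from being fatal:)

lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none bind,optional 0 0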

> Seems that containers still work, .. but I wonder what these errors are. .)
> 
> Greetings, ...
> 
> On Mon, Nov 9, 2015 at 11:41 AM, Király, István  wrote:
> 
> > Hello list,
> >
> > I have some minor issues with lxc 1.1.14 on Fedora 23. Error messages are:
> >
> > DNF when udating cash for container creation:
> >
> >
> > --
> >  Király István
> > +36 209 753 758
> > lak...@d250.hu
> > 
> >
> 
> 
> 
> -- 
>  Király István
> +36 209 753 758
> lak...@d250.hu
> 

> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD Live Migration

2015-11-09 Thread Tycho Andersen
On Fri, Nov 06, 2015 at 08:43:33AM +, Jamie Brown wrote:
> I’ve just discovered a new failure on a different container too;
> 
> # lxc move host2:nexus host1:nexus
> error: Error transferring container data: checkpoint failed:
> (00.355457) Error (files-reg.c:422): Can't dump ghost file 
> /usr/local/sonatype-work/nexus/tmp/jar_cache5838699621686145685.tmp of 
> 1177738 size, increase limit
> (00.355477) Error (cr-dump.c:1255): Dump files (pid: 22072) failed with -1
> (00.357100) Error (cr-dump.c:1617): Dumping FAILED.

So this is actually an error because a default limit in criu is not
high enough. You can set this via the --ghost-limit in criu, but LXC
currently exposes no way to set this, although I'm hoping to add a new
API call to allow people to set stuff like this in the near future.
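
(Until then, the flag can at least be exercised with a standalone criu dump; a
rough sketch only - the pid and image directory are placeholders, and a real
container dump needs quite a few more options than this:)

criu dump --tree $PID --images-dir /tmp/ckpt --ghost-limit 10M --leave-running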

Thanks,

Tycho

> 
> 
> 
> On 06/11/2015, 08:40, "lxc-users on behalf of Jamie Brown" 
>  jamie.br...@mpec.co.uk> wrote:
> 
> >Tycho,
> >
> >Thanks for your help.
> >
> >The kernels were in fact different versions, though I’m not sure how I got 
> >into that state! So they’re now both running 3.19.0.
> >
> >Now, I at least receive the same error when migrating in both directions;
> ># lxc move host2:test host1:test2
> >error: Error transferring container data: restore failed:
> >(00.008103)  1: Error (mount.c:2030): Can't mount at ./dev/.lxd-mounts: 
> >No such file or directory
> >
> ># lxc move host1:test1 host2:test1
> >error: Error transferring container data: restore failed:
> >(00.008103) 1: Error (mount.c:2030): Can't mount at ./dev/.lxd-mounts: No 
> >such file or directory
> >
> >
> >
> >
> >The backing store is the default (directory based). However, on host2 the 
> >/var/lib/lxd/containers directory is a symlink to an ext3 mount. On host1 
> >they’re on ext4, is that likely to cause any issues?
> >
> >The strange thing is, [randomly] the live move DOES succeed. I’ve definitely 
> >migrated a clean [running] container about 3 times from host2 to host1, but 
> >then when I try again with a new container it fails. This even worked before 
> >I updated the kernel. However, I can’t seem to find specific steps to 
> >replicate the successful move. I’ve never succeeded in migrating the same 
> >container back from host1 to host2 without stopping it. This is what is 
> >concerning me the most, I would expect either permanent failure or permanent 
> >success. I keep gaining false hope because the first time I migrated a 
> >container after updating the kernel it worked, so I thought, problem solved! 
> >But then I couldn’t migrate another :(
> >
> >-- Jamie
> >
> >
> >
> >05/11/2015, 16:58, "lxc-users on behalf of Tycho Andersen" 
> > >tycho.ander...@canonical.com> wrote:
> >
> >>Hi Jamie,
> >>
> >>Thanks for trying it out.
> >>
> >>On Thu, Nov 05, 2015 at 11:39:43AM +, Jamie Brown wrote:
> >>> Hello again,
> >>> 
> >>> Oddly, I've now re-installed the old server and configured it identically 
> >>> to before (except now using RAID) and tried migrating a container back 
> >>> and I am getting a different failure;
> >>> 
> >>> # lxc move host2:test host1:test
> >>> 
> >>> error: Error transferring container data: restore failed:
> >>> (00.007414)  1: Error (mount.c:2030): Can't mount at 
> >>> ./dev/.lxd-mounts: No such file or directory
> >>> (00.026443) Error (cr-restore.c:1939): Restoring FAILED.
> >>> 
> >>> The container appears in the remote container list whilst moving, but 
> >>> then after failure it is deleted and it is in the STOPPED state on the 
> >>> source host.
> >>
> >>Right, the restore failed, so the container had already been stopped
> >>from the dump, so it was stopped on the target. What we should really
> >>do is leave it in a frozen state after the dump, and once the restore
> >>succeeds then we can kill it. Hopefully that's something I can
> >>implement this cycle.
> >>
> >>As for the actual error, sounds like the target LXD didn't have
> >>shmounts but the source one did. Are they using different backing
> >>stores? What version of LXD are they?
> >>
> >>> 
> >>> Here's the output from the log, not sure how much is relevant to the 
> >>> migration attempt.
> >>> 
> >>> # lxc info --show-log test
> >>> ...
> >>> lxc 1446723150.396 DEBUGlxc_start - start.c:__lxc_start:1210 - 
> >>> unknown exit status for init: 9
> >>> lxc 1446723150.396 DEBUGlxc_start - 
> >>> start.c:__lxc_start:1215 - Pushing physical nics back to host namespace
> >>> lxc 1446723150.396 DEBUGlxc_start - 
> >>> start.c:__lxc_start:1218 - Tearing down virtual network devices used by 
> >>> container
> >>> lxc 1446723150.396 WARN lxc_conf - 
> >>> conf.c:lxc_delete_network:2939 - failed to remove interface '(null)'
> >>> lxc 1446723150.396 INFO lxc_error - 
> >>> error.c:lxc_error_set_and_log:55 - child <10499> ended on signal (9)
> >>> lxc 1446723150.396 WARN lxc_conf - 
> >>> conf.c:lxc_delete_network:2939 - failed to remove in

Re: [lxc-users] LXC 1.1.4 on Fedora 23

2015-11-09 Thread Király , István
Ooops, ...  .)

Hello list,

I have some minor issues with lxc 1.1.4 on Fedora 23. Error messages are:

DNF, when updating the cache for container creation:

warning:
/var/cache/dnf/updates-e042e478e0621ea6/packages/crypto-policies-20151104-1.gitf1cba5f.fc23.noarch.rpm:
Header V3 RSA/SHA256 Signature, key ID 34ec9cba: NOKEY
The downloaded packages were saved in cache till the next successful
transaction.
You can remove cached packages by executing 'dnf clean packages'.
Traceback (most recent call last):
  File "/usr/bin/dnf", line 35, in 
main.user_main(sys.argv[1:], exit_code=True)
  File "/usr/lib/python3.4/site-packages/dnf/cli/main.py", line 198, in
user_main
errcode = main(args)
  File "/usr/lib/python3.4/site-packages/dnf/cli/main.py", line 84, in main
return _main(base, args)
  File "/usr/lib/python3.4/site-packages/dnf/cli/main.py", line 144, in
_main
ret = resolving(cli, base)
  File "/usr/lib/python3.4/site-packages/dnf/cli/main.py", line 173, in
resolving
base.do_transaction(display=displays)
  File "/usr/lib/python3.4/site-packages/dnf/cli/cli.py", line 220, in
do_transaction
self.gpgsigcheck(downloadpkgs)
  File "/usr/lib/python3.4/site-packages/dnf/cli/cli.py", line 258, in
gpgsigcheck
self.getKeyForPackage(po, fn)
  File "/usr/lib/python3.4/site-packages/dnf/base.py", line 1818, in
getKeyForPackage
keys = dnf.crypto.retrieve(keyurl, repo)
  File "/usr/lib/python3.4/site-packages/dnf/crypto.py", line 124, in
retrieve
keyinfos = rawkey2infos(handle)
  File "/usr/lib/python3.4/site-packages/dnf/crypto.py", line 107, in
rawkey2infos
ctx.import_(key_fo)
gpgme.GpgmeError: (7, 32870, 'Inappropriate ioctl for device')
Update finished


And on container start

lxc-start -o /srv/default-host.local/lxc.log -n default-host.local -d
  lxc-start 1447065516.088 ERRORlxc_utils -
utils.c:open_without_symlink:1575 - No such file or directory - Error
examining fuse in /usr/local/lib/lxc/rootfs/sys/fs/fuse/connections

Seems that containers still work, .. but I wonder what these errors are. .)

Greetings, ...

On Mon, Nov 9, 2015 at 11:41 AM, Király, István  wrote:

> Hello list,
>
> I have some minor issues with lxc 1.1.14 on Fedora 23. Error messages are:
>
> DNF when udating cash for container creation:
>
>
> --
>  Király István
> +36 209 753 758
> lak...@d250.hu
> 
>



-- 
 Király István
+36 209 753 758
lak...@d250.hu

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] LXC 1.1.4 on Fedora 23

2015-11-09 Thread Király , István
Hello list,

I have some minor issues with lxc 1.1.4 on Fedora 23. Error messages are:

DNF, when updating the cache for container creation:


-- 
 Király István
+36 209 753 758
lak...@d250.hu

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users