[lxc-users] Failure in "lxc copy" and in "lxc delete"

2019-02-25 Thread Pierre Couderc

root@server:~# lxc copy  rmote:nginx nginx
Transferring container: nginx-s1: 414.76MB (11.87MB/s)

The transfer stops after 414.76MB and "lxc copy" never returns.


This may be because there is a problem on the source container. It 
seems OK, but I had earlier tried to delete the snapshot nginx-s1 and got an 
error:


root@rmote:~# lxc delete  nginx/nginx-s1
Error: Failed to run: btrfs subvolume delete 
/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/nginx/nginx-s1: 
ERROR: cannot delete 
'/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/nginx/nginx-s1': 
Operation not permitted
Delete subvolume (no-commit): 
'/var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/nginx/nginx-s1'


Anyway, the source nginx container seems to run OK.
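For anyone hitting the same wall, here are the checks I would try first (paths copied from the error above; these are only guesses on my part, and whether snap confinement also plays a role I do not know):

# Is the path really a subvolume, and are there subvolumes nested below it?
btrfs subvolume show /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/nginx/nginx-s1
btrfs subvolume list -o /var/snap/lxd/common/lxd/storage-pools/default

# Is the snapshot marked read-only?
btrfs property get -ts /var/snap/lxd/common/lxd/storage-pools/default/containers-snapshots/nginx/nginx-s1 ro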

Thanks for any help





[lxc-users] Question on strange ls -lh

2019-01-05 Thread Pierre Couderc

A strange ls -lh:

On the host:

root@server:/# ls -lh /var/lib/lxd/containers/bind/rootfs/
total 0
...
drwxr-xr-x 1 231072 231072    0 Oct  4 07:25 opt
drwxr-xr-x 1 231072 231072    0 Jun 26  2018 proc
drwx------ 1 231072 231072  102 Jan  5 19:19 root
...

The same directory in the "bind" container:

root@bind:/etc/bind/normal# ls -lh /
total 0

drwxr-xr-x   1 root   root   0 Oct  4 07:25 opt
dr-xr-xr-x 250 nobody nogroup    0 Oct 30 09:21 proc
drwx------   1 root   root    102 Jan  5 19:19 root
...

Questions:

1- Why is /proc owned by nobody:nogroup inside the container, but on the 
host by 231072:231072, which seems to be the container's root:root?


2- How can this occur?

The container is in a bad state, as "ps aux" no longer works...
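(If I understand the id mapping right, 231072 is presumably the base of the container's uid map, so container uid 0 is stored on disk as host uid 231072, and files whose owner falls outside the map show up as nobody:nogroup inside the container. A quick host-side check, assuming the usual subuid layout; the exact base varies per install:)

root@server:/# grep '^root:' /etc/subuid /etc/subgid    # host-side idmap ranges, e.g. root:231072:65536
root@server:/# stat -c '%u:%g %n' /var/lib/lxd/containers/bind/rootfs/root   # shifted owner as stored on disk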

Thanks all,

and happy new year to all, particularly the team of brave developers...

PC





[lxc-users] Mysterious behaviour

2018-11-28 Thread Pierre Couderc
1- I was in an SSH console connected to an LXD container (an nginx web 
server) named www.



root@www:~# ls -lh /var/www
total 0

It should not be empty!

And the web server runs but has no data (404 errors)!

2- On another console I run:

root@server:~# lxc config device show www
...
srv_www:
  path: /var/www
  source: /srv/www
  type: disk

3- I stop the container and restart it.

Then all seems normal: my /var/www is correct and the web server is OK.
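For reference, removing and re-adding a disk device like this one looks roughly as follows (same names as in the config above); whether that alone would have re-triggered the mount without a full restart, I cannot say:

lxc config device remove www srv_www
lxc config device add www srv_www disk source=/srv/www path=/var/www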

But the ECDSA SSH key of the www container has changed. Maybe it is not 
related (that is, maybe it was a mistake of mine), but I mention it.




Re: [lxc-users] unable to start any container ("Permission denied - Failed to mount")

2018-09-28 Thread Pierre Couderc


On 09/24/2018 03:27 PM, Tomasz Chmielewski wrote:

I'm not able to start any container today.

# lxc start preprod-app
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart preprod-app 
/var/snap/lxd/common/lxd/containers 
/var/snap/lxd/common/lxd/logs/preprod-app/lxc.conf:

Try `lxc info --show-log preprod-app` for more info

I got this too when trying to install LXD with snap.


Re: [lxc-users] lxd under stretch

2018-09-25 Thread Pierre Couderc

On 09/25/2018 03:35 PM, Pierre Couderc wrote:


I have found a solution without building it myself!
By using some specific Ubuntu .debs in a Debian environment 
(liblxc1_3.0.0-0ubuntu2_amd64.deb, lxd_3.0.0-0ubuntu4_amd64.deb, 
liblxc-common_3.0.0-0ubuntu2_amd64.deb, 
lxd-client_3.0.0-0ubuntu4_amd64.deb). It is being tested.
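(For the record, installing those looks roughly like this; the ordering is my guess, since lxd depends on the liblxc packages:)

dpkg -i liblxc1_3.0.0-0ubuntu2_amd64.deb liblxc-common_3.0.0-0ubuntu2_amd64.deb
dpkg -i lxd-client_3.0.0-0ubuntu4_amd64.deb lxd_3.0.0-0ubuntu4_amd64.deb
apt-get install -f    # pull in any missing Debian dependencies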

Anyway, I have a problem starting my first container. The container log is:
lxc 20180925134923.392 ERROR    lxc_utils - utils.c:open_devnull:1751 - Permission denied - Can't open /dev/null
lxc 20180925134923.392 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
lxc 20180925134923.424 ERROR    lxc_container - lxccontainer.c:wait_on_daemonized_start:824 - Received container state "ABORTING" instead of "RUNNING"
lxc 20180925134923.424 ERROR    lxc_start - start.c:__lxc_start:1866 - Failed to spawn container "blessed-hamster"
lxc 20180925134923.437 WARN lxc_commands - commands.c:lxc_cmd_rsp_recv:130 - Connection reset by peer - Failed to receive response for command "get_cgroup"


My kernel is 4.18.0.1 and apparmor is 2.13-8.

The init is standard, with a bridge attached after init with:
lxc network attach-profile br0 default eth0

I cannot find anything similar by googling.
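In case it helps someone searching later: the first error is the container failing to open /dev/null, so the checks I would start with (only guesses on my part) are:

ls -l /dev/null                     # should be crw-rw-rw- root root 1, 3
lxc info --show-log blessed-hamster # full container log
aa-status | grep -i lxc             # is an AppArmor profile interfering?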

Re: [lxc-users] lxd under stretch

2018-09-25 Thread Pierre Couderc


On 09/25/2018 08:24 AM, Fajar A. Nugraha wrote:



I'd recommend you try lxd snap first instead of building yourself.
https://packages.debian.org/snapd
https://snapcraft.io/lxd
Mmm, I am not ready to use a system which updates my system automatically 
without asking me.


If it doesn't fit your requirement and you still need to build it 
yourself, try

https://github.com/lxc/lxd#installing-lxd-from-source
https://github.com/lxc/lxd/releases/tag/lxd-3.5

If you've followed the above and still have problems, it'd help if you 
write in detail what those problems are (i.e. not just "instabilities")



I have found a solution without building it myself!
By using some specific Ubuntu .debs in a Debian environment 
(liblxc1_3.0.0-0ubuntu2_amd64.deb, lxd_3.0.0-0ubuntu4_amd64.deb, 
liblxc-common_3.0.0-0ubuntu2_amd64.deb, 
lxd-client_3.0.0-0ubuntu4_amd64.deb). It is being tested.



Re: [lxc-users] lxd under stretch

2018-09-25 Thread Pierre Couderc

On 09/25/2018 08:24 AM, Fajar A. Nugraha wrote:
If it doesn't fit your requirement and you still need to build it 
yourself, try

https://github.com/lxc/lxd#installing-lxd-from-source
https://github.com/lxc/lxd/releases/tag/lxd-3.5



Thank you very much. I will go on.

Re: [lxc-users] lxd under stretch

2018-09-24 Thread Pierre Couderc



On 09/24/2018 10:20 AM, Andrey Repin wrote:


If you are asking such questions, you definitely should not build anything
yourself.

Thank you for your efficient answer, which I definitely intend not to 
follow ;)

Maybe my question is not very subtle. But you could have answered with something like
http://archive.ubuntu.com/ubuntu/pool/main/l/lxc/lxc_3.0.1-0ubuntu1~18.04.2.debian.tar.xz
or at least confirmed whether that is a correct answer?




Re: [lxc-users] Error launching first container

2018-09-23 Thread Pierre Couderc



On 09/23/2018 09:30 PM, Andrey Repin wrote:


I would check that you don't have two name resolution daemons running at once.
All too often I've seen systemd-resolved and resolvconf running in parallel,
causing all sorts of trouble.
You have to select one of them and disable the other, if this is your case.


Thank you, I shall check that I do not have two. What is sure is that you 
are right: my name resolution does not work! But the error message is 
a bit mysterious...
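If it is indeed two daemons fighting, something like this is probably what I will try (assuming systemd-resolved turns out to be the one to drop):

systemctl disable --now systemd-resolved
# then make sure /etc/resolv.conf points at a real resolver again,
# e.g. a plain "nameserver" line or one regenerated by resolvconf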


Re: [lxc-users] Error launching first container

2018-09-23 Thread Pierre Couderc
Thank you. I have used LXD for months, but usually after 
compiling it from source (under Debian).
I have not changed anything except installing with apt under bionic 
("reformatting" a computer that previously ran LXD under Debian).
I do not imagine bionic adds a hidden firewall, but I have no 
experience of Ubuntu, only Debian.

And 127.0.0.53 is a loopback address.
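(A quick comparison like the following should show whether the stub resolver at 127.0.0.53 is itself the problem; 8.8.8.8 here is just an arbitrary external resolver:)

readlink -f /etc/resolv.conf                  # stub file managed by systemd-resolved?
nslookup cloud-images.ubuntu.com 127.0.0.53   # does the local stub answer?
nslookup cloud-images.ubuntu.com 8.8.8.8      # does a direct resolver answer?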


On 09/23/2018 10:50 AM, toshinao wrote:

Hi.

Your machine may be located behind a firewall. In my case, even when the shell's 
environment variables such as http_proxy are properly configured, lxc tried to access 
127.0.0.53.
(I am not sure whether it was 53.)

I do not know how to directly access cloud-images.ubuntu.com from behind a firewall.
Here's what I did. I prepared lxc/lxd in two locations, A: a machine which has direct
internet access, and B: a machine behind the firewall. The goal is to launch a container
on B. (1) I launched a container on A, (2) generated an image of the container on A,
(3) copied the image from A to B, (4) imported the image on B, and (5) finally launched
a container from the imported image on B.

Of course these steps are tedious, and I hope there's a better way; the steps map onto roughly the commands sketched below.
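(Container and image names here are illustrative, and the exported filename/extension may differ by version:)

# On A (direct internet access):
lxc launch ubuntu:16.04 tmp
lxc stop tmp
lxc publish tmp --alias my-xenial        # step (2): make an image from the container
lxc image export my-xenial ./my-xenial   # writes a tarball to copy to B

# On B (behind the firewall), after copying the tarball over:
lxc image import ./my-xenial.tar.gz --alias my-xenial   # step (4)
lxc launch my-xenial mycontainer                        # step (5)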


On 2018/09/23 16:12, Pierre Couderc wrote:

LXD was just installed by apt on a freshly installed bionic, and after lxd init:

lxc launch ubuntu:16.04 my-ubuntu
Creating my-ubuntu
Error: Failed container creation: Get 
https://cloud-images.ubuntu.com/releases/streams/v1/index.json: lookup 
cloud-images.ubuntu.com on 127.0.0.53:53: server misbehaving

I have checked that the URL 
https://cloud-images.ubuntu.com/releases/streams/v1/index.json: fails (404), 
but not https://cloud-images.ubuntu.com/releases/streams/v1/index.json (without the 
trailing colon).

How do I create my first container?





Re: [lxc-users] Why can I not remove this (empty) directory?

2018-09-23 Thread Pierre Couderc

On 09/23/2018 11:43 AM, Andrey Repin wrote:

Greetings, Pierre Couderc!

Thank you.



root@server:~/ls# ls -lha /var/lib/lxd/storage-pools/default/containers/ajeter/
total 0
drwx--x--x 1 root root  0 Sep 22 15:17 .
drwxr-xr-x 1 root root 12 Sep 23 07:09 ..
root@server:~/ls# rmdir /var/lib/lxd/storage-pools/default/containers/ajeter/
rmdir: failed to remove '/var/lib/lxd/storage-pools/default/containers/ajeter/': Operation not permitted

Usually this happens when you are trying to remove a mount point.

And thank you again: it is a btrfs subvolume.
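So, for the record, the removal needs the btrfs tooling rather than rmdir (this is what I would expect to work, not verified here):

root@server:~/ls# btrfs subvolume show /var/lib/lxd/storage-pools/default/containers/ajeter/    # confirm it is a subvolume
root@server:~/ls# btrfs subvolume delete /var/lib/lxd/storage-pools/default/containers/ajeter/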



root@server:~/ls#
???





[lxc-users] Why can I not remove this (empty) directory?

2018-09-23 Thread Pierre Couderc
root@server:~/ls# ls -lha /var/lib/lxd/storage-pools/default/containers/ajeter/
total 0
drwx--x--x 1 root root  0 Sep 22 15:17 .
drwxr-xr-x 1 root root 12 Sep 23 07:09 ..
root@server:~/ls# rmdir /var/lib/lxd/storage-pools/default/containers/ajeter/
rmdir: failed to remove '/var/lib/lxd/storage-pools/default/containers/ajeter/': Operation not permitted
root@server:~/ls#

???

PC


[lxc-users] Error launching first container

2018-09-23 Thread Pierre Couderc
LXD was just installed by apt on a freshly installed bionic, and after lxd 
init:


lxc launch ubuntu:16.04 my-ubuntu
Creating my-ubuntu
Error: Failed container creation: Get 
https://cloud-images.ubuntu.com/releases/streams/v1/index.json: lookup 
cloud-images.ubuntu.com on 127.0.0.53:53: server misbehaving


I have checked that the URL 
https://cloud-images.ubuntu.com/releases/streams/v1/index.json: fails 
(404), but not 
https://cloud-images.ubuntu.com/releases/streams/v1/index.json (without the 
trailing colon).


How do I create my first container?



Re: [lxc-users] Instabilities

2018-08-30 Thread Pierre Couderc



On 08/30/2018 05:38 PM, Pierre Couderc wrote:
- About the problem "Instabiltie", I have found a mistake of mine that 
maybe may  explain : I had another release of lxd (dated 12 august) on 
the same computer, and I may  have started it by mistake ans this 
could explin the "instabilities". So let us wait and now that I have 
cleared the old release





The problem has come back. I try:

 lxd import debian
Error: Post http://unix.socket/internal/containers: EOF

and then I can no longer run "lxc ls" (it loops)...

Could it be that, by mistake, I typed "lxc import debian" before "lxd 
import debian"?


But all my containers stay active as long as I do not reboot.


Re: [lxc-users] Instabilities

2018-08-30 Thread Pierre Couderc
But I have built it following the instructions of "Installing LXD from source" 
from github.com/lxc/lxd. The only point is that I did not find liblxc-dev in 
the stretch repository.
   On Thursday, 30 August 2018 at 17:17:00 UTC+2, Free Ekanayaka wrote:
 
 It's not the "good one". You need to build it from source from:

github.com/CanonicalLtd/sqlite

or there are debs here:

https://launchpad.net/~dqlite-maintainers/+archive/ubuntu/master


Pierre Couderc  writes:

> Maybe it is linked, or not.
> On another computer, I tried to compile on stretch, but I cannot install 
> liblxc-dev (it does not exist in stretch).
> I compile anyway and get:
>   CCLD libdqlite.la
> /usr/bin/ld: cannot find -lsqlite3
> collect2: error: ld returned 1 exit status
> Makefile:812: recipe for target 'libdqlite.la' failed
> make[2]: *** [libdqlite.la] Error 1
> make[2]: Leaving directory '/root/go/deps/dqlite'
> Makefile:697: recipe for target 'all' failed
> make[1]: *** [all] Error 2
> make[1]: Leaving directory '/root/go/deps/dqlite'
> Makefile:30: recipe for target 'deps' failed
> make: *** [deps] Error 2
>
> So I apt install libsqlite3-dev and then the deps build.
> But is it the "good" libsqlite3?
>    On Thursday, 30 August 2018 at 15:16:55 UTC+2, Pierre Couderc 
> wrote:  
>  
>  Well, I will prepare it and send it to you. The first time I had removed the 
> containers; this time I left them. Please consider them as private data.
>    On Thursday, 30 August 2018 at 14:56:36 UTC+2, Free Ekanayaka 
> wrote:
>  
>  Hello,
>
> yes, I believe I understand. What's puzzling is that I should be able to
> reproduce your problem using database. Would you mind sending me again a
> tarball of the /var/lib/lxd/database directory of a LXD which is
> currently broken? Just to double check. I don't have any other idea atm.
>
> Pierre Couderc  writes:
>
>> When I start a new clean lxd instance, I can run lxd init and launch a first container.
>> Then I try to work; it works, and I successfully import from another lxd.
>> At some point, some lxc command fails, such as lxc copy (local).
>> Then nothing works any more; every lxc command gets a socket error.
>> If I reboot, "lxc ls" gives the same error and the messages that I have sent.
>> I hope I am clear...
>>
>>    On Thursday, 30 August 2018 at 14:07:19 UTC+2, Free Ekanayaka 
>> wrote:
>>  
>>  I have a few questions:
>>
>> 1) Does the failure happen when you start with a fresh lxd instance?
>>
>> 2) If the answer to 1) is "no", is there a repeatable process that you
>>   have that brings you from a fresh lxd instance to the point where it
>>   crashes with the failure you pasted?
>>
>> 3) Regardless of the answers to 1) and 2), does the failure happen
>>   consistently? I.e. does it happen every time you run "lxc ls".
>>
>> Free
>>
>> Pierre Couderc  writes:
>>
>>> I am on the latest releases from git. For dqlite, the last log entry is:
>>>
>>> commit f160665d9e50e39d156591546732a2e0b3712f73
>>> Author: Free Ekanayaka 
>>> Date:   Mon Aug 20 19:04:10 2018 +0200
>>>
>>> Mmm, I can send you my tarball again, but it will be the same as the one I sent 
>>> you before...
>>>  It seems the problem is linked to my computer... Maybe I could enable 
>>> some traces on my computer?
>>>
>>>
>>>
>>>    On Thursday, 30 August 2018 at 13:02:44 UTC+2, Free Ekanayaka 
>>> wrote:
>>>  
>>>  Hello,
>>>
>>> this seems the same failure you reported earlier (thread with subject
>>> "lxd refuses to start ...").
>>>
>>> When you sent me the database tarball last time, I didn't see any issue
>>> and I could not reproduce the failure. Can you please double check that
>>> your version of the dqlite C library is up to date (tag v0.2.2) and the
>>> go-dqlite git clone under GOPATH actually points to the master version
>>> on github? Just run "git status" under 
>>> $GOPATH/github.com/CanonicalLtd/go-dqlite
>>> and compare it with github.
>>>
>>> If all your dependencies turn out to be up-to-date, you may want to
>>> again send me a tarball of your /var/lib/lxd/database directory, and
>>> I'll double check too.
>>>
>>> Free
>>>
>>> Pierre Couderc  writes:
>>>
>>>> Currently I have many instabilities with lxd.
>>>> When I try to start it, I get:
>>>> nous@couderc:~$ export GOPATH=~/go
>>>> nous@couderc:~$ sudo -E -s
>>>> root@couderc:~# echo $LD_LIBRARY_PATH
>>>> /home/nous/go/deps/sqlite/

Re: [lxc-users] Instabilities

2018-08-30 Thread Pierre Couderc
- About the problem "Instabiltie", I have found a mistake of mine that maybe 
may  explain : I had another release of lxd (dated 12 august) on the same 
computer, and I may  have started it by mistake ans this could explin the 
"instabilities". So let us wait and now that I have cleared the old release


   On Thursday, 30 August 2018 at 17:17:00 UTC+2, Free Ekanayaka wrote:
 
 It's not the "good one". You need to build it from source from:

github.com/CanonicalLtd/sqlite

or there are debs here:

https://launchpad.net/~dqlite-maintainers/+archive/ubuntu/master


Pierre Couderc  writes:

> Maybe it is linked, or not.
> On another computer, I tried to compile on stretch, but I cannot install 
> liblxc-dev (it does not exist in stretch).
> I compile anyway and get:
>   CCLD libdqlite.la
> /usr/bin/ld: cannot find -lsqlite3
> collect2: error: ld returned 1 exit status
> Makefile:812: recipe for target 'libdqlite.la' failed
> make[2]: *** [libdqlite.la] Error 1
> make[2]: Leaving directory '/root/go/deps/dqlite'
> Makefile:697: recipe for target 'all' failed
> make[1]: *** [all] Error 2
> make[1]: Leaving directory '/root/go/deps/dqlite'
> Makefile:30: recipe for target 'deps' failed
> make: *** [deps] Error 2
>
> So I apt install libsqlite3-dev and then the deps build.
> But is it the "good" libsqlite3?
>    On Thursday, 30 August 2018 at 15:16:55 UTC+2, Pierre Couderc 
> wrote:  
>  
>  Well, I will prepare it and send it to you. The first time I had removed the 
> containers; this time I left them. Please consider them as private data.
>    On Thursday, 30 August 2018 at 14:56:36 UTC+2, Free Ekanayaka 
> wrote:
>  
>  Hello,
>
> yes, I believe I understand. What's puzzling is that I should be able to
> reproduce your problem using database. Would you mind sending me again a
> tarball of the /var/lib/lxd/database directory of a LXD which is
> currently broken? Just to double check. I don't have any other idea atm.
>
> Pierre Couderc  writes:
>
>> When I start a new clean lxd instance, I can run lxd init and launch a first container.
>> Then I try to work; it works, and I successfully import from another lxd.
>> At some point, some lxc command fails, such as lxc copy (local).
>> Then nothing works any more; every lxc command gets a socket error.
>> If I reboot, "lxc ls" gives the same error and the messages that I have sent.
>> I hope I am clear...
>>
>>    On Thursday, 30 August 2018 at 14:07:19 UTC+2, Free Ekanayaka 
>> wrote:
>>  
>>  I have a few questions:
>>
>> 1) Does the failure happen when you start with a fresh lxd instance?
>>
>> 2) If the answer to 1) is "no", is there a repeatable process that you
>>   have that brings you from a fresh lxd instance to the point where it
>>   crashes with the failure you pasted?
>>
>> 3) Regardless of the answers to 1) and 2), does the failure happen
>>   consistently? I.e. does it happen every time you run "lxc ls".
>>
>> Free
>>
>> Pierre Couderc  writes:
>>
>>> I am on the latest releases from git. For dqlite, the last log entry is:
>>>
>>> commit f160665d9e50e39d156591546732a2e0b3712f73
>>> Author: Free Ekanayaka 
>>> Date:   Mon Aug 20 19:04:10 2018 +0200
>>>
>>> Mmm, I can send you my tarball again, but it will be the same as the one I sent 
>>> you before...
>>>  It seems the problem is linked to my computer... Maybe I could enable 
>>> some traces on my computer?
>>>
>>>
>>>
>>>    On Thursday, 30 August 2018 at 13:02:44 UTC+2, Free Ekanayaka 
>>> wrote:
>>>  
>>>  Hello,
>>>
>>> this seems the same failure you reported earlier (thread with subject
>>> "lxd refuses to start ...").
>>>
>>> When you sent me the database tarball last time, I didn't see any issue
>>> and I could not reproduce the failure. Can you please double check that
>>> your version of the dqlite C library is up to date (tag v0.2.2) and the
>>> go-dqlite git clone under GOPATH actually points to the master version
>>> on github? Just run "git status" under 
>>> $GOPATH/github.com/CanonicalLtd/go-dqlite
>>> and compare it with github.
>>>
>>> If all your dependencies turn out to be up-to-date, you may want to
>>> again send me a tarball of your /var/lib/lxd/database directory, and
>>> I'll double check too.
>>>
>>> Free
>>>
>>> Pierre Couderc  writes:
>>>
>>>> Currently I have many instabilities with lxd.
>>>> When I try to start it, I get:
>>>> nous@couderc

Re: [lxc-users] Instabilities

2018-08-30 Thread Pierre Couderc


Maybe it is linked, or not.
On another computer, I tried to compile on stretch, but I cannot install 
liblxc-dev (it does not exist in stretch).
I compile anyway and get:
  CCLD libdqlite.la
/usr/bin/ld: cannot find -lsqlite3
collect2: error: ld returned 1 exit status
Makefile:812: recipe for target 'libdqlite.la' failed
make[2]: *** [libdqlite.la] Error 1
make[2]: Leaving directory '/root/go/deps/dqlite'
Makefile:697: recipe for target 'all' failed
make[1]: *** [all] Error 2
make[1]: Leaving directory '/root/go/deps/dqlite'
Makefile:30: recipe for target 'deps' failed
make: *** [deps] Error 2

So I apt install libsqlite3-dev and then the deps build.
But is it the "good" libsqlite3?
   On Thursday, 30 August 2018 at 15:16:55 UTC+2, Pierre Couderc 
wrote:  
 
 Well, I will prepare it and send it to you. The first time I had removed the containers; 
this time I left them. Please consider them as private data.
   On Thursday, 30 August 2018 at 14:56:36 UTC+2, Free Ekanayaka 
wrote:
 
 Hello,

yes, I believe I understand. What's puzzling is that I should be able to
reproduce your problem using database. Would you mind sending me again a
tarball of the /var/lib/lxd/database directory of a LXD which is
currently broken? Just to double check. I don't have any other idea atm.

Pierre Couderc  writes:

> When I start a new clean lxd instance, I can run lxd init and launch a first container.
> Then I try to work; it works, and I successfully import from another lxd.
> At some point, some lxc command fails, such as lxc copy (local).
> Then nothing works any more; every lxc command gets a socket error.
> If I reboot, "lxc ls" gives the same error and the messages that I have sent.
> I hope I am clear...
>
>    On Thursday, 30 August 2018 at 14:07:19 UTC+2, Free Ekanayaka 
> wrote:
>  
>  I have a few questions:
>
> 1) Does the failure happen when you start with a fresh lxd instance?
>
> 2) If the answer to 1) is "no", is there a repeatable process that you
>   have that brings you from a fresh lxd instance to the point where it
>   crashes with the failure you pasted?
>
> 3) Regardless of the answers to 1) and 2), does the failure happen
>   consistently? I.e. does it happen every time you run "lxc ls".
>
> Free
>
> Pierre Couderc  writes:
>
>> I am on the latest releases from git. For dqlite, the last log entry is:
>>
>> commit f160665d9e50e39d156591546732a2e0b3712f73
>> Author: Free Ekanayaka 
>> Date:   Mon Aug 20 19:04:10 2018 +0200
>>
>> Mmm, I can send you my tarball again, but it will be the same as the one I sent 
>> you before...
>>  It seems the problem is linked to my computer... Maybe I could enable 
>> some traces on my computer?
>>
>>
>>
>>    On Thursday, 30 August 2018 at 13:02:44 UTC+2, Free Ekanayaka 
>> wrote:
>>  
>>  Hello,
>>
>> this seems the same failure you reported earlier (thread with subject
>> "lxd refuses to start ...").
>>
>> When you sent me the database tarball last time, I didn't see any issue
>> and I could not reproduce the failure. Can you please double check that
>> your version of the dqlite C library is up to date (tag v0.2.2) and the
>> go-dqlite git close under GOPATH actually points to the master version
>> on github? Just run "git status" under 
>> $GOPATH/github.com/CanonicalLtd/go-dqlite
>> and compare it with github.
>>
>> If all your dependencies turn out to be up-to-date, you may want to
>> again send me a tarball of your /var/lib/lxd/database directory, and
>> I'll double check too.
>>
>> Free
>>
>> Pierre Couderc  writes:
>>
>>> Currently I have many instabilities with lxd.
>>> When I try to start it, I get:
>>> nous@couderc:~$ export GOPATH=~/go
>>> nous@couderc:~$ sudo -E -s
>>> root@couderc:~# echo $LD_LIBRARY_PATH
>>> /home/nous/go/deps/sqlite/.libs/:/home/nous/go/deps/dqlite/.libs/
>>> root@couderc:~# cd go/bin
>>> root@couderc:~/go/bin# ls
>>> deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity
>>> root@couderc:~/go/bin# nohup lxd --group sudo &
>>> [1] 1202
>>> root@couderc:~/go/bin# nohup: les entrées sont ignorées et la sortie est ajoutée à 'nohup.out'
>>> ls
>>> deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity  nohup.out
>>> [1]+  Termine 2               nohup lxd --group sudo
>>> root@couderc:~/go/bin# lxc ls
>>> Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: connection refused
>>> root@couderc:~/go/bin# cat nohup.out
>>> lvl=warn msg="AppArmor support has been disabled because of lack of kernel support" t=2018-08-30T12:23:21+0200
>>> lvl=warn msg="CGroup

Re: [lxc-users] Instabilities

2018-08-30 Thread Pierre Couderc
When I start a new clean lxd instance, I can run lxd init and launch a first container.
Then I try to work; it works, and I successfully import from another lxd.
At some point, some lxc command fails, such as lxc copy (local).
Then nothing works any more; every lxc command gets a socket error.
If I reboot, "lxc ls" gives the same error and the messages that I have sent.
I hope I am clear...

   On Thursday, 30 August 2018 at 14:07:19 UTC+2, Free Ekanayaka 
wrote:
 
 I have a few questions:

1) Does the failure happen when you start with a fresh lxd instance?

2) If the answer to 1) is "no", is there a repeatable process that you
  have that brings you from a fresh lxd instance to the point where it
  crashes with the failure you pasted?

3) Regardless of the answers to 1) and 2), does the failure happen
  consistently? I.e. does it happen every time you run "lxc ls".

Free

Pierre Couderc  writes:

> I am on the latest releases from git. For dqlite, the last log entry is:
>
> commit f160665d9e50e39d156591546732a2e0b3712f73
> Author: Free Ekanayaka 
> Date:   Mon Aug 20 19:04:10 2018 +0200
>
> Mmm, I can send you my tarball again, but it will be the same as the one I sent 
> you before...
>  It seems the problem is linked to my computer... Maybe I could enable some 
> traces on my computer?
>
>
>
>    On Thursday, 30 August 2018 at 13:02:44 UTC+2, Free Ekanayaka 
> wrote:
>  
>  Hello,
>
> this seems the same failure you reported earlier (thread with subject
> "lxd refuses to start ...").
>
> When you sent me the database tarball last time, I didn't see any issue
> and I could not reproduce the failure. Can you please double check that
> your version of the dqlite C library is up to date (tag v0.2.2) and the
> go-dqlite git close under GOPATH actually points to the master version
> on github? Just run "git status" under 
> $GOPATH/github.com/CanonicalLtd/go-dqlite
> and compare it with github.
>
> If all your dependencies turn out to be up-to-date, you may want to
> again send me a tarball of your /var/lib/lxd/database directory, and
> I'll double check too.
>
> Free
>
> Pierre Couderc  writes:
>
>> Currently I have many instabilities with lxd.
>> When I try to start it, I get:
>> nous@couderc:~$ export GOPATH=~/go
>> nous@couderc:~$ sudo -E -s
>> root@couderc:~# echo $LD_LIBRARY_PATH
>> /home/nous/go/deps/sqlite/.libs/:/home/nous/go/deps/dqlite/.libs/
>> root@couderc:~# cd go/bin
>> root@couderc:~/go/bin# ls
>> deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity
>> root@couderc:~/go/bin# nohup lxd --group sudo &
>> [1] 1202
>> root@couderc:~/go/bin# nohup: les entrées sont ignorées et la sortie est ajoutée à 'nohup.out'
>> ls
>> deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity  nohup.out
>> [1]+  Termine 2               nohup lxd --group sudo
>> root@couderc:~/go/bin# lxc ls
>> Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: connection refused
>> root@couderc:~/go/bin# cat nohup.out
>> lvl=warn msg="AppArmor support has been disabled because of lack of kernel support" t=2018-08-30T12:23:21+0200
>> lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-08-30T12:23:21+0200
>> panic: unknown data type
>>
>> goroutine 1 [running]:
>> github.com/CanonicalLtd/go-dqlite/internal/client.(*Rows).Next(0xc42000d660, 0xc4203ec6c0, 0x3, 0x3, 0xc420044070, 0xc4204b8bd0)
>>         /home/nous/go/src/github.com/CanonicalLtd/go-dqlite/internal/client/message.go:549 +0x914
>> github.com/CanonicalLtd/go-dqlite.(*Rows).Next(0xc42000d660, 0xc4203ec6c0, 0x3, 0x3, 0xf24e40, 0xc4200dd268)
>>         /home/nous/go/src/github.com/CanonicalLtd/go-dqlite/driver.go:515 +0x4b
>> database/sql.(*Rows).nextLocked(0xc4201ecc00, 0xc42024)
>>         /usr/lib/go-1.10/src/database/sql/sql.go:2622 +0xc4
>> database/sql.(*Rows).Next.func1()
>>         /usr/lib/go-1.10/src/database/sql/sql.go:2600 +0x3c
>> database/sql.withLock(0x11fa640, 0xc4201ecc30, 0xc4204b8c88)
>>         /usr/lib/go-1.10/src/database/sql/sql.go:3032 +0x63
>> database/sql.(*Rows).Next(0xc4201ecc00, 0xc4203ed080)
>>         /usr/lib/go-1.10/src/database/sql/sql.go:2599 +0x7a
>> github.com/lxc/lxd/lxd/db/query.SelectObjects(0xc4201eca00, 0xc4203e9c70, 0xc4203ba000, 0xc0, 0xc4203e9b70, 0x1, 0x1, 0x0, 0x0)
>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/objects.go:18 +0xda
>> github.com/lxc/lxd/lxd/db.(*ClusterTx).containerArgsList(0xc4203e9b30, 0x1201201, 0xc4200ba030, 0x0, 0xc420270101, 0xc4201eca00, 0x0)
>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:4

Re: [lxc-users] Instabilities

2018-08-30 Thread Pierre Couderc
I am on the latest releases from git. For dqlite, the last log entry is:

commit f160665d9e50e39d156591546732a2e0b3712f73
Author: Free Ekanayaka 
Date:   Mon Aug 20 19:04:10 2018 +0200

Mmm, I can send you my tarball again, but it will be the same as the one I sent you 
before...
 It seems the problem is linked to my computer... Maybe I could enable some 
traces on my computer?



   On Thursday, 30 August 2018 at 13:02:44 UTC+2, Free Ekanayaka 
wrote:
 
 Hello,

this seems the same failure you reported earlier (thread with subject
"lxd refuses to start ...").

When you sent me the database tarball last time, I didn't see any issue
and I could not reproduce the failure. Can you please double check that
your version of the dqlite C library is up to date (tag v0.2.2) and the
go-dqlite git clone under GOPATH actually points to the master version
on github? Just run "git status" under $GOPATH/github.com/CanonicalLtd/go-dqlite
and compare it with github.

If all your dependencies turn out to be up-to-date, you may want to
again send me a tarball of your /var/lib/lxd/database directory, and
I'll double check too.

Free

Pierre Couderc  writes:

> Currently I have many instabilities with lxd.
> When I try to start it, I get:
> nous@couderc:~$ export GOPATH=~/go
> nous@couderc:~$ sudo -E -s
> root@couderc:~# echo $LD_LIBRARY_PATH
> /home/nous/go/deps/sqlite/.libs/:/home/nous/go/deps/dqlite/.libs/
> root@couderc:~# cd go/bin
> root@couderc:~/go/bin# ls
> deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity
> root@couderc:~/go/bin# nohup lxd --group sudo &
> [1] 1202
> root@couderc:~/go/bin# nohup: les entrées sont ignorées et la sortie est ajoutée à 'nohup.out'
> ls
> deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity  nohup.out
> [1]+  Termine 2               nohup lxd --group sudo
> root@couderc:~/go/bin# lxc ls
> Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: connection refused
> root@couderc:~/go/bin# cat nohup.out
> lvl=warn msg="AppArmor support has been disabled because of lack of kernel support" t=2018-08-30T12:23:21+0200
> lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-08-30T12:23:21+0200
> panic: unknown data type
>
> goroutine 1 [running]:
> github.com/CanonicalLtd/go-dqlite/internal/client.(*Rows).Next(0xc42000d660, 0xc4203ec6c0, 0x3, 0x3, 0xc420044070, 0xc4204b8bd0)
>         /home/nous/go/src/github.com/CanonicalLtd/go-dqlite/internal/client/message.go:549 +0x914
> github.com/CanonicalLtd/go-dqlite.(*Rows).Next(0xc42000d660, 0xc4203ec6c0, 0x3, 0x3, 0xf24e40, 0xc4200dd268)
>         /home/nous/go/src/github.com/CanonicalLtd/go-dqlite/driver.go:515 +0x4b
> database/sql.(*Rows).nextLocked(0xc4201ecc00, 0xc42024)
>         /usr/lib/go-1.10/src/database/sql/sql.go:2622 +0xc4
> database/sql.(*Rows).Next.func1()
>         /usr/lib/go-1.10/src/database/sql/sql.go:2600 +0x3c
> database/sql.withLock(0x11fa640, 0xc4201ecc30, 0xc4204b8c88)
>         /usr/lib/go-1.10/src/database/sql/sql.go:3032 +0x63
> database/sql.(*Rows).Next(0xc4201ecc00, 0xc4203ed080)
>         /usr/lib/go-1.10/src/database/sql/sql.go:2599 +0x7a
> github.com/lxc/lxd/lxd/db/query.SelectObjects(0xc4201eca00, 0xc4203e9c70, 0xc4203ba000, 0xc0, 0xc4203e9b70, 0x1, 0x1, 0x0, 0x0)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/objects.go:18 +0xda
> github.com/lxc/lxd/lxd/db.(*ClusterTx).containerArgsList(0xc4203e9b30, 0x1201201, 0xc4200ba030, 0x0, 0xc420270101, 0xc4201eca00, 0x0)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:442 +0x5a7
> github.com/lxc/lxd/lxd/db.(*ClusterTx).ContainerArgsNodeList(0xc4203e9b30, 0x0, 0x0, 0xc4204b90a8, 0x771d7c, 0xc420018dc0)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:347 +0x30
> main.containerLoadNodeAll.func1(0xc4203e9b30, 0x0, 0x0)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1200 +0x38
> github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1.1(0xc4201eca00, 0xc4201eca00, 0x0)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:309 +0x42
> github.com/lxc/lxd/lxd/db/query.Transaction(0xc420018dc0, 0xc4204b9130, 0x7, 0x8)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/transaction.go:17 +0x5a
> github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1(0x7f3554e01000, 0x0)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:307 +0x55
> github.com/lxc/lxd/lxd/db/query.Retry(0xc4204b91e0, 0xc4203e9b30, 0x434b69)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/retry.go:20 +0xae
> github.com/lxc/lxd/lxd/db.(*Cluster).transaction(0xc42026ca50, 0xc4204b9290, 0xc42026ca60, 0xc420272b60)
>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:306 +0x

[lxc-users] Instabilities

2018-08-30 Thread Pierre Couderc
Currently I have many instabilities with LXD.
When I try to start it, I get:
nous@couderc:~$ export GOPATH=~/go
nous@couderc:~$ sudo -E -s
root@couderc:~# echo $LD_LIBRARY_PATH
/home/nous/go/deps/sqlite/.libs/:/home/nous/go/deps/dqlite/.libs/
root@couderc:~# cd go/bin
root@couderc:~/go/bin# ls
deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity
root@couderc:~/go/bin# nohup lxd --group sudo &
[1] 1202
root@couderc:~/go/bin# nohup: les entrées sont ignorées et la sortie est ajoutée à 'nohup.out'
ls
deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity  nohup.out
[1]+  Termine 2               nohup lxd --group sudo
root@couderc:~/go/bin# lxc ls
Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: connection refused
root@couderc:~/go/bin# cat nohup.out
lvl=warn msg="AppArmor support has been disabled because of lack of kernel support" t=2018-08-30T12:23:21+0200
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-08-30T12:23:21+0200
panic: unknown data type

goroutine 1 [running]:
github.com/CanonicalLtd/go-dqlite/internal/client.(*Rows).Next(0xc42000d660, 0xc4203ec6c0, 0x3, 0x3, 0xc420044070, 0xc4204b8bd0)
        /home/nous/go/src/github.com/CanonicalLtd/go-dqlite/internal/client/message.go:549 +0x914
github.com/CanonicalLtd/go-dqlite.(*Rows).Next(0xc42000d660, 0xc4203ec6c0, 0x3, 0x3, 0xf24e40, 0xc4200dd268)
        /home/nous/go/src/github.com/CanonicalLtd/go-dqlite/driver.go:515 +0x4b
database/sql.(*Rows).nextLocked(0xc4201ecc00, 0xc42024)
        /usr/lib/go-1.10/src/database/sql/sql.go:2622 +0xc4
database/sql.(*Rows).Next.func1()
        /usr/lib/go-1.10/src/database/sql/sql.go:2600 +0x3c
database/sql.withLock(0x11fa640, 0xc4201ecc30, 0xc4204b8c88)
        /usr/lib/go-1.10/src/database/sql/sql.go:3032 +0x63
database/sql.(*Rows).Next(0xc4201ecc00, 0xc4203ed080)
        /usr/lib/go-1.10/src/database/sql/sql.go:2599 +0x7a
github.com/lxc/lxd/lxd/db/query.SelectObjects(0xc4201eca00, 0xc4203e9c70, 0xc4203ba000, 0xc0, 0xc4203e9b70, 0x1, 0x1, 0x0, 0x0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/objects.go:18 +0xda
github.com/lxc/lxd/lxd/db.(*ClusterTx).containerArgsList(0xc4203e9b30, 0x1201201, 0xc4200ba030, 0x0, 0xc420270101, 0xc4201eca00, 0x0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:442 +0x5a7
github.com/lxc/lxd/lxd/db.(*ClusterTx).ContainerArgsNodeList(0xc4203e9b30, 0x0, 0x0, 0xc4204b90a8, 0x771d7c, 0xc420018dc0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:347 +0x30
main.containerLoadNodeAll.func1(0xc4203e9b30, 0x0, 0x0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1200 +0x38
github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1.1(0xc4201eca00, 0xc4201eca00, 0x0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:309 +0x42
github.com/lxc/lxd/lxd/db/query.Transaction(0xc420018dc0, 0xc4204b9130, 0x7, 0x8)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/transaction.go:17 +0x5a
github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1(0x7f3554e01000, 0x0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:307 +0x55
github.com/lxc/lxd/lxd/db/query.Retry(0xc4204b91e0, 0xc4203e9b30, 0x434b69)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/retry.go:20 +0xae
github.com/lxc/lxd/lxd/db.(*Cluster).transaction(0xc42026ca50, 0xc4204b9290, 0xc42026ca60, 0xc420272b60)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:306 +0x6d
github.com/lxc/lxd/lxd/db.(*Cluster).Transaction(0xc42026ca50, 0xc4204b9290, 0x0, 0x0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:270 +0x80
main.containerLoadNodeAll(0xc4203ec420, 0x1902720, 0x4, 0xc42003e270, 0x2b, 0x1928910)
        /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1198 +0x67
main.deviceInotifyDirRescan(0xc4203ec420)
        /home/nous/go/src/github.com/lxc/lxd/lxd/devices.go:1844 +0x43
main.(*Daemon).init(0xc4202c2750, 0xc4202a78f0, 0x40e446)
        /home/nous/go/src/github.com/lxc/lxd/lxd/daemon.go:628 +0x13c4
main.(*Daemon).Init(0xc4202c2750, 0xc4202c2750, 0xc420092a80)
        /home/nous/go/src/github.com/lxc/lxd/lxd/daemon.go:363 +0x2f
main.(*cmdDaemon).Run(0xc420272980, 0xc4202b8500, 0xc420272a80, 0x0, 0x2, 0x0, 0x0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/main_daemon.go:61 +0x266
main.(*cmdDaemon).Run-fm(0xc4202b8500, 0xc420272a80, 0x0, 0x2, 0x0, 0x0)
        /home/nous/go/src/github.com/lxc/lxd/lxd/main_daemon.go:36 +0x52
github.com/spf13/cobra.(*Command).execute(0xc4202b8500, 0xc4200a4160, 0x2, 0x2, 0xc4202b8500, 0xc4200a4160)
        /home/nous/go/src/github.com/spf13/cobra/command.go:762 +0x468
github.com/spf13/cobra.(*Command).ExecuteC(0xc4202b8500, 0x0, 0xc4202c0c80, 0xc4202c0c80)
        /home/nous/go/src/github.com/spf13/cobra/command.go:852 +0x30a
github.com/spf13/cobra.(*Command).Execute(0xc4202b8500, 0xc4202a7e00, 0x1)
        /home/nous/go/src/github.com/spf13/cobra/command.go:800

Re: [lxc-users] lxd refuses to start ...

2018-08-26 Thread Pierre Couderc

Sure, I will send it to you by separate mail.


On 08/26/2018 11:01 AM, Free Ekanayaka wrote:

Hello,

does this happen consistently? If so, could you please make a tarball of
the /database directory and send it to me? It might
be some bug in database code.

Free

Pierre Couderc  writes:


...and I am lost in the messages:


nous@couderc:~$ export GOPATH=~/go
...

nous@couderc:~/go/bin$ sudo -E -s

root@couderc:~/go/bin# nohup lxd --group sudo&
[1] 1228
root@couderc:~/go/bin# nohup: les entrées sont ignorées et la sortie est
ajoutée à 'nohup.out'
ls
deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c
macaroon-identity  nohup.out
[1]+  Termine 2   nohup lxd --group sudo
root@couderc:~/go/bin# lxc ls
Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket:
connect: connection refused
root@couderc:~/go/bin# cat nohup.out
lvl=warn msg="AppArmor support has been disabled because of lack of
kernel support" t=2018-08-26T08:14:44+0200
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits
will be ignored." t=2018-08-26T08:14:44+0200
panic: unknown data type

goroutine 1 [running]:
github.com/CanonicalLtd/go-dqlite/internal/client.(*Rows).Next(0xc42027a7e0,
0xc4202db620, 0x3, 0x3, 0xc420044070, 0xc42035cbd0)
/home/nous/go/src/github.com/CanonicalLtd/go-dqlite/internal/client/message.go:549
+0x914
github.com/CanonicalLtd/go-dqlite.(*Rows).Next(0xc42027a7e0,
0xc4202db620, 0x3, 0x3, 0xf24e40, 0xc42040b1a0)
/home/nous/go/src/github.com/CanonicalLtd/go-dqlite/driver.go:515 +0x4b
database/sql.(*Rows).nextLocked(0xc4202e5700, 0xc42025)
      /usr/lib/go-1.10/src/database/sql/sql.go:2622 +0xc4
database/sql.(*Rows).Next.func1()
      /usr/lib/go-1.10/src/database/sql/sql.go:2600 +0x3c
database/sql.withLock(0x11fa640, 0xc4202e5730, 0xc42035cc88)
      /usr/lib/go-1.10/src/database/sql/sql.go:3032 +0x63
database/sql.(*Rows).Next(0xc4202e5700, 0xc4202dbef0)
      /usr/lib/go-1.10/src/database/sql/sql.go:2599 +0x7a
github.com/lxc/lxd/lxd/db/query.SelectObjects(0xc4202e5500,
0xc4204f26e0, 0xc4204f80c0, 0xc0, 0xc420430b70, 0x1, 0x1, 0x0, 0x0)
/home/nous/go/src/github.com/lxc/lxd/lxd/db/query/objects.go:18 +0xda
github.com/lxc/lxd/lxd/db.(*ClusterTx).containerArgsList(0xc4204f25e0,
0x1201201, 0xc4200ba030, 0x0, 0xc420278001, 0xc4202e5500, 0x0)
/home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:442 +0x5a7
github.com/lxc/lxd/lxd/db.(*ClusterTx).ContainerArgsNodeList(0xc4204f25e0,
0x0, 0x0, 0xc42035d0a8, 0x771d7c, 0xc420566e60)
/home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:347 +0x30
main.containerLoadNodeAll.func1(0xc4204f25e0, 0x0, 0x0)
      /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1200 +0x38
github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1.1(0xc4202e5500,
0xc4202e5500, 0x0)
      /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:309 +0x42
github.com/lxc/lxd/lxd/db/query.Transaction(0xc420566e60, 0xc42035d130,
0x7, 0x8)
/home/nous/go/src/github.com/lxc/lxd/lxd/db/query/transaction.go:17 +0x5a
github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1(0x7f201e9496c8, 0x0)
      /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:307 +0x55
github.com/lxc/lxd/lxd/db/query.Retry(0xc42035d1e0, 0xc4204f25e0, 0x434b69)
/home/nous/go/src/github.com/lxc/lxd/lxd/db/query/retry.go:20 +0xae
github.com/lxc/lxd/lxd/db.(*Cluster).transaction(0xc4202f3980,
0xc42035d290, 0xc4202f3990, 0xc42027ab40)
      /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:306 +0x6d
github.com/lxc/lxd/lxd/db.(*Cluster).Transaction(0xc4202f3980,
0xc42035d290, 0x0, 0x0)
      /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:270 +0x80
main.containerLoadNodeAll(0xc4204f0810, 0x1902720, 0x4, 0xc4200b6f00,
0x2b, 0x1928910)
      /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1198 +0x67
main.deviceInotifyDirRescan(0xc4204f0810)
      /home/nous/go/src/github.com/lxc/lxd/lxd/devices.go:1844 +0x43
main.(*Daemon).init(0xc4202caa90, 0xc4202af8f0, 0x40e446)
      /home/nous/go/src/github.com/lxc/lxd/lxd/daemon.go:628 +0x13c4
main.(*Daemon).Init(0xc4202caa90, 0xc4202caa90, 0xc420092a80)
      /home/nous/go/src/github.com/lxc/lxd/lxd/daemon.go:363 +0x2f
main.(*cmdDaemon).Run(0xc42027a960, 0xc4202c0500, 0xc42027aa60, 0x0,
0x2, 0x0, 0x0)
      /home/nous/go/src/github.com/lxc/lxd/lxd/main_daemon.go:61 +0x266
main.(*cmdDaemon).Run-fm(0xc4202c0500, 0xc42027aa60, 0x0, 0x2, 0x0, 0x0)
      /home/nous/go/src/github.com/lxc/lxd/lxd/main_daemon.go:36 +0x52
github.com/spf13/cobra.(*Command).execute(0xc4202c0500, 0xc4200a4160,
0x2, 0x2, 0xc4202c0500, 0xc4200a4160)
      /home/nous/go/src/github.com/spf13/cobra/command.go:762 +0x468
github.com/spf13/cobra.(*Command).ExecuteC(0xc4202c0500, 0x0,
0xc4202c8c80, 0xc4202c8c80)
      /home/nous/go/src/github.com/spf13/cobra/command.go:852 +0x30a
github.com/spf13/cobra.(*Command).Execute(0xc4202c0500, 0xc4202afe00, 0x1)
      /home/nous/go/src/github.com/spf13/co

[lxc-users] lxd refuses to start ...

2018-08-26 Thread Pierre Couderc

...and I am lost in the messages:


nous@couderc:~$ export GOPATH=~/go
...

nous@couderc:~/go/bin$ sudo -E -s

root@couderc:~/go/bin# nohup lxd --group sudo&
[1] 1228
root@couderc:~/go/bin# nohup: les entrées sont ignorées et la sortie est 
ajoutée à 'nohup.out'

ls
deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c 
macaroon-identity  nohup.out

[1]+  Termine 2   nohup lxd --group sudo
root@couderc:~/go/bin# lxc ls
Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: 
connect: connection refused

root@couderc:~/go/bin# cat nohup.out
lvl=warn msg="AppArmor support has been disabled because of lack of 
kernel support" t=2018-08-26T08:14:44+0200
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits 
will be ignored." t=2018-08-26T08:14:44+0200

panic: unknown data type

goroutine 1 [running]:
github.com/CanonicalLtd/go-dqlite/internal/client.(*Rows).Next(0xc42027a7e0, 
0xc4202db620, 0x3, 0x3, 0xc420044070, 0xc42035cbd0)
/home/nous/go/src/github.com/CanonicalLtd/go-dqlite/internal/client/message.go:549 
+0x914
github.com/CanonicalLtd/go-dqlite.(*Rows).Next(0xc42027a7e0, 
0xc4202db620, 0x3, 0x3, 0xf24e40, 0xc42040b1a0)

/home/nous/go/src/github.com/CanonicalLtd/go-dqlite/driver.go:515 +0x4b
database/sql.(*Rows).nextLocked(0xc4202e5700, 0xc42025)
    /usr/lib/go-1.10/src/database/sql/sql.go:2622 +0xc4
database/sql.(*Rows).Next.func1()
    /usr/lib/go-1.10/src/database/sql/sql.go:2600 +0x3c
database/sql.withLock(0x11fa640, 0xc4202e5730, 0xc42035cc88)
    /usr/lib/go-1.10/src/database/sql/sql.go:3032 +0x63
database/sql.(*Rows).Next(0xc4202e5700, 0xc4202dbef0)
    /usr/lib/go-1.10/src/database/sql/sql.go:2599 +0x7a
github.com/lxc/lxd/lxd/db/query.SelectObjects(0xc4202e5500, 
0xc4204f26e0, 0xc4204f80c0, 0xc0, 0xc420430b70, 0x1, 0x1, 0x0, 0x0)

/home/nous/go/src/github.com/lxc/lxd/lxd/db/query/objects.go:18 +0xda
github.com/lxc/lxd/lxd/db.(*ClusterTx).containerArgsList(0xc4204f25e0, 
0x1201201, 0xc4200ba030, 0x0, 0xc420278001, 0xc4202e5500, 0x0)

/home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:442 +0x5a7
github.com/lxc/lxd/lxd/db.(*ClusterTx).ContainerArgsNodeList(0xc4204f25e0, 
0x0, 0x0, 0xc42035d0a8, 0x771d7c, 0xc420566e60)

/home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:347 +0x30
main.containerLoadNodeAll.func1(0xc4204f25e0, 0x0, 0x0)
    /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1200 +0x38
github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1.1(0xc4202e5500, 
0xc4202e5500, 0x0)

    /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:309 +0x42
github.com/lxc/lxd/lxd/db/query.Transaction(0xc420566e60, 0xc42035d130, 
0x7, 0x8)

/home/nous/go/src/github.com/lxc/lxd/lxd/db/query/transaction.go:17 +0x5a
github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1(0x7f201e9496c8, 0x0)
    /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:307 +0x55
github.com/lxc/lxd/lxd/db/query.Retry(0xc42035d1e0, 0xc4204f25e0, 0x434b69)
/home/nous/go/src/github.com/lxc/lxd/lxd/db/query/retry.go:20 +0xae
github.com/lxc/lxd/lxd/db.(*Cluster).transaction(0xc4202f3980, 
0xc42035d290, 0xc4202f3990, 0xc42027ab40)

    /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:306 +0x6d
github.com/lxc/lxd/lxd/db.(*Cluster).Transaction(0xc4202f3980, 
0xc42035d290, 0x0, 0x0)

    /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:270 +0x80
main.containerLoadNodeAll(0xc4204f0810, 0x1902720, 0x4, 0xc4200b6f00, 
0x2b, 0x1928910)

    /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1198 +0x67
main.deviceInotifyDirRescan(0xc4204f0810)
    /home/nous/go/src/github.com/lxc/lxd/lxd/devices.go:1844 +0x43
main.(*Daemon).init(0xc4202caa90, 0xc4202af8f0, 0x40e446)
    /home/nous/go/src/github.com/lxc/lxd/lxd/daemon.go:628 +0x13c4
main.(*Daemon).Init(0xc4202caa90, 0xc4202caa90, 0xc420092a80)
    /home/nous/go/src/github.com/lxc/lxd/lxd/daemon.go:363 +0x2f
main.(*cmdDaemon).Run(0xc42027a960, 0xc4202c0500, 0xc42027aa60, 0x0, 
0x2, 0x0, 0x0)

    /home/nous/go/src/github.com/lxc/lxd/lxd/main_daemon.go:61 +0x266
main.(*cmdDaemon).Run-fm(0xc4202c0500, 0xc42027aa60, 0x0, 0x2, 0x0, 0x0)
    /home/nous/go/src/github.com/lxc/lxd/lxd/main_daemon.go:36 +0x52
github.com/spf13/cobra.(*Command).execute(0xc4202c0500, 0xc4200a4160, 
0x2, 0x2, 0xc4202c0500, 0xc4200a4160)

    /home/nous/go/src/github.com/spf13/cobra/command.go:762 +0x468
github.com/spf13/cobra.(*Command).ExecuteC(0xc4202c0500, 0x0, 
0xc4202c8c80, 0xc4202c8c80)

    /home/nous/go/src/github.com/spf13/cobra/command.go:852 +0x30a
github.com/spf13/cobra.(*Command).Execute(0xc4202c0500, 0xc4202afe00, 0x1)
    /home/nous/go/src/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
    /home/nous/go/src/github.com/lxc/lxd/lxd/main.go:164 +0xea3
root@couderc:~/go/bin#
root@couderc:~/go/bin# ps aux | grep lx
root   359  0.0  0.0  95192  1324 ?    Ssl  08:06   0:00 
/usr/bin/lxcfs /var/lib/lxcfs/

root  1270  0.0  0.0  12784   956 pts/0    S+ 

Re: [lxc-users] Where is the list of remote lxds stored?

2018-08-25 Thread Pierre Couderc

On 08/25/2018 01:14 PM, Fajar A. Nugraha wrote:


Clients store the list of remotes (as well as the client certificate for that 
user) in ~/.config/lxc/



Ah, yes, in ~/.config/lxc/config.yml...
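For anyone searching later, the file looks roughly like this (the remote below is invented):

# ~/.config/lxc/config.yml
default-remote: local
remotes:
  myserver:
    addr: https://192.168.1.10:8443
    protocol: lxd
    public: false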

[lxc-users] Where is the list of remote lxds stored?

2018-08-25 Thread Pierre Couderc
Playing with LXD to understand it more (and because of a mysterious 
failure), I decided to re-init the whole LXD, so I deleted the full 
/var/lib/lxd and executed lxd init.


So I am surprised that:

lxc remote list

finds the old remote lxds from the old installation...

Are they stored elsewhere?



Re: [lxc-users] How to copy "manually" a container? (updated)

2018-08-23 Thread Pierre Couderc

On 08/23/2018 12:14 PM, Fajar A. Nugraha wrote:
On Thu, Aug 23, 2018 at 2:38 PM, Pierre Couderc wrote:


On 08/23/2018 09:24 AM, Fajar A. Nugraha wrote:

On Thu, Aug 23, 2018 at 2:07 PM, Pierre Couderc wrote:

On 08/23/2018 07:37 AM, Tamas Papp wrote:


On 08/23/2018 05:36 AM, Pierre Couderc wrote:

If for any reason, "lxc copy" does not work, is it
enough to copy (rsync) /var/lib/lxd/containers/
to another lxd on another computer in
/var/lib/lxd/containers/ ?


Copy the folder (watch out rsync flags) to
/var/lib/lxd/storage-pools/default/containers/, symlink
to /var/lib/lxd/containers and run 'lxd import'.

Thank you very much. It nearly worked.
Anyway, it fails (in this case) because:
Error: The storage pool's "default" driver "dir" conflicts
with the driver "btrfs" recorded in the container's backup file


If you know how lxd use btrfs to create the container storage
(using subvolume?), you can probably create it manually, and
rsync there.

Or you can create another storage pool, but backed by dir (e.g.
'lxc storage create pool2 dir') instead of btrfs/zfs.

Or yet another way:
- create a new container
- take note where its storage is (e.g. by looking at mount
options, "df -h", etc)
- shutdown the container
- replace the storage with the one you need to restore

-- 
Fajar



Thank you, I am thinking about that.
But what is sure is that my "old" container is labelled as btrfs,
and after an rsync onto a "non-btrfs" volume, the btrfs label remains.



You can edit backup.yaml to reflect the changes. Here's an example on 
my system:

Fine, that was the missing point!
Thank you very much.

In the case of "copy" (instead of backup and restore) like in my case, 
you'd want to change "volatile.eth0.hwaddr" too. Otherwise you'd end 
up with multiple containers with the same MAC and IP address.
No need in my case : it is only a workaround as for some mistery, my 
"lxc *move*" fails.



Re: [lxc-users] How to copy "manually" a container?

2018-08-23 Thread Pierre Couderc

On 08/23/2018 12:14 PM, Fajar A. Nugraha wrote:
On Thu, Aug 23, 2018 at 2:38 PM, Pierre Couderc wrote:


On 08/23/2018 09:24 AM, Fajar A. Nugraha wrote:

On Thu, Aug 23, 2018 at 2:07 PM, Pierre Couderc wrote:

On 08/23/2018 07:37 AM, Tamas Papp wrote:


On 08/23/2018 05:36 AM, Pierre Couderc wrote:

If for any reason, "lxc copy" does not work, is it
enough to copy (rsync) /var/lib/lxd/containers/
to another lxd on another computer in
/var/lib/lxd/containers/ ?


Copy the folder (watch out rsync flags) to
/var/lib/lxd/storage-pools/default/containers/, symlink
to /var/lib/lxd/containers and run 'lxd import'.

Thank you very much. It nearly worked.
Anyway, it fails (in this case) because:
Error: The storage pool's "default" driver "dir" conflicts
with the driver "btrfs" recorded in the container's backup file


If you know how lxd use btrfs to create the container storage
(using subvolume?), you can probably create it manually, and
rsync there.

Or you can create another storage pool, but backed by dir (e.g.
'lxc storage create pool2 dir') instead of btrfs/zfs.

Or yet another way:
- create a new container
- take note where its storage is (e.g. by looking at mount
options, "df -h", etc)
- shutdown the container
- replace the storage with the one you need to restore

-- 
Fajar



Thank you, I am thinking about that.
But what is sure is that my "old" container is labelled as btrfs,
and after an rsync onto a "non-btrfs" volume, the btrfs label remains.



You can edit backup.yaml to reflect the changes. Here's an example on 
my system:

Fine, that was the missing point!
Thank you very much.

In the case of "copy" (instead of backup and restore) like in my case, 
you'd want to change "volatile.eth0.hwaddr" too. Otherwise you'd end 
up with multiple containers with the same MAC and IP address.
No need in my case : it is only a workaround as for some mistery, my 
"lxc copy" fails.



Re: [lxc-users] How to copy "manually" a container?

2018-08-23 Thread Pierre Couderc

On 08/23/2018 09:24 AM, Fajar A. Nugraha wrote:
On Thu, Aug 23, 2018 at 2:07 PM, Pierre Couderc wrote:


On 08/23/2018 07:37 AM, Tamas Papp wrote:


On 08/23/2018 05:36 AM, Pierre Couderc wrote:

If for any reason, "lxc copy" does not work, is it enough
to copy (rsync) /var/lib/lxd/containers/ to another
lxd on another computer in /var/lib/lxd/containers/ ?


Copy the folder (watch out rsync flags) to
/var/lib/lxd/storage-pools/default/containers/, symlink to
/var/lib/lxd/containers and run 'lxd import'.

Thank you very much. It nearly worked.
Anyway, it fails (in this case) because:
Error: The storage pool's "default" driver "dir" conflicts with
the driver "btrfs" recorded in the container's backup file


If you know how lxd use btrfs to create the container storage (using 
subvolume?), you can probably create it manually, and rsync there.


Or you can create another storage pool, but backed by dir (e.g. 'lxc 
storage create pool2 dir') instead of btrfs/zfs.


Or yet another way:
- create a new container
- take note where its storage is (e.g. by looking at mount options, 
"df -h", etc)

- shutdown the container
- replace the storage with the one you need to restore

--
Fajar


Thank you, I am thinking about that.
What is certain is that my "old" container is labelled as btrfs, and 
after an rsync onto a non-btrfs volume the btrfs label remains


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to copy "manually" a container ?

2018-08-23 Thread Pierre Couderc

On 08/23/2018 07:37 AM, Tamas Papp wrote:


On 08/23/2018 05:36 AM, Pierre Couderc wrote:
If for any reason, "lxc copy" does not work, is it enough to copy 
(rsync) /var/lib/lxd/containers/ to another lxd on another 
computer in /var/lib/lxd/containers/ ?


Copy the folder (watch out rsync flags) to 
/var/lib/lxd/storage-pools/default/containers/, symlink to 
/var/lib/lxd/containers and run 'lxd import'.
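Spelled out, that procedure might look like this (a sketch only: "web" 
and "target" are placeholder names, the pool name/driver must match 
what the container's backup.yaml records, and the rsync flags may need 
tuning for xattrs/ACLs):

    rsync -aAX --numeric-ids /var/lib/lxd/containers/web/ \
        root@target:/var/lib/lxd/storage-pools/default/containers/web/
    ssh root@target ln -s \
        /var/lib/lxd/storage-pools/default/containers/web \
        /var/lib/lxd/containers/web
    ssh root@target lxd import web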



Thank you very much. It nearly worked.
Anyway, it fails (in this case) because :
Error: The storage pool's "default" driver "dir" conflicts with the 
driver "btrfs" recorded in the container's backup file

...
Thank you anyway
PC
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] How to copy "manually" a container ?

2018-08-22 Thread Pierre Couderc
If for any reason, "lxc copy" does not work, is it enough to copy  
(rsync) /var/lib/lxd/containers/ to another lxd on another computer 
in /var/lib/lxd/containers/ ?


PC

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Error transferring container data:

2018-08-21 Thread Pierre Couderc



On 08/18/2018 05:48 PM, Pierre Couderc wrote:

On 08/18/2018 04:16 PM, Stéphane Graber wrote:

On Sat, Aug 18, 2018 at 12:02:02PM +0200, Pierre Couderc wrote:

Error: Failed container creation:
  - https://192.168.163.1:8443: Error transferring container data: 
exit status 12
  - https://[2a01:a34:eaaf:c5f0:ca60:ff:fa5a:fd23]:8443: Error 
transferring container data: websocket: bad handshake

nous@couderc:~$


I have tried :
nous@couderc:~$rsync -avz 
root@192.168.163.1:/var/lib/lxd/containers/debian/    .

and also :
root@server:~# rsync -avz /var/lib/lxd/containers/debian/ 
nous@192.168.163.253:ttt


Both work without problem

(in this "LXD only server", only root is used : no non LXD application, 
no other user).

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Error transferring container data:

2018-08-18 Thread Pierre Couderc

On 08/18/2018 04:16 PM, Stéphane Graber wrote:

On Sat, Aug 18, 2018 at 12:02:02PM +0200, Pierre Couderc wrote:

Error: Failed container creation:
  - https://192.168.163.1:8443: Error transferring container data: exit status 
12
  - https://[2a01:a34:eaaf:c5f0:ca60:ff:fa5a:fd23]:8443: Error transferring 
container data: websocket: bad handshake
nous@couderc:~$

Thanks for any help
PC

exit status 12 usually indicates rsync having been a bit unhappy, it's
"Error in rsync protocol data stream".

Try running:
  - lxc monitor
  - lxc monitor 192.168.163.1:


Thank you : I "caught" this :
metadata:
  context: {}
  level: eror
  message: |
    Rsync send failed: /var/lib/lxd/containers/debian/: exit status 12: 
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
    rsync error: error in rsync protocol data stream (code 12) at 
io.c(235) [sender=3.1.2]

timestamp: 2018-08-18T17:38:30.805500383+02:00
type: logging


I have tried :
nous@couderc:~$rsync -avz 
root@192.168.163.1:/var/lib/lxd/containers/debian/    .


and rsync works without problem.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Error transferring container data:

2018-08-18 Thread Pierre Couderc

I have this error. All is here :

nous@couderc:~$ lxc version
Client version: 3.3
Server version: 3.3
nous@couderc:~$ lxc version 192.168.163.1:
Client version: 3.3
Server version: 2.21
nous@couderc:~$ lxc list 192.168.163.1:
+--------+---------+-----------------------+--------------------------------------------+------------+-----------+
|  NAME  |  STATE  |         IPV4          |                    IPV6                    |    TYPE    | SNAPSHOTS |
+--------+---------+-----------------------+--------------------------------------------+------------+-----------+
|        | RUNNING | 192.168.163.30 (eth0) | 2a01:e34:eeaf:c5f0:216:3eff:fe92:44 (eth0) | PERSISTENT | 0         |
+--------+---------+-----------------------+--------------------------------------------+------------+-----------+
| debian | STOPPED |                       |                                            | PERSISTENT | 0         |
+--------+---------+-----------------------+--------------------------------------------+------------+-----------+
|        | STOPPED |                       |                                            | PERSISTENT | 0         |
+--------+---------+-----------------------+--------------------------------------------+------------+-----------+
nous@couderc:~$ lxc list
+--+---+--+--+--+---+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--+---+--+--+--+---+
nous@couderc:~$ lxc copy  192.168.163.1:debian debian
Error: Failed container creation:
 - https://192.168.163.1:8443: Error transferring container data: exit status 12
 - https://[2a01:a34:eaaf:c5f0:ca60:ff:fa5a:fd23]:8443: Error transferring 
container data: websocket: bad handshake
nous@couderc:~$

Thanks for any help
PC

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-13 Thread Pierre Couderc



On 08/13/2018 04:53 AM, Stéphane Graber wrote:
The way you invoked sudo will strip your LD_LIBRARY_PATH causing that 
error.

You can instead run "sudo -E -s" and then run "lxd --group sudo" and it
should then start fine.
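For reference, the sequence might look like this (a sketch; it assumes 
"make deps" built sqlite and dqlite under $GOPATH/deps, the usual 
layout, so adjust the paths if yours differ):

    export LD_LIBRARY_PATH="$GOPATH/deps/sqlite/.libs:$GOPATH/deps/dqlite/.libs"
    sudo -E -s          # -E keeps the environment, -s opens a root shell
    lxd --group sudo    # libdqlite.so.0 is now found via LD_LIBRARY_PATH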


Thank you : it starts !
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-12 Thread Pierre Couderc


On 08/12/2018 07:33 PM, Pierre Couderc wrote:



On 08/12/2018 05:07 PM, Stéphane Graber wrote:

.


Thank you Stéphane, it builds on Debian 9.5. Now I shall install it !

But it fails to start with :
nous@couderc:~/go/src/github.com/lxc/lxd$ sudo -E $GOPATH/bin/lxd 
--group sudo
/home/nous/go/bin/lxd: error while loading shared libraries: 
libdqlite.so.0: cannot open shared object file: No such file or directory



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-12 Thread Pierre Couderc



On 08/12/2018 05:07 PM, Stéphane Graber wrote:

On Sun, Aug 12, 2018 at 09:09:41AM +0200, Pierre Couderc wrote:



Install libcap-dev, that should provide that header. I'll look at
updating the dependencies (external to Go anyway).


Thank you Stéphane, it builds on Debian 9.5. Now I shall install it !
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-12 Thread Pierre Couderc


On 08/11/2018 05:07 PM, Fajar A. Nugraha wrote:

I believe the script basically needs this file:

/usr/lib/pkgconfig/sqlite.pc

On ubuntu 18.04, it's provided by libsqlite0-dev.

If you install sqlite manually from source (e.g. you have it on
/usr/local/lib/pkgconfig/sqlite.pc), you could probably do
something like "export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig"
(or wherever the file is located) before building lxc.



Sorry, it should be "/usr/lib/x86_64-linux-gnu/pkgconfig/sqlite3.pc" 
and "libsqlite3-dev" on Ubuntu 18.04



Thank you very much, Fajar, you are right.
Except that lxd seems to use a more recent version, or maybe a custom 
version, of sqlite3.

But the patch of Stéphane did the job...
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-12 Thread Pierre Couderc



On 08/11/2018 11:45 PM, Stéphane Graber wrote:


https://github.com/lxc/lxd/pull/4908 will take care of the
PKG_CONFIG_PATH problem. With that I can build fine without
libsqlite3-dev on my system.

Thank you for the Saturday-evening response...

I also had to install libuv1-dev. I suggest updating the requirements in 
https://github.com/lxc/lxd/blob/master/README.md


I have progressed but I am blocked (on debian 9.5) now by :

shared/idmap/shift_linux.go:23:28: fatal error: sys/capability.h: No 
such file or directory

 #include <sys/capability.h>

I have tried (without really knowing what it does) :

sudo ln -s /usr/include/linux/capability.h /usr/include/sys/capability.h

but it is not better :  unknown type name 'uint32_t'...


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-11 Thread Pierre Couderc

On 08/11/2018 05:50 PM, Andrey Repin wrote:


Enable source repos.
Please explain further. To my knowledge, lxd is not available under 
stretch... which source repos are you speaking of ?

Run
apt-get source 
apt-get build-dep 



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc build failure

2018-08-11 Thread Pierre Couderc



On 08/11/2018 03:36 PM, Andrey Repin wrote:

Greetings, Pierre Couderc!


Thank you very much, Andrey !

What OS(distribution) you are using?

Debian stretch
And I have used :
https://github.com/AlbanVidal/make-deb-lxd/blob/master/00_install_required_packages.sh


Did you install sqlite3 developer package?


In fact, I do not even understand why sqlite3 is required. I do not 
intend to install test tools

sqlite3 is not indicated here : https://github.com/lxc/lxd/

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxd build failure

2018-08-11 Thread Pierre Couderc

Sorry : lxd build failure


On 08/11/2018 08:37 AM, Pierre Couderc wrote:
Trying to build lxd from sources, I get a message about sqlite3 
missing, and an invitation to run "make deps".


But it fails too with :


No package 'sqlite3' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables sqlite_CFLAGS
and sqlite_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details


And the man pkg-config is not clear to me...

Thanks  for help.


PC

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc build failure

2018-08-11 Thread Pierre Couderc
Trying to build lxd from sources, I get a message about sqlite3 missing, 
and an invitation to run "make deps".


But it fails too with :


No package 'sqlite3' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables sqlite_CFLAGS
and sqlite_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details


And the man pkg-config is not clear to me...

Thanks  for help.


PC

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] LXD move container to another pool ?

2018-08-10 Thread Pierre Couderc



On 08/10/2018 05:27 AM, Fajar A. Nugraha wrote:
On Thu, Aug 9, 2018 at 7:57 PM, Pierre Couderc <pie...@couderc.eu> wrote:



On 08/09/2018 11:30 AM, Fajar A. Nugraha wrote:


Basically you'd just need to copy /var/lib/lxd and whatever
storage backend you use (I use zfs), and then copy them back
later. Since I also put /var/lib/lxd on zfs (this is a custom
setup), I simply need to export-import my pool.



/var/lib/lxd alone, nothing about /var/lib/lxc ?



Are you using lxc1 (e.g. lxc-create commands) or lxd?

When lxd is installed as package (e.g. installed as apt on ubuntu), 
you only need /var/lib/lxd and its storage pool (which will be mounted 
on /var/lib/lxd/storage-pools/...).


Here's what I'm using:
- I start AWS spot instance
- I have a custom ubuntu template, with lxd installed but not started. 
It thus has an empty /var/lib/lxd, with no storage pools or networks.
- I have a separate EBS disk, used by a zfs pool 'data'. I then have 
'data/lib/lxd' which I mount as '/var/lib/lxd', and 'data/lxd' which 
is registered as lxd storage pool 'default'.

- I create containers (using that default pool)
- if that spot instance is terminated (thus the "root"/OS disk is 
lost), I can simply create a new spot instance again, and attach the 
'data' pool there. I will then have access to all my containers.
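On the replacement instance, that recovery step boils down to 
something like this (a sketch; it assumes the pool is named 'data' and 
the mountpoints were set as described above):

    zpool import data      # reattach the pool from the surviving EBS disk
    zfs mount -a           # mounts data/lib/lxd back on /var/lib/lxd
    systemctl start lxd    # lxd finds its database and default pool again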

ok, fine


Is that similar to what you need?

Yes, this is very similar. Thank you.


Note that lxc1 and lxd from snap use different directories than lxd 
from package.



Sorry for the noise : I use lxd (from sources on debian), and I had not 
seen that /var/lib/lxc exists but is empty...


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the state of the art for lxd and wifi ?

2018-07-23 Thread Pierre Couderc


On 07/23/2018 12:37 PM, Fajar A. Nugraha wrote:
On Mon, Jul 23, 2018 at 5:33 PM, Pierre Couderc <pie...@couderc.eu> wrote:



On 07/23/2018 12:12 PM, Fajar A. Nugraha wrote:

Relevant to all VM-like in general (including lxd, kvm and
virtualbox):
- with the default bridged setup (on lxd this is lxdbr0),
VMs/containers can access internet
(...)
- bridges (including macvlan) do not work on wifi



Sorry, it is not clear to me how the default bridges "can access
internet" if, simultaneously, "bridges (including macvlan) do
not work on wifi" ?



My bad for not being clear :)

I meant, the default setup uses bridge + NAT (i.e. lxdbr0). The NAT is 
automatically setup by LXD. That works. If your PC can access the 
internet, then anything on your container (e.g. wget, firefox, etc) 
can access the internet as well.



Bridge setups WITHOUT nat (those that bridge the container's interface 
directly to your host interface, e.g. eth0 or wlan), on the other 
hand, will only work for wired, and will not work for wireless.
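One common workaround on wifi-only hosts, then, is to keep the NAT'ed 
lxdbr0 and forward inbound traffic with a proxy device (a sketch, 
assuming a reasonably recent LXD; "web" and port 80 are placeholders):

    lxc config device add web http proxy \
        listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80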




Mmm, do you mean that there is no known solution to use LXD with wifi ?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] What is the state of the art for lxd and wifi ?

2018-07-23 Thread Pierre Couderc


On 07/23/2018 12:12 PM, Fajar A. Nugraha wrote:

Relevant to all VM-like in general (including lxd, kvm and virtualbox):
- with the default bridged setup (on lxd this is lxdbr0), 
VMs/containers can access internet

(...)
- bridges (including macvlan) do not work on wifi


Sorry, it is not clear to me how the default bridges "can access 
internet" if, simultaneously, "bridges (including macvlan) do not work 
on wifi" ?


My PC has no ethernet, only wifi.

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] What is the state of the art for lxd and wifi ?

2018-07-23 Thread Pierre Couderc

Where can I find a howto for lxd on an ultramobile with wifi only ?

I find some posts dating from 2014, and more recent posts saying it is not 
possible with wifi.


I want to install many containers accessing the internet, or being accessed 
from the internet.


Thanks

PC

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is it possible to use realtime processing in a lxd container ?

2018-03-22 Thread Pierre Couderc


On 03/21/2018 11:56 PM, Stéphane Graber wrote:

A privileged container might be able to set some of those scheduling
flags, an unprivileged container will not be able to for sure.


Thank you. My mistake is that I - wrongly - thought that LXD worked only 
with unprivileged containers...
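For reference, making an existing container privileged is a one-line 
config change (a sketch; "fs" is a placeholder container name):

    lxc config set fs security.privileged true
    lxc restart fs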

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is it possible to use realtime processing in a lxd container ?

2018-03-21 Thread Pierre Couderc

No answer; does that mean it is not possible ?

Please explain...


On 03/14/2018 04:46 PM, Pierre Couderc wrote:

When I try to start freeswitch in freeswitch.service with :

IOSchedulingClass=realtime

it fails, but seems to start when I comment it out.

So my question is : is it possible ?

And if yes, how to parametrize the container ?

Is there some howto ?

Thank you in advance

Pierre Couderc


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Is it possible to use realtime processing in a lxd container ?

2018-03-14 Thread Pierre Couderc

When I try to start freeswitch in freeswitch.service with :

IOSchedulingClass=realtime

it fails, but seems to start when I comment it out.

So my question is : is it possible ?

And if yes, how to parametrize the container ?

Is there some howto ?

Thank you in advance

Pierre Couderc


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Freeswitch refuses to start in Debian Jessie

2018-03-10 Thread Pierre Couderc

On 01/08/2018 06:31 AM, Rajil Saraswat wrote:

On 01/07/2018 10:41 PM, Stéphane Graber wrote:

On Sun, Jan 07, 2018 at 08:46:40PM -0500, Stéphane Graber wrote:

Given that it's reporting a scheduler problem, it's likely that one (or more)
of the 4 keys above is the problem, as some of those actions won't be
allowed in a container.

You could edit the unit and comment those, then run "systemctl daemon-reload"
and try starting the service again.

If that does the trick, then you should be able to make this cleaner by
using a systemd unit override file instead.



Commenting out the following worked,

#IOSchedulingClass=realtime
#IOSchedulingPriority=2
#CPUSchedulingPolicy=rr



Sure, it works.
Thank you, Rajil, Stéphane, you helped me a lot, as I have the same problem.
But at the cost of IOSchedulingClass not being realtime.
This may be a problem for freeswitch.

Is there a way to parametrize the container so that 
IOSchedulingClass=realtime is OK ?
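For the override-file approach Stéphane suggested, a minimal drop-in 
might look like this (a sketch; it assumes the unit is 
freeswitch.service):

    # contents of /etc/systemd/system/freeswitch.service.d/override.conf
    [Service]
    IOSchedulingClass=
    IOSchedulingPriority=
    CPUSchedulingPolicy=

    # then, inside the container:
    systemctl daemon-reload
    systemctl restart freeswitch

Empty assignments reset those settings to their defaults, which is the 
tidier equivalent of commenting them out in the shipped unit.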


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] How to build LXD 2.0 ?

2018-01-11 Thread Pierre Couderc

On 12/27/2017 08:26 PM, Pierre Couderc wrote:


I have built LXD from sources with

go get github.com/lxc/lxd
cd  $GOPATH/src/github.com/lxc/lxd
make

I suppose that gives me the master branch, but for production I think 
it is better to use 2.0.


1- Am I right ?

2- How to do that ?



I answer my own question 2:

go get github.com/lxc/lxd
cd  $GOPATH/src/github.com/lxc/lxd
git checkout lxd-2.21
make



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] How to build LXD 2.0 ?

2017-12-27 Thread Pierre Couderc

I have built LXD from sources with

go get github.com/lxc/lxd
cd  $GOPATH/src/github.com/lxc/lxd
make

I suppose that gives me the master branch, but for production I think it is better 
to use 2.0.


1- Am I right ?

2- How to do that ?

Thanks, developers, for this great LXD.

PC

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is there a reference manual about LXD ?

2017-10-25 Thread Pierre Couderc

On 10/25/2017 11:45 AM, Simos Xenitellis wrote:

On Sat, Oct 21, 2017 at 3:34 AM, Pierre Couderc <pie...@couderc.eu> wrote:

I have installed LXD on stretch following Stéphane
https://stgraber.org/2017/01/18/lxd-on-debian.


But this LXD was a trial. I installed it, and it works immediately !

But using all defaults it is not what I need, and I want to reinstall it :

And how do I..?

I blogged about this at https://blog.simos.info/how-to-initialize-lxd-again/
In general, there are a few differences between LXD 2.0.x and LXD 2.18+.


Thank you very much, it is very useful. And I had found it.
But I did not use it because, as it was a new installation, I found it 
simpler to reinstall the full debian and init LXD correctly...

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Is there a reference manual about LXD ?

2017-10-21 Thread Pierre Couderc



On 10/21/2017 06:12 AM, Fajar A. Nugraha wrote:



Note that it's MUCH easier to use lxd on ubuntu 16.04, with 
xenial-backports to get the 'best' combination of 'new features' and 
'tested'. It has lxd 2.18, with support for storage pools. If you're 
using this version, the most relevant documentation would be from git 
master branch: https://github.com/lxc/lxd/tree/master/doc



Thank you very much
If you're using it for production and want long term support, use the 
default xenial repository instead (not backports), which has lxd 
2.0.x. It's supported for longer time, but doesn't have new features 
(like storage pools). The relevant docs for this version is either 
https://github.com/lxc/lxd/tree/stable-2.0/doc or 
https://help.ubuntu.com/lts/serverguide/lxd.html


2- How do I erase my first trial : I tried to re-init but it tells me
that :

The requested storage pool "default" already exists. Please choose
another name.

How do I erase the storage pool "default" ?


Might be hard if you're using file-backed zfs-pool. On ubuntu it's 
probably something like this:

- systemctl disable lxd
- reboot
- rename /var/lib/lxd to something else, then create an empty /var/lib/lxd
- systemctl enable lxd
- systemctl start lxd
- lxd init

I'm not sure how the path and startup script would translate to debian 
+ lxd from snapd (which is in the link you mentioned)



I have not succeeded. But as it is a new server, I am reinstalling everything !



3- My true problem is that I do not want the NAT for my new lxc
containers, but want them to use the normal addresses on my local
network. How do I do that ?


The usual way:
- create your own bridge, e.g. br0 in 
https://help.ubuntu.com/community/NetworkConnectionBridge (that 
example bridges eth0 and eth1 on the same bridge. use the relevant 
public interface for your setup)
- configure your container (or profile) to use it (replacing the 
default lxdbr0).

- no need to delete existing lxdbr0, just leave it as is.


The 'new' way: looking at 
https://github.com/lxc/lxd/blob/master/doc/networks.md , it should be 
possible to create the bridge using 'lxc network create ...'
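With the 'new' way, a NAT-free bridge might be created like this (a 
sketch; "eth1" stands in for your wired interface, and the container 
name is a placeholder):

    lxc network create br0 bridge.external_interfaces=eth1 \
        ipv4.address=none ipv6.address=none
    lxc network attach br0 mycontainer eth0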


And how do I assign them a MAC address so they  are accessible
from the internet.


This depends on your setup.

For example, if you rent a dedicated server from serverloft (or other 
providers with a similar networking setup), they do NOT allow bridging 
of VMs to the public network. You need to set up routing instead (long 
story).


But if you're on a LAN, then 'making the containers be on the same LAN 
as the host' is as simple as 'configure the container to use br0' (or 
whatever bridge you create above). If the LAN has a DHCP server, then 
the container will automatically get a 'public' IP address. If not, 
then configure it statically (just like how you configure a normal 
linux host)


--

Thank you.



___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Is there a reference manual about LXD ?

2017-10-20 Thread Pierre Couderc

Sorry, I have not found it.

I have installed LXD on stretch following Stéphane 
https://stgraber.org/2017/01/18/lxd-on-debian.


Fine ! Infinite progress since my successful install of lxc on 
jessie, which seems to me 20 years ago, following :


https://myles.sh/configuring-lxc-unprivileged-containers-in-debian-jessie/


But this LXD was a trial. I installed it, and it works immediately !

But using all defaults it is not what I need, and I want to reinstall it :

1- Is there a reference manual about LXD, so that I can ask for help 
here after RTFM and not before, as now...


2- How do I erase my first trial : I tried to re-init but it tells me that :

The requested storage pool "default" already exists. Please choose 
another name.


How do I erase the storage pool "default" ?

3- My true problem is that I do not want the NAT for my new lxc 
containers, but want them to use the normal addresses on my local network. 
How do I do that ?


And how do I assign them a MAC address so they  are accessible from the 
internet.


Anyway, as a newbie to LXD, the fact that the command is sometimes named 
lxd and other times lxc is a, mmm..., surprise !


I am sure there is a good logic behind that, but it is  a surprise

Thank you for lxc/lxd !

PC




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxd in Debian

2016-08-23 Thread Pierre Couderc

Mmm, isn't that using a sledgehammer to swat a fly..?



On 08/23/2016 10:45 PM, Tycho Andersen wrote:

On Tue, Aug 23, 2016 at 08:40:33PM +, P. Lowe wrote:

For socket activation of the LXD daemon or socket activation of a container?

For socket activation of the LXD daemon,

https://github.com/lxc/lxd/blob/master/lxd/daemon.go#L866

Tycho
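For context, the systemd side of LXD's socket activation is a small 
.socket unit along these lines (illustrative only, not the exact 
packaged unit):

    [Socket]
    ListenStream=/var/lib/lxd/unix.socket
    SocketGroup=lxd
    SocketMode=0660

    [Install]
    WantedBy=sockets.target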


-P. Lowe

Quoting Tycho Andersen :


On Tue, Aug 23, 2016 at 04:56:43PM +, P. Lowe wrote:

Why on earth does lxd depend on "golang-github-coreos-go-systemd-dev"?

I'm also wondering, why should lxd even depend on systemd?

LXD has the capability to be socket activated, this library implements
a go API for handling the case when it is socket activated.

Tycho


-P. Lowe

Quoting "Fajar A. Nugraha" :


On Tue, Aug 23, 2016 at 3:28 PM, Micky Del Favero  wrote:

Paul Dino Jones  writes:


So, i see lxc 2.0 has made it's way into Stretch and Jessie backports,
but I don't see any activity on lxd. Is this going to happen in time
for the Stretch freeze?

I've packaged LXD for Jessie (Devuan's, but the same applies to Debian);
here I explain what I did:
http://micky.it/log/compiling-lxd-on-devuan.html
https://lists.linuxcontainers.org/pipermail/lxc-users/2016-July/012045.html
If nobody packages LXD, you can do it yourself following my way.


I'm confused.

How did you managed to get it build, when the source from
http://packages.ubuntu.com/xenial-updates/lxd has

Build-Depends: debhelper (>= 9),
   dh-apparmor,
   dh-golang,
   dh-systemd,
   golang-go,
   golang-go.crypto-dev,
   golang-context-dev,
   golang-github-coreos-go-systemd-dev,
   golang-github-gorilla-mux-dev,
   golang-github-gosexy-gettext-dev,
   golang-github-mattn-go-colorable-dev,
   golang-github-mattn-go-sqlite3-dev,
   golang-github-olekukonko-tablewriter-dev,
   golang-github-pborman-uuid-dev,
   golang-gocapability-dev,
   golang-gopkg-flosch-pongo2.v3-dev,
   golang-gopkg-inconshreveable-log15.v2-dev,
   golang-gopkg-lxc-go-lxc.v2-dev,
   golang-gopkg-tomb.v2-dev,
   golang-goprotobuf-dev,
   golang-petname-dev,
   golang-yaml.v2-dev,
   golang-websocket-dev,
   help2man,
   lxc-dev (>= 1.1.0~),
   pkg-config,
   protobuf-compiler

and https://packages.debian.org/petname returns zero results?
altlinux's lxd rpm (which I use as a starting point for my c6 build) has
a similar requirement, and when I tried removing the golang-petname-dev
requirement when building for centos, the build failed, so I had to
create a new rpm package for that.

--
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users




___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users