Re: [lxc-users] Failed upgrade lxd 2.8 to 2.11 on ubuntu 16.04lts via ppa

2017-03-13 Thread MonkZ
Your procedure worked! It's alive.
https://www.youtube.com/watch?v=QuoKNZjr8_U

Regards,
MonkZ

On 14.03.2017 at 01:53, Stéphane Graber wrote:
> On Mon, Mar 13, 2017 at 06:18:28PM +0100, MonkZ wrote:
>> Hi,
>>
>> I have storage issues after an upgrade from LXD 2.8 to LXD 2.11, using
>> the PPA on Ubuntu 16.04 LTS.
>>
>> Apparently zfs.img moved to disks/lxd.zfs in /var/lib/lxd/, causing ZFS
>> to fail to start via zfs-import-cache.service.
>> I was able to work around it with a symlink.
>>
>> LXD still won't start, and aborts with: "lxd[3182]: error: No "source"
>> property found for the storage pool.". Any ideas on this matter?
>>
>> Regards,
>> MonkZ
> 
> We've had a couple of reports of similar failure, typically caused by an
> interrupted upgrade (systemd timeout or manual intervention).
> 
> To recover from it, the easiest so far seems to be:
>  - systemctl stop lxd.service lxd.socket
>  - cp /var/lib/lxd/lxd.db /var/lib/lxd/lxd.db.broken
>  - cp /var/lib/lxd/lxd.db.bak /var/lib/lxd/lxd.db
>  - lxd --debug --group lxd
> 
> You should then see LXD start and slowly upgrade the storage for all
> containers. Once it's done and you can interact with it again (can take
> several minutes), then you can ctrl-c that process and do:
> 
>  - systemctl start lxd.socket lxd.service
> 
> Which will have systemd spawn LXD the usual way again.
> 
> 
> We do have some fixes related to this issue in our git tree now to
> automatically discover interrupted upgrades and recover from them
> without having to use the procedure above.
> 
> 
> If this doesn't work for you, please file an issue at
> https://github.com/lxc/lxd/issues so we can find exactly what happened
> to your LXD and give you more specific instructions to recover.
> 
> 
> 

Re: [lxc-users] Failed upgrade lxd 2.8 to 2.11 on ubuntu 16.04lts via ppa

2017-03-13 Thread MonkZ
+------+--------+--------+---------+
| NAME | DRIVER | SOURCE | USED BY |
+------+--------+--------+---------+
| lxd  | zfs    | lxd    | 22      |
+------+--------+--------+---------+

but only after I used stgraber's procedure.

On 14.03.2017 at 01:54, Simos Xenitellis wrote:
> On Mon, Mar 13, 2017 at 7:18 PM, MonkZ  wrote:
>> Hi,
>>
>> I have storage issues after an upgrade from LXD 2.8 to LXD 2.11, using
>> the PPA on Ubuntu 16.04 LTS.
>>
>> Apparently zfs.img moved to disks/lxd.zfs in /var/lib/lxd/, causing ZFS
>> to fail to start via zfs-import-cache.service.
>> I was able to work around it with a symlink.
>>
>> LXD still won't start, and aborts with: "lxd[3182]: error: No "source"
>> property found for the storage pool.". Any ideas on this matter?
>>
> 
> (I upgraded from 2.0.9 to 2.11)
> 
> What do you get with this command?
> 
> $ lxc storage list
> +------+--------+----------------------------+---------+
> | NAME | DRIVER |           SOURCE           | USED BY |
> +------+--------+----------------------------+---------+
> | lxd  | zfs    | /var/lib/lxd/disks/lxd.img | 4       |
> +------+--------+----------------------------+---------+
> $ _
> 
> Simos

Re: [lxc-users] Experience with large number of LXC/LXD containers

2017-03-13 Thread Benoit GEORGELIN - Association Web4all
----- Original message -----
> From: "Simos Xenitellis" 
> To: "lxc-users" 
> Sent: Monday, March 13, 2017 20:22:03
> Subject: Re: [lxc-users] Experience with large number of LXC/LXD containers

> On Sun, Mar 12, 2017 at 11:28 PM, Benoit GEORGELIN - Association
> Web4all  wrote:
> > Hi lxc-users,

> > I would like to know if you have any experience with a large number of
> > LXC/LXD containers, in terms of performance, stability, and limitations.

> > I'm wondering, for example, whether having 100 containers behaves the same
> > as having 1,000 or 10,000 with the same configuration (leaving aside how
> > the containers are actually used).

> > I have been looking around for a couple of days to find any user/admin
> > feedback, but I'm not able to find large deployments.

> > Is there any resource limit or maximum number that can be deployed on the
> > same node? Besides the physical performance of the node, is there any
> > specific behavior that a large number of LXC/LXD containers can run into?
> > I'm not aware of any test or limit beyond the number of processes, but I'm
> > sure there are some technical constraints on the LXC/LXD side, maybe on
> > namespace availability or another technical layer used by LXC/LXD.

> > I would be interested to hear about your experience, or any
> > links/books/stories about such large deployments.


> It would be interesting to hear from anyone who can talk publicly about
> their large deployment.

> In any case, it should be possible to create, for example, 1,000 web servers
> and then try to access each one and check for any issues with the response
> time. Another test would be to set up 1,000 WordPress installations and
> again check the response time and resource usage.
> Scripts to create such a massive number of containers would also help
> reproduce any issues in order to solve them.

> Simos


Yes, it would be very nice to hear about this kind of infrastructure using
LXC/LXD.
I'm not yet ready to do this kind of testing, but if someone would like to
work on it with me as a project, I can provide the technical infrastructure
and scripts.
It would be great to produce a good test case and analysis to share with the
community.

Benoit.

Re: [lxc-users] Failed upgrade lxd 2.8 to 2.11 on ubuntu 16.04lts via ppa

2017-03-13 Thread Simos Xenitellis
On Mon, Mar 13, 2017 at 7:18 PM, MonkZ  wrote:
> Hi,
>
> I have storage issues after an upgrade from LXD 2.8 to LXD 2.11, using
> the PPA on Ubuntu 16.04 LTS.
>
> Apparently zfs.img moved to disks/lxd.zfs in /var/lib/lxd/, causing ZFS
> to fail to start via zfs-import-cache.service.
> I was able to work around it with a symlink.
>
> LXD still won't start, and aborts with: "lxd[3182]: error: No "source"
> property found for the storage pool.". Any ideas on this matter?
>

(I upgraded from 2.0.9 to 2.11)

What do you get with this command?

$ lxc storage list
+------+--------+----------------------------+---------+
| NAME | DRIVER |           SOURCE           | USED BY |
+------+--------+----------------------------+---------+
| lxd  | zfs    | /var/lib/lxd/disks/lxd.img | 4       |
+------+--------+----------------------------+---------+
$ _

Simos

Re: [lxc-users] Failed upgrade lxd 2.8 to 2.11 on ubuntu 16.04lts via ppa

2017-03-13 Thread Stéphane Graber
On Mon, Mar 13, 2017 at 06:18:28PM +0100, MonkZ wrote:
> Hi,
>
> I have storage issues after an upgrade from LXD 2.8 to LXD 2.11, using
> the PPA on Ubuntu 16.04 LTS.
>
> Apparently zfs.img moved to disks/lxd.zfs in /var/lib/lxd/, causing ZFS
> to fail to start via zfs-import-cache.service.
> I was able to work around it with a symlink.
>
> LXD still won't start, and aborts with: "lxd[3182]: error: No "source"
> property found for the storage pool.". Any ideas on this matter?
>
> Regards,
> MonkZ

We've had a couple of reports of similar failure, typically caused by an
interrupted upgrade (systemd timeout or manual intervention).

To recover from it, the easiest so far seems to be:
 - systemctl stop lxd.service lxd.socket
 - cp /var/lib/lxd/lxd.db /var/lib/lxd/lxd.db.broken
 - cp /var/lib/lxd/lxd.db.bak /var/lib/lxd/lxd.db
 - lxd --debug --group lxd

You should then see LXD start and slowly upgrade the storage for all
containers. Once it's done and you can interact with it again (can take
several minutes), then you can ctrl-c that process and do:

 - systemctl start lxd.socket lxd.service

Which will have systemd spawn LXD the usual way again.
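
The same steps, consolidated as a rough annotated sketch (assumes the default
/var/lib/lxd paths; run them one at a time rather than as a script, since the
foreground run has to be interrupted by hand):

systemctl stop lxd.service lxd.socket
cp /var/lib/lxd/lxd.db /var/lib/lxd/lxd.db.broken   # keep the broken DB for inspection
cp /var/lib/lxd/lxd.db.bak /var/lib/lxd/lxd.db      # restore the pre-upgrade backup
lxd --debug --group lxd       # foreground run; Ctrl-C once "lxc list" responds again
systemctl start lxd.socket lxd.service              # hand control back to systemd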


We do have some fixes related to this issue in our git tree now to
automatically discover interrupted upgrades and recover from them
without having to use the procedure above.


If this doesn't work for you, please file an issue at
https://github.com/lxc/lxd/issues so we can find exactly what happened
to your LXD and give you more specific instructions to recover.

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com



Re: [lxc-users] DHCP or static ip address?

2017-03-13 Thread Simos Xenitellis
On Sun, Mar 5, 2017 at 7:13 AM, Marat Khalili  wrote:
>> The other way is to leave in /var/lib/lxc/NAME/config only [...] and have:
>> [...] in /var/lib/lxc/NAME/rootfs/etc/network/interfaces. Then everything
>> works.
>
>> Which way is better?
>
> I also stumbled on the nameservers issue when using the config file and
> switched to /etc/network/interfaces more than a year ago. No problems so
> far. I haven't found any other way yet.
>
> I'd also recommend installing a local DNS server and automatically putting
> the names of new containers into its zone with the same script you use to
> assign IP addresses. That helps with accessing network services running in
> the containers, if you have any.
>
> I don't see any benefits in DHCP _unless_ you use LXD and plan to move your
> containers around.

Do your containers talk to each other? If they do, it is more convenient to
refer to "mycontainer1.lxd", "mycontainer2.lxd", "mysql.lxd", etc. inside the
containers instead of dealing with IP addresses.
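
For reference, a minimal sketch of the static /etc/network/interfaces approach
described above (interface name and addresses are placeholders, not taken from
this thread; adjust them to your bridge):

cat > /var/lib/lxc/NAME/rootfs/etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.0.3.10
    netmask 255.255.255.0
    gateway 10.0.3.1
    dns-nameservers 10.0.3.1
EOF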

Simos

Re: [lxc-users] lxc stop / lxc reboot hang

2017-03-13 Thread Simos Xenitellis
On Wed, Mar 1, 2017 at 1:56 PM, Tomasz Chmielewski  wrote:
> Again seeing this issue on one of the servers:
>
> - "lxc stop container" will stop the container but will never exit
> - "lxc restart container" will stop the container and will never exit
>
>
> # dpkg -l|grep lxd
> ii  lxd  2.0.9-0ubuntu1~16.04.2
> amd64Container hypervisor based on LXC - daemon
> ii  lxd-client   2.0.9-0ubuntu1~16.04.2
> amd64Container hypervisor based on LXC - client
>
>
> This gets logged to container log:
>
> lxc 20170301115514.738 WARN lxc_commands -
> commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive
> response: Connection reset by peer.
>
>
> How can it be debugged?

Try

lxc stop --verbose --debug mycontainer

which should show the exact communication between the lxd-client and LXD.

Have a look at https://github.com/lxc/lxd/issues
I think the report titled "lxc stop waits indefinitely" is similar to
your issue.

Simos

>
> Tomasz
>
>
>
> On 2017-02-03 13:05, Tomasz Chmielewski wrote:
>>
>> On 2017-02-03 12:52, Tomasz Chmielewski wrote:
>>>
>>> Suddenly, today, I'm not able to stop or reboot any of my containers:
>>>
>>> # lxc stop some-container
>>>
>>> Just sits there forever.
>>>
>>>
>>> In /var/log/lxd/lxd.log, only this single entry shows up:
>>>
>>> t=2017-02-03T03:46:20+ lvl=info msg="Shutting down container"
>>> creation date=2017-01-19T15:51:21+ ephemeral=false timeout=-1s
>>> name=some-container action=shutdown
>>>
>>>
>>> In /var/log/lxd/some-container/lxc.log, only this one shows up:
>>>
>>> lxc 20170203034624.534 WARN lxc_commands -
>>> commands.c:lxc_cmd_rsp_recv:172 - command get_cgroup failed to receive
>>> response
>>
>>
>> The container actually stops (it's in STOPPED state in "lxc list").
>>
>> The command just never returns.
>>
>>
>> Tomasz Chmielewski
>> https://lxadm.com

Re: [lxc-users] Experience with large number of LXC/LXD containers

2017-03-13 Thread Simos Xenitellis
On Sun, Mar 12, 2017 at 11:28 PM, Benoit GEORGELIN - Association
Web4all  wrote:
> Hi lxc-users,
>
> I would like to know if you have any experience with a large number of
> LXC/LXD containers, in terms of performance, stability, and limitations.
>
> I'm wondering, for example, whether having 100 containers behaves the same
> as having 1,000 or 10,000 with the same configuration (leaving aside how
> the containers are actually used).
>
> I have been looking around for a couple of days to find any user/admin
> feedback, but I'm not able to find large deployments.
>
> Is there any resource limit or maximum number that can be deployed on the
> same node? Besides the physical performance of the node, is there any
> specific behavior that a large number of LXC/LXD containers can run into?
> I'm not aware of any test or limit beyond the number of processes, but I'm
> sure there are some technical constraints on the LXC/LXD side, maybe on
> namespace availability or another technical layer used by LXC/LXD.
>
> I would be interested to hear about your experience, or any
> links/books/stories about such large deployments.
>

It would be interesting to hear from anyone who can talk publicly about
their large deployment.

In any case, it should be possible to create, for example, 1,000 web servers
and then try to access each one and check for any issues with the response
time. Another test would be to set up 1,000 WordPress installations and
again check the response time and resource usage.
Scripts to create such a massive number of containers would also help
reproduce any issues in order to solve them.
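
A rough sketch of such a bulk test, run on the LXD host (the image alias,
name prefix, count, and the assumption that a web server listens on port 80
in every container are all illustrative, not taken from this thread):

N=1000

# Launch the containers.
for i in $(seq 1 "$N"); do
    lxc launch ubuntu:16.04 "web-$i"
done

# Once they have addresses, time a plain HTTP request to each one. The IPv4
# column is printed as "ADDRESS (interface)", so keep only the address part.
for i in $(seq 1 "$N"); do
    ip=$(lxc list "^web-$i$" -c 4 --format csv | head -n1 | cut -d' ' -f1 | tr -d '"')
    curl -s -o /dev/null -w "web-$i: %{time_total}s\n" "http://$ip/"
done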

Simos

[lxc-users] Questions on how to "lxd init" again, reinstall, upgrade and downgrade

2017-03-13 Thread Simos Xenitellis
Hi All,

I am writing a post on LXD covering:
1. how to run "lxd init" again (keeping the ZFS allocated space)
2. how to run "lxd init" again (clearing the ZFS allocated space and
allocating fresh space)
3. how to upgrade to the latest (currently 2.11) version
4. how to downgrade

Here is how I plan to tackle these issues:
1. how to run "lxd init" again (keeping the ZFS allocated space)
>> requires "sudo systemctl stop lxd.service"
>> requires "sudo zpool destroy mylxdpool"
>> then "nmcli connection delete lxdbr0" (to delete the bridge)
|--> now "lxd init" and select to reuse the ZFS device

2. how to run "lxd init" again (clearing the ZFS allocated space and
allocating fresh space; see the command sketch after case 4 below)
>> requires "sudo systemctl stop lxd.service"
>> requires "sudo zpool destroy mylxdpool"
>> then "nmcli connection delete lxdbr0"
>> remove everything in /var/lib/lxd/
>> remove everything in /var/log/lxd/
|--> now "lxd init" and select to reuse the ZFS device

3. how to upgrade to the latest (currently 2.11) version
>> Enable the lxd-stable PPA (has 2.11)
|--> then "sudo apt install lxd", which will bring in "lxd-client".

4. how to downgrade
(AFAIK, downgrading is not tested, so it requires removing everything as in
case [2], downgrading "lxd" and "lxd-client", and then running "lxd init".)


If there is any big no-no in the above, I would love to hear about it!

Cheers,
Simos

[lxc-users] Failed upgrade lxd 2.8 to 2.11 on ubuntu 16.04lts via ppa

2017-03-13 Thread MonkZ
Hi,

I have storage issues after an upgrade from LXD 2.8 to LXD 2.11, using
the PPA on Ubuntu 16.04 LTS.

Apparently zfs.img moved to disks/lxd.zfs in /var/lib/lxd/, causing ZFS
to fail to start via zfs-import-cache.service.
I was able to work around it with a symlink.
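
The symlink workaround was roughly the following (exact file names are an
assumption based on the old and new paths mentioned above):

# Make the old location point at the new backing file so
# zfs-import-cache.service can find it again.
ln -s /var/lib/lxd/disks/lxd.zfs /var/lib/lxd/zfs.img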

LXD still won't start, and aborts with: "lxd[3182]: error: No "source"
property found for the storage pool.". Any ideas on this matter?

Regards,
MonkZ




Re: [lxc-users] DBUS connection from inside container using system dbus

2017-03-13 Thread Adithya K
Hi Stewart,

Thanks for the response. With your fix it went further but didn't solve the
problem. What I am seeing now is that the server closes the connection before
the message is processed, so it is dropped. See the attached log and the "Not
connected, not writing anything" line.

Did you face the same issue as well? If so, what did you do to solve it?

Thanks,
Adithya

2017-03-10 20:44 GMT+05:30 Stewart Brodie :

> Adithya K  wrote:
>
> > > > I am using the busybox template to create a container on Ubuntu. I am
> > > > creating the container as non-privileged. Attached is the config that
> > > > was created. I am mapping /var/run/dbus/socket from the host into the
> > > > container; basically I am using the host dbus.
>
> > > > What I see is that when I try to run a dbus program, the
> > > > dbus_bus_get(DBUS_BUS_SYSTEM, ...) call fails. Basically I am not
> > > > able to get a dbus bus connection.
>
> > > > When I create the container in privileged mode, this issue doesn't
> > > > exist.
>
> > > > Is there any solution for this issue?
>
>
> This will not work (as you have discovered!)  This is why ...
>
> The dbus-daemon examines the credentials on the UNIX domain socket in order
> to find out the peer's PID and UID.  If the peer is in a different PID
> and/or UID namespace, the kernel will have remapped the credentials into
> the dbus-daemon's namespace.  The client, however, will still try to
> authenticate by passing its UID in the SASL setup for the connection by
> sending "AUTH EXTERNAL <hex>", where <hex> is the hex encoding of the
> stringified effective UID of the client in *its* namespace, e.g. the UID
> 789 would be encoded as 373839!  Thus when the dbus-daemon receives this
> UID and compares it to the credentials it found on the socket, it finds
> that the UIDs don't match and so it refuses to permit the connection.
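
As a quick illustration of that encoding (the UID value is just an example):

# "789" -> ASCII bytes 0x37 0x38 0x39, i.e. the string "373839"
printf '%s' 789 | od -An -tx1
#  37 38 39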
>
> For my project, I can afford to disable the SASL part of the connection
> protocol in the client - it would be possible to fix this in the daemon,
> but
> for various reasons I can't do that in my project.  The obvious problem of
> patching the client rather than the server is that you end up having to
> patch all the different client DBus libraries.
>
> I attach an example patch for dbus-1.10.6 that *disables* the sending of
> the client UID in the setup message.  If that's acceptable for your
> situation, you're welcome to use it.  There's a second patch for GDBus too.
>
>
> --
> Stewart Brodie
> Senior Software Engineer
> Espial UK
>
>
/ # dbus receive
Listening for signals
Filling in system bus address...
  used default system bus "unix:path=/var/run/dbus/system_bus_socket"
Filling in session bus address...
  "autolaunch:"
Filling in activation bus address...
  "none set"
opening shared connection to: unix:path=/var/run/dbus/system_bus_socket
checking for existing connection
creating shared_connections hash table
  successfully created shared_connections
client: going from state NeedSendAuth to state WaitingForData
Initialized transport on address unix:path=/var/run/dbus/system_bus_socket
LOCK
UNLOCK
LOCK
UNLOCK
LOCK
UNLOCK
LOCK
Message 0x1c72c10 (method_call /org/freedesktop/DBus org.freedesktop.DBus Hello 
'') for org.freedesktop.DBus added to outgoing queue 0x1c72918, 1 pending to 
send
Message 0x1c72c10 serial is 1
start
UNLOCK
locking io_path_mutex
start connection->io_path_acquired = 0 timeout = 0
end connection->io_path_acquired = 1 we_acquired = 1
unlocking io_path_mutex
LOCK
Transport iteration flags 0x1 timeout -1 connected = 1
client: Sent 16 bytes of: AUTH EXTERNAL 

Not authenticated, not writing anything
end
locking io_path_mutex
start connection->io_path_acquired = 1
unlocking io_path_mutex
end
dispatch status = complete is_connected = 1
UNLOCK
LOCK
UNLOCK
LOCK
doing iteration in
start
UNLOCK
locking io_path_mutex
start connection->io_path_acquired = 0 timeout = -1
end connection->io_path_acquired = 1 we_acquired = 1
unlocking io_path_mutex
LOCK
Transport iteration flags 0x7 timeout -1 connected = 1
UNLOCK
LOCK
client: got command "DATA"
end
locking io_path_mutex
start connection->io_path_acquired = 1
unlocking io_path_mutex
end
doing iteration in
start
UNLOCK
locking io_path_mutex
start connection->io_path_acquired = 0 timeout = -1
end connection->io_path_acquired = 1 we_acquired = 1
unlocking io_path_mutex
LOCK
Transport iteration flags 0x7 timeout -1 connected = 1
UNLOCK
LOCK
client: Sent 6 bytes of: DATA

Not authenticated, not writing anything
end
locking io_path_mutex
start connection->io_path_acquired = 1
unlocking io_path_mutex
end
doing iteration in
start
UNLOCK
locking io_path_mutex
start connection->io_path_acquired = 0 timeout = -1
end connection->io_path_acquired = 1 we_acquired = 1
unlocking io_path_mutex
LOCK
Transport iteration flags 0x7 timeout -1 connected = 1
UNLOCK
LOCK
client: got command "OK 

Re: [lxc-users] lxc-users Digest, Vol 170, Issue 1

2017-03-13 Thread brian mullan
Great question & one I was asked lately by Credit Suisse



On Mar 13, 2017 8:00 AM, 
wrote:

> Today's Topics:
>
>1. Experience with large number of LXC/LXD containers
>   (Benoit GEORGELIN - Association Web4all)
>
>
> -- Forwarded message --
> From: Benoit GEORGELIN - Association Web4all 
> To: lxc-users 
> Cc:
> Bcc:
> Date: Sun, 12 Mar 2017 22:28:45 +0100 (CET)
> Subject: [lxc-users] Experience with large number of LXC/LXD containers
> Hi lxc-users,
>
> I would like to know if you have any experience with a large number of
> LXC/LXD containers, in terms of performance, stability, and limitations.
>
> I'm wondering, for example, whether having 100 containers behaves the same
> as having 1,000 or 10,000 with the same configuration (leaving aside how
> the containers are actually used).
>
> I have been looking around for a couple of days to find any user/admin
> feedback, but I'm not able to find large deployments.
>
> Is there any resource limit or maximum number that can be deployed on the
> same node? Besides the physical performance of the node, is there any
> specific behavior that a large number of LXC/LXD containers can run into?
> I'm not aware of any test or limit beyond the number of processes, but I'm
> sure there are some technical constraints on the LXC/LXD side, maybe on
> namespace availability or another technical layer used by LXC/LXD.
>
> I would be interested to hear about your experience, or any
> links/books/stories about such large deployments.
>
> Thanks
>
> Regards,
>
> Benoît G
>