Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-02 Thread Tomasz Chmielewski

On 2016-11-03 00:53, Benoit GEORGELIN - Association Web4all wrote:

Hi,

I'm wondering what kind of storage you are using in your
infrastructure.
With multiple LXC/LXD nodes, how would you design the storage to be
redundant and give you the flexibility to start a container on any
available host?

Let's say I have two (or more) LXC/LXD nodes and I want to be able to
start the containers on one node or the other.
LXD allows moving containers across nodes by transferring the data
from node A to node B, but I'm looking to be able to run the containers
on node B if node A is in maintenance or has crashed.

There are a lot of distributed file systems (Gluster, Ceph, BeeGFS,
Swift, etc.), but in my case I like using ZFS with LXD and I would
like to keep that possibility.


If you want to stick with ZFS, then your only option is setting up DRBD.
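Tomasz's DRBD suggestion might be sketched roughly as follows. This is a minimal two-node sketch under stated assumptions, not a tested recipe: the resource name (lxd), hostnames (node-a/node-b), addresses, and backing partition (/dev/sdb1) are all placeholders, and the syntax targets DRBD 8.4 as shipped with Ubuntu 16.04.

```shell
# Hypothetical two-node DRBD resource backing a ZFS pool.
# All hostnames, IPs, and device/partition names below are placeholders.
cat > /etc/drbd.d/lxd.res <<'EOF'
resource lxd {
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on node-a { address 10.0.0.1:7789; }
  on node-b { address 10.0.0.2:7789; }
}
EOF

drbdadm create-md lxd        # initialise DRBD metadata (run on both nodes)
drbdadm up lxd               # bring the resource up (run on both nodes)
drbdadm primary --force lxd  # on node-a only: first-time promotion

# On the primary node, create the ZFS pool on top of the replicated device
zpool create lxd /dev/drbd0
```

With this layout only one node can be primary at a time, so the pool exists on exactly one host; failing over means demoting one node, promoting the other, and importing the pool there.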


Tomasz Chmielewski
https://lxadm.com
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Benoit GEORGELIN - Association Web4all
Thanks, it looks like nobody uses LXD in a cluster.

Best regards,

Benoît



Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Tomasz Chmielewski

ZFS is not a distributed filesystem.

So the only way to do what you want is to use DRBD, and ZFS on top of 
it.
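A manual failover under that layout might look like the sketch below; the DRBD resource name, pool name, and container name are assumptions carried over from a hypothetical setup, not commands from this thread.

```shell
# Hypothetical failover of a ZFS-on-DRBD pool from node-a to node-b.
# Resource/pool name "lxd" and container name "mycontainer" are placeholders.

# On node-a (if still reachable), release the pool and demote:
zpool export lxd
drbdadm secondary lxd

# On node-b, promote and take over the pool:
drbdadm primary lxd
zpool import lxd
lxc start mycontainer   # container data is now available locally
```

If node-a has crashed rather than being in maintenance, the export/demote step is skipped and node-b must be promoted forcibly, which is exactly where fencing/split-brain handling becomes important.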



Tomasz Chmielewski
https://lxadm.com



Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Ron Kelley
We do it slightly differently.  We run LXD containers on Ubuntu 16.04 Virtual 
Machines (inside a virtualized infrastructure).  Each physical server has 
redundant network links to highly-available storage.  Thus, we don't have to 
migrate containers between LXD servers; instead we migrate the Ubuntu VM to 
another server/storage pool.  Additionally, we use BTRFS snapshots inside the 
Ubuntu server to quickly restore backups for the LXD containers themselves.
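The BTRFS snapshot workflow Ron describes might look roughly like this. The paths are assumptions (LXD's on-disk layout varies by version), so treat it as a sketch rather than Ron's actual procedure.

```shell
# Hypothetical read-only snapshot of the LXD containers subvolume.
# Paths are placeholders; adjust to the actual LXD storage layout.
SRC=/var/lib/lxd/containers
DST=/var/lib/lxd/snapshots/containers-$(date +%Y%m%d-%H%M%S)
btrfs subvolume snapshot -r "$SRC" "$DST"

# To restore: remove the damaged subvolume, then snapshot back read-write.
# btrfs subvolume delete "$SRC"
# btrfs subvolume snapshot "$DST" "$SRC"
```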

So far, everything has been rock solid.  The LXD containers work great inside 
Ubuntu VMs (performance, scale, etc).  In the unlikely event we have to migrate 
an LXD container from one server to another, we will simply do an LXD copy 
(with a small maintenance window).
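An LXD copy with a small maintenance window could look like the following; the remote name, address, and container name are made up for illustration (syntax as in LXD 2.x on Ubuntu 16.04).

```shell
# Hypothetical container copy to another LXD server; names are placeholders.
lxc remote add node-b 10.0.0.2 --accept-certificate  # one-time trust setup
lxc stop web01                                       # start of maintenance window
lxc copy web01 node-b:web01                          # transfer container + data
lxc start node-b:web01                               # end of maintenance window
```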

As an aside: I have tried Gluster, Ceph, and even DRBD in the past without much 
success.  Eventually, we went back to NFSv3 servers for performance/stability.  
I am looking into setting up an HA NFSv4 config to address the single point of 
failure in NFSv3 setups.
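A plain (non-HA) NFSv4 export that LXD hosts could share might be sketched as below; the paths and network are placeholders. Real HA would additionally need a floating IP managed by something like Pacemaker or keepalived, which this sketch does not cover.

```shell
# Hypothetical NFSv4 export on the storage server; paths/network are placeholders.
cat >> /etc/exports <<'EOF'
/srv/nfs4  10.0.0.0/24(rw,sync,no_subtree_check,fsid=0)
EOF
exportfs -ra   # re-export everything in /etc/exports

# On each LXD host, mount the NFSv4 root (fsid=0 makes /srv/nfs4 the root "/"):
mkdir -p /mnt/shared
mount -t nfs4 10.0.0.100:/ /mnt/shared
```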

-Ron





Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Benoit GEORGELIN - Association Web4all
Hi Ron, 
That sounds like a good way to manage it, thanks. 
How do you handle your Ubuntu 16.04 upgrades / kernel updates? In case of a 
mandatory reboot, your LXD containers will have some downtime, but maybe that's 
not a problem in your situation? 

Regarding Ceph, Gluster, and DRBD, the main concern is 
performance/stability, so you are right, NFS could be the "best" way to share 
the data across hypervisors. 

Best regards, 

Benoît 



Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Ron Kelley
Hi Benoit,

Our environment is pretty locked down when it comes to upgrades at the Ubuntu 
server level.  We don't upgrade often (mainly for security-related stuff).  
That said, in the event of a mandatory reboot, we take a VM snapshot and then 
take a small downtime.  Since Ubuntu 16 (re)boots so quickly, the downtime is 
usually less than 30 seconds for our servers, so no extended outages.  If the 
upgrade fails, we easily roll back to the snapshot.

At this time, NFSv3 is the best solution for us.  Each NFS server has multiple 
NICs, redundant power supplies, etc. (real enterprise-class systems).  In the 
event of an NFS server failure, we can reload from our backup servers (again, 
multiple backup servers, etc.).

To address the single point of failure for NFS, we have been looking at 
something called ScaleIO.  It is a distributed/replicated block-level storage 
system, somewhat like Gluster.  You create virtual LUNs and mount them on your 
hypervisor host; the hypervisor is responsible for managing the distributed 
access (think VMFS).  Each hypervisor sees the same LUN, which makes VM 
migration simple.  This technology builds a Storage Area Network (SAN) over an 
IP network without expensive Fibre Channel infrastructure.  ScaleIO lets you 
survive multiple HDD failures or even complete storage-node failures without 
downtime on your storage network.  The software is free for testing, but you 
must purchase a support contract to use it in production.  Just do a quick 
search for ScaleIO and read the literature.

Let me know if you have more questions...

Thanks,

-Ron





Re: [lxc-users] Question about your storage on multiple LXC/LXD nodes

2016-11-03 Thread Benoit GEORGELIN - Association Web4all
It's kind of you to share your experience and setup. 
I will have a look at ScaleIO, as it seems interesting. 

Have a nice day. 

Best regards, 

Benoît 

