On Sun, Sep 10, 2017 at 3:02 PM, Vijay Bellur wrote:
>
>
> On Fri, Sep 8, 2017 at 5:56 AM, Pavel Szalbot
> wrote:
>>
>> This is the qemu log of instance:
>>
>> [2017-09-08 09:31:48.381077] C
>> [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired]
>> 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded
>> in the last 1 seconds, disconnecting.
On 9/10/2017 2:02 AM, Pavel Szalbot wrote:
WK: I use bonded 2x10Gbps and I do get crashes only in heavy I/O
situations (fio). Upgrading the system (apt-get dist-upgrade) was ok, so
this might even be related to the amount of IOPS.
-ps
Well, 20Gbps of writes could overwhelm a lot of DFS clusters. We
On Fri, Sep 8, 2017 at 5:56 AM, Pavel Szalbot
wrote:
> This is the qemu log of instance:
>
> [2017-09-08 09:31:48.381077] C
> [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired]
> 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded
> in the last 1 seconds, disconnecting.
>
>
1 sec
Hey guys,
I got another "reboot crash" with gfapi and this time libvirt-3.2.1
(from cbs.centos.org). Is there anyone who can audit the libgfapi
usage in libvirt? :-)
WK: I use bonded 2x10Gbps and I do get crashes only in heavy I/O
situations (fio). Upgrading system (apt-get dist-upgrade) was ok,
I'm on 3.10.5. It's rock solid (at least with the FUSE mount).
We are also typically on a somewhat slower GlusterFS LAN network (bonded
2x1G, jumbo frames) so that may be a factor.
I'll try to set up a trusted pool to test libgfapi soon.
I'm curious as to how much faster it is, but the fuse mou
Mh, not so sure really, using libgfapi and it's been working perfectly
fine. And trust me, there have been A LOT of various crashes, reboots and
kills of nodes.
Maybe it's a version thing? A new bug in the new gluster releases that
doesn't affect our 3.7.15.
On Sat, Sep 09, 2017 at 10:19:24AM -070
On 9/8/2017 11:05 PM, Pavel Szalbot wrote:
When we return the c1g node, we do see a "pause" in the VMs as the shards
heal. By pause I mean a terminal session gets spongy, but that passes
pretty quickly.
Hmm, do you see any errors in VM's dmesg? Or any other reasons for "sponginess"?
No, i
Well, that makes me feel better.
I've seen all these stories here and on oVirt recently about VMs going
read-only, even on fairly simple layouts.
Each time, I've responded that we just don't see those issues.
I guess the fact that we were lazy about switching to gfapi turns out to
be a poten
Yes, this is my observation so far.
On Sep 9, 2017 13:32, "Gionatan Danti" wrote:
> On 09-09-2017 09:09 Pavel Szalbot wrote:
>
>> Sorry, I did not start the glusterfsd on the node I was shutting
>> yesterday and now killed another one during FUSE test, so it had to
>> crash immediately (onl
On 09-09-2017 09:09 Pavel Szalbot wrote:
Sorry, I did not start the glusterfsd on the node I was shutting
yesterday and now killed another one during FUSE test, so it had to
crash immediately (only one of three nodes were actually up). This
definitely happened for the first time (only one no
Sorry, I did not start the glusterfsd on the node I was shutting down
yesterday and now killed another one during the FUSE test, so it had to
crash immediately (only one of the three nodes was actually up). This
definitely happened for the first time (only one node had been killed
yesterday).
Using FUSE seems
Hi,
On Sat, Sep 9, 2017 at 2:35 AM, WK wrote:
> Pavel.
>
> Is there a difference between native client (fuse) and libgfapi in regards
> to the crashing/read-only behaviour?
I switched to FUSE now and the VM crashed (read-only remount)
immediately after one node started rebooting.
I tried to mou
I've always wondered what the scenarios for these situations are (aside
from the doc description of nodes coming up and down).
Aren't Gluster writes atomic for all nodes? I seem to recall Jeff Darcy
stating that years ago.
So a clean shutdown for maintenance shouldn't be a problem at all. If
Pavel.
Is there a difference between native client (fuse) and libgfapi in
regards to the crashing/read-only behaviour?
We use Rep2 + Arb and can shut down a node cleanly, without issue on our
VMs. We do it all the time for upgrades and maintenance.
However we are still on native client as we
Well, I really do not like the non-deterministic nature of it.
However, a server crash has never occurred in my production environment
- only upgrades and reboots ;-)
-ps
On Fri, Sep 8, 2017 at 2:13 PM, Gandalf Corvotempesta
wrote:
> 2017-09-08 14:11 GMT+02:00 Pavel Szalbot :
>> Gandalf, SI
Btw after a few more seconds in the SIGTERM scenario, the VM kind of revived
and seems to be fine... And after a few more restarts of the fio job, I got
an I/O error.
-ps
On Fri, Sep 8, 2017 at 2:11 PM, Pavel Szalbot wrote:
> Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after few
> minutes. SIGTERM on
2017-09-08 14:11 GMT+02:00 Pavel Szalbot :
> Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after few
> minutes. SIGTERM on the other hand causes crash, but this time it is
> not read-only remount, but around 10 IOPS tops and 2 IOPS on average.
> -ps
So, seems to be reliable to server c
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM on the other hand causes a crash, but this time it is
not a read-only remount, but around 10 IOPS tops and 2 IOPS on average.
-ps
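For reference, the two cases above differ only in the signal sent to the
brick processes on one storage node; a rough sketch of the test (nothing
else touched on the cluster):

    # abrupt brick death: the processes die without closing their connections,
    # so clients only notice once ping-timeout expires
    killall -9 glusterfsd

    # default killall sends SIGTERM, i.e. the "graceful" exit - this is the
    # case that caused the crash / IOPS collapse described above
    killall glusterfsd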
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina wrote:
> I currently only have a Windo
I currently only have a Windows 2012 R2 server VM in testing on top of
the gluster storage, so I will have to take some time to provision a
couple Linux VMs with both ext4 and XFS to see what happens on those.
The Windows server VM is OK with killall glusterfsd, but when the 42
second timeout goes
I added a firewall rule to block all traffic from the Gluster VLAN on one of
the nodes.
Approximately 3 minutes in and no crash so far. Errors about the missing
node in the qemu instance log are present, but this is normal.
-ps
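(The exact rule was not posted; something along these lines has the same
effect - the subnet is an assumption based on the 10.0.1.x addresses in the
qemu log:)

    # drop all Gluster VLAN traffic on this node
    iptables -I INPUT  -s 10.0.1.0/24 -j DROP
    iptables -I OUTPUT -d 10.0.1.0/24 -j DROP

    # and to undo it afterwards
    iptables -D INPUT  -s 10.0.1.0/24 -j DROP
    iptables -D OUTPUT -d 10.0.1.0/24 -j DROP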
On Fri, Sep 8, 2017 at 1:53 PM, Gandalf Corvotempesta
wrote:
> 2017-09-08 13:44 G
2017-09-08 13:44 GMT+02:00 Pavel Szalbot :
> I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL
> will be as well. This assumption might be wrong. So I will test it. It would
> be interesting to see client to work in case of crash (SIGKILL) and not in
> case of graceful exit
On Sep 8, 2017 13:36, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
2017-09-08 13:21 GMT+02:00 Pavel Szalbot :
> Gandalf, isn't an actual server hard-crash a bit much? I mean if a reboot
> reliably kills the VM, there is no doubt a network crash or poweroff
> will as well.
IIUP, the
2017-09-08 13:21 GMT+02:00 Pavel Szalbot :
> Gandalf, isn't an actual server hard-crash a bit much? I mean if a reboot
> reliably kills the VM, there is no doubt a network crash or poweroff
> will as well.
IIUP, the only way to keep I/O running is to gracefully exit glusterfsd.
killall should send sig
So even the killall scenario eventually kills the VM (I/O errors).
Gandalf, isn't an actual server hard-crash a bit much? I mean if a reboot
reliably kills the VM, there is no doubt a network crash or poweroff
will as well.
I am tempted to test this setup on DigitalOcean to eliminate the
possibility of my hardware
2017-09-08 13:07 GMT+02:00 Pavel Szalbot :
> OK, so killall seems to be ok after several attempts i.e. iops do not stop
> on VM. Reboot caused I/O errors after maybe 20 seconds since issuing the
> command. I will check the servers console during reboot to see if the VM
> errors appear just after t
OK, so killall seems to be ok after several attempts, i.e. IOPS do not stop
on the VM. Reboot caused I/O errors maybe 20 seconds after issuing the
command. I will check the server's console during reboot to see if the VM
errors appear just after the power cycle and will try to crash the VM after
ki
I would prefer the behavior to be different from what it is, i.e. I/O stopping.
The argument I heard for the long 42-second timeout was that the MTBF on a
server was high, and that the client reconnection operation was *costly*.
Those were arguments to *not* change the ping timeout value down from 42
seconds
Btw now I am experiencing "Transport endpoint disconnects" because of the
1s ping-timeout even though the nodes are up. This sucks. The network is
not overloaded at all, the switches are used only by the gluster network,
and the network consists only of three gluster nodes, one VM hypervisor and
the Cinder controller (n
On Fri, Sep 8, 2017 at 12:48 PM, Gandalf Corvotempesta
wrote:
> I think this should be considered a bug.
> If you have a server crash, the glusterfsd process obviously doesn't exit
> properly and thus this could lead to I/O stopping?
I agree with you completely on this.
On Fri, Sep 8, 2017 at 12:43 PM, Diego Remolina wrote:
> This is exactly the problem,
>
> Systemctl stop glusterd does *not* kill the brick processes.
Yes, I know.
> On CentOS with gluster 3.10.x there is also a service, meant to only stop
> glusterfsd (brick processes). I think the reboot proces
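(For anyone trying to reproduce this, the distinction being discussed is
roughly the following - the unit and script names are from memory and may
differ between gluster packages, so treat them as assumptions:)

    # stops only the management daemon; brick processes (glusterfsd) keep running
    systemctl stop glusterd

    # the separate unit Diego refers to, meant to take the bricks down cleanly
    systemctl stop glusterfsd

    # newer packages also ship a helper script that kills everything gluster-related
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh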
On Fri, Sep 8, 2017 at 12:38 PM, Diego Remolina wrote:
> If your VMs use ext4 also check this:
>
> https://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/
I know about this post, but as I pointed out - ping-timeout does not
seem to prevent
Hi Diego,
indeed glusterfsd processes are running and that is the reason I do a
server reboot instead of systemctl stop glusterd. Is killall different
from a reboot in the way glusterfsd processes are terminated on CentOS
(init 1?)?
However I will try this and let you know.
-ps
On Fri, Sep 8, 2017 at
This is the qemu log of instance:
[2017-09-08 09:31:48.381077] C
[rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired]
0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded
in the last 1 seconds, disconnecting.
[2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
(--> /li
On Fri, Sep 8, 2017 at 11:42 AM, wrote:
> Oh, you really don't want to go below 30s, I was told.
> I'm using 30 seconds for the timeout, and indeed when a node goes down
> the VM freez for 30 seconds, but I've never seen them go read only for
> that.
>
> I _only_ use virtio though, maybe it's tha
Oh, you really don't want to go below 30s, I was told.
I'm using 30 seconds for the timeout, and indeed when a node goes down
the VMs freeze for 30 seconds, but I've never seen them go read-only for
that.
I _only_ use virtio though, maybe it's that. What are you using?
On Fri, Sep 08, 2017 at 11:
Back to replica 3 w/o arbiter. Two fio jobs running (direct=1 and
direct=0), rebooting one node... and VM dmesg looks like:
[ 483.862664] blk_update_request: I/O error, dev vda, sector 23125016
[ 483.898034] blk_update_request: I/O error, dev vda, sector 2161832
[ 483.901103] blk_update_request
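(The fio job files were not quoted; roughly equivalent invocations - file
names, sizes and runtimes are made up - would be:)

    # inside the VM: one job with O_DIRECT, one through the page cache
    fio --name=direct   --filename=/root/fio-direct   --size=2G --rw=randwrite \
        --bs=4k --ioengine=libaio --iodepth=16 --direct=1 --time_based --runtime=600 &
    fio --name=buffered --filename=/root/fio-buffered --size=2G --rw=randwrite \
        --bs=4k --ioengine=libaio --iodepth=16 --direct=0 --time_based --runtime=600 &

    # then reboot one gluster node and watch the guest kernel log for I/O errors
    dmesg -w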
FYI I set up replica 3 (no arbiter this time), did the same thing -
rebooted one node during lots of file I/O on the VM and I/O stopped.
As I mentioned either here or in another thread, this behavior is
caused by the high default network.ping-timeout. My main problem used
to be that setting it to a low val
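(For anyone following along, network.ping-timeout is a per-volume option;
checking and changing it looks like this - the volume name is taken from the
qemu log earlier in the thread, pick your own value:)

    # show the current value (the default is 42 seconds)
    gluster volume get gv_openstack_1 network.ping-timeout

    # lower it; as described elsewhere in the thread, going as low as 1s
    # causes spurious "transport endpoint" disconnects even with healthy nodes
    gluster volume set gv_openstack_1 network.ping-timeout 10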
Seems to be so, but if we look back at the described setup and procedure -
what is the reason for iops to stop/fail? Rebooting a node is somewhat
similar to updating gluster, replacing cabling etc. IMO this should not
always end up with arbiter blaming the other node and even though I did not
inves
True but to work your way into that problem with replica 3 is a lot harder
to achieve than with just replica 2 + arbiter.
On 7 September 2017 at 14:06, Pavel Szalbot wrote:
> Hi Neil, docs mention two live nodes of replica 3 blaming each other and
> refusing to do IO.
>
> https://gluster.readthe
Hi Neil, docs mention two live nodes of replica 3 blaming each other and
refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
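(That state is at least easy to detect from the CLI; the volume name is an
example:)

    # list files currently in split-brain
    gluster volume heal gv_openstack_1 info split-brain

    # overall heal status, including entries a brick is being blamed for
    gluster volume heal gv_openstack_1 info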
On Sep 7, 2017 17:52, "Alastair Neil" wrote:
> *shrug* I don't use
*shrug* I don't use arbiter for VM workloads, just straight replica 3.
There are some gotchas with using an arbiter for VM workloads. If
quorum-type is auto and a brick that is not the arbiter drops out, then if
the up brick is dirty as far as the arbiter is concerned, i.e. the only good
copy is on t
Mh, I never had to do that and I never had that problem. Is that an
arbiter-specific thing? With replica 3 it just works.
On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote:
> you need to set
>
> cluster.server-quorum-ratio 51%
>
> On 6 September 2017 at 10:12, Pavel Szal
you need to set
cluster.server-quorum-ratio 51%
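(For completeness: the ratio is a cluster-wide setting and only matters once
server-side quorum is enabled on the volume; the volume name is an example:)

    gluster volume set gv_openstack_1 cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%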
On 6 September 2017 at 10:12, Pavel Szalbot wrote:
> Hi all,
>
> I have promised to do some testing and I finally find some time and
> infrastructure.
>
> So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created
> replicated volu
Hi all,
I have promised to do some testing and I finally found some time and
infrastructure.
So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a
replicated volume with arbiter (2+1) and a VM on KVM (via OpenStack)
with its disk accessed through gfapi. The volume group is set to virt
(gluster vo
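(The message is cut off above; the setup described maps onto roughly the
following - brick hosts/paths are assumptions, the volume name is the one
from the qemu logs:)

    # replica 3 with one arbiter brick per set (the "2+1" layout)
    gluster volume create gv_openstack_1 replica 3 arbiter 1 \
        gfs1:/data/brick1 gfs2:/data/brick1 gfs3:/data/arbiter1
    # apply the virt option group and start the volume
    gluster volume set gv_openstack_1 group virt
    gluster volume start gv_openstack_1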
On 04-09-2017 19:27 Ivan Rossi wrote:
The latter one is the one I have been referring to. And it is pretty
dangerous Imho
Sorry, I cannot find the bug report/entry by myself.
Can you link to some more information, or explain which bug you are
referring to and how to trigger it?
Thanks.
The latter one is the one I have been referring to. And it is pretty
dangerous Imho
On 31 Aug 2017 01:19, wrote:
> Solved as of 3.7.12. The only bug left is when adding new bricks to
> create a new replica set, not sure where we are now on that bug but
> that's not a common operation (well,
On Sun, Sep 03, 2017 at 10:21:33PM +0200, Gionatan Danti wrote:
> On 30-08-2017 17:07 Ivan Rossi wrote:
> > There has been a bug associated with sharding that led to VM corruption
> > that has been around for a long time (difficult to reproduce, I
> > understood). I have not seen reports on that f
On 31-08-2017 01:17 lemonni...@ulrar.net wrote:
Solved as of 3.7.12. The only bug left is when adding new bricks to
create a new replica set, not sure where we are now on that bug but
that's not a common operation (well, at least for me).
Hi, same question here: is there any specific information
On 30-08-2017 17:07 Ivan Rossi wrote:
There has been a bug associated with sharding that led to VM corruption
that has been around for a long time (difficult to reproduce, I
understood). I have not seen reports on that for some time after the
last fix, so hopefully VM hosting is now stable.
Mm
Il 30-08-2017 03:57 Everton Brogliatto ha scritto:
Ciao Gionatan,
I run Gluster 3.10.x (Replica 3 arbiter or 2 + 1 arbiter) to provide
storage for oVirt 4.x and I have had no major issues so far.
I have done online upgrades a couple of times, power losses,
maintenance, etc with no issues. Overal
Solved as of 3.7.12. The only bug left is when adding new bricks to
create a new replica set, not sure where we are now on that bug but
that's not a common operation (well, at least for me).
On Wed, Aug 30, 2017 at 05:07:44PM +0200, Ivan Rossi wrote:
> There has ben a bug associated to sharding th
There has been a bug associated with sharding that led to VM corruption
that has been around for a long time (difficult to reproduce, I
understood). I have not seen reports on that for some time after the
last fix, so hopefully VM hosting is now stable.
2017-08-30 3:57 GMT+02:00 Everton Brogliatto :
Ciao Gionatan,
I run Gluster 3.10.x (Replica 3 arbiter or 2 + 1 arbiter) to provide
storage for oVirt 4.x and I have had no major issues so far.
I have done online upgrades a couple of times, power losses, maintenance,
etc with no issues. Overall, it is very resilient.
Important thing to keep in
On 26-08-2017 07:38 Gionatan Danti wrote:
I'll surely give a look at the documentation. I have the "bad" habit
of not putting into production anything I know how to repair/cope
with.
Thanks.
Mmmm, this should read as:
"I have the "bad" habit of not putting into production anything I do N
On 26-08-2017 01:13 WK wrote:
Big +1 on what Kevin just said. Just avoiding the problem is the
best strategy.
Ok, never run Gluster with anything less than a replica2 + arbiter ;)
However, for the record, and if you really, really want to get deep
into the weeds on the subject, the
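(The message is truncated here; for the record, if you do land in a real
split-brain, current Gluster lets you pick a winner from the CLI instead of
deleting files by hand - the volume name and file path below are only
examples:)

    # keep the copy with the newest mtime...
    gluster volume heal myvol split-brain latest-mtime /images/vm1.qcow2
    # ...or declare one brick authoritative for that file
    gluster volume heal myvol split-brain source-brick gfs1:/data/brick1 /images/vm1.qcow2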
On 8/25/2017 2:21 PM, lemonni...@ulrar.net wrote:
This concerns me, and it is the reason I would like to avoid sharding.
How can I recover from such a situation? How can I "decide" which
(reconstructed) file is the one to keep rather than to delete?
No need, on a replica 3 that just doesn't happ
>
> This concerns me, and it is the reason I would like to avoid sharding.
> How can I recover from such a situation? How can I "decide" which
> (reconstructed) file is the one to keep rather than to delete?
>
No need, on a replica 3 that just doesn't happen. That's the main
advantage of it, th
On 25-08-2017 21:48 WK wrote:
On 8/25/2017 12:56 AM, Gionatan Danti wrote:
We ran Rep2 for years on 3.4. It does work if you are really, really
careful. But in a crash on one side, you might have lost some bits
that were on the fly. The VM would then try to heal.
Without sharding, big VM
On 25-08-2017 21:43 lemonni...@ulrar.net wrote:
I think you are talking about DRBD 8, which is indeed very easy. DRBD 9
on the other hand, which is the one that compares to gluster (more or
less), is a whole other story. Never managed to make it work correctly
either
Oh yes, absolutely DRB
On 8/25/2017 12:43 PM, lemonni...@ulrar.net wrote:
I think you are talking about DRBD 8, which is indeed very easy. DRBD 9
on the other hand, which is the one that compares to gluster (more or
less), is a whole other story. Never managed to make it work correctly
either
Yes, and I noticed
On 8/25/2017 12:56 AM, Gionatan Danti wrote:
WK wrote:
2 node plus Arbiter. You NEED the arbiter or a third node. Do NOT try 2
node with a VM
This is true even if I manage locking at application level (via
virlock or sanlock)?
We ran Rep2 for years on 3.4. It does work if you are real
>
> This surprises me: I found DRBD quite simple to use, albeit I mostly use
> an active/passive setup in production (with manual failover)
>
I think you are talking about DRBD 8, which is indeed very easy. DRBD 9
on the other hand, which is the one that compares to gluster (more or
less), is a who
On 25-08-2017 14:22 Lindsay Mathieson wrote:
On 25/08/2017 6:50 PM, lemonni...@ulrar.net wrote:
I run Replica 3 VM hosting (gfapi) via a 3 node proxmox cluster. Have
done a lot of rolling node updates, power failures etc, never had a
problem. Performance is better than any other DFS I've tr
On 23-08-2017 18:51 Gionatan Danti wrote:
On 23-08-2017 18:14 Pavel Szalbot wrote:
Hi, after many VM crashes during upgrades of Gluster, losing network
connectivity on one node etc. I would advise running replica 2 with
arbiter.
Hi Pavel, this is bad news :(
So, in your case at least
On 25/08/2017 6:50 PM, lemonni...@ulrar.net wrote:
Free from a lot of problems, but apparently not as good as a replica 3
volume. I can't comment on arbiter, I only have replica 3 clusters. I
can tell you that my colleagues setting up 2 nodes clusters have_a lot_
of problems.
I run Replica 3 VM
On 25-08-2017 10:50 lemonni...@ulrar.net wrote:
Yes. Gluster has its own quorum, you can disable it but that's just a
recipe for a disaster.
Free from a lot of problems, but apparently not as good as a replica 3
volume. I can't comment on arbiter, I only have replica 3 clusters. I
can tell
> This is true even if I manage locking at application level (via virlock
> or sanlock)?
Yes. Gluster has its own quorum, you can disable it but that's just a
recipe for a disaster.
> Also, on a two-node setup it is *guaranteed* for updates to one node to
> put offline the whole volume?
I thi
On 25-08-2017 08:32 Gionatan Danti wrote:
Hi all,
any other advice from those who use (or do not use) Gluster as a replicated
VM backend?
Thanks.
Sorry, I was not seeing messages because I was not subscribed to the
list; I read it from the web.
So it seems that Pavel and WK have vastly diffe
On Thu, Aug 24, 2017 at 10:20 PM, WK wrote:
>
>
> On 8/23/2017 10:44 PM, Pavel Szalbot wrote:
>>
>> Hi,
>>
>> On Thu, Aug 24, 2017 at 2:13 AM, WK wrote:
>>>
>>> The default timeout for most OS versions is 30 seconds and the Gluster
>>> timeout is 42, so yes you can trigger an RO event.
>>
>> I ge
On 8/23/2017 10:44 PM, Pavel Szalbot wrote:
Hi,
On Thu, Aug 24, 2017 at 2:13 AM, WK wrote:
The default timeout for most OS versions is 30 seconds and the Gluster
timeout is 42, so yes you can trigger an RO event.
I get read-only mount within approximately 2 seconds after failed IO.
Hmm, w
Hi,
On Thu, Aug 24, 2017 at 2:13 AM, WK wrote:
> The default timeout for most OS versions is 30 seconds and the Gluster
> timeout is 42, so yes you can trigger an RO event.
I get a read-only mount within approximately 2 seconds after a failed IO.
> Though it is easy enough to raise as Pavel mention
That really isn't an arbiter issue or, for that matter, a Gluster issue. We
have seen that with vanilla NAS servers that had some issue or another.
Arbiter simply makes it less likely to be an issue than replica 2 but in
turn arbiter is less 'safe' than replica 3.
However, in regards to Gluster
I remember seeing errors like "Transport endpoint not connected" in
the client logs after a ping timeout even with an arbiter. The arbiter
does not prevent this.
And if you end up in a situation where the arbiter blames the only running
brick for a given file, you are doomed.
-ps
On Wed, Aug 23, 2017 at 9:26 PM,
Really ? I can't see why. But I've never used arbiter so you probably
know more about this than I do.
In any case, with replica 3, never had a problem.
On Wed, Aug 23, 2017 at 09:13:28PM +0200, Pavel Szalbot wrote:
> Hi, I believe it is not that simple. Even replica 2 + arbiter volume
> with defa
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
with the default network.ping-timeout will cause the underlying VM to
remount its filesystem as read-only (a device error will occur) unless you
tune the mount options in the VM's fstab.
-ps
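(The fstab tweak referred to is the ext4 error behaviour inside the guest:
by default ext4 remounts read-only on an I/O error. A minimal sketch,
assuming the guest root is an ext4 filesystem on /dev/vda1:)

    # /etc/fstab inside the VM - log errors and keep going instead of
    # remounting read-only (whether that trade-off is acceptable is debatable)
    /dev/vda1  /  ext4  defaults,errors=continue  0  1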
On Wed, Aug 23, 2017 at 6:59 PM, wrote:
> What he is
On 8/21/2017 1:09 PM, Gionatan Danti wrote:
Hi all,
I would like to ask if, and with how much success, you are using
GlusterFS for virtual machine storage.
My plan: I want to setup a 2-node cluster, where VM runs on the nodes
themselves and can be live-migrated on demand.
I have some que
What he is saying is that, on a two node volume, upgrading a node will
cause the volume to go down. That's nothing weird, you really should use
3 nodes.
On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote:
> On 23-08-2017 18:14 Pavel Szalbot wrote:
> > Hi, after many VM crashes dur
On 23-08-2017 18:14 Pavel Szalbot wrote:
Hi, after many VM crashes during upgrades of Gluster, losing network
connectivity on one node etc. I would advise running replica 2 with
arbiter.
Hi Pavel, this is bad news :(
So, in your case at least, Gluster was not stable? Something as simple
a
Hi, after many VM crashes during upgrades of Gluster, losing network
connectivity on one node etc. I would advise running replica 2 with
arbiter.
I once even managed to break this setup (with arbiter) due to network
partitioning - one data node never healed and I had to restore from
backups (it wa
On Mon, Aug 21, 2017 at 10:09:20PM +0200, Gionatan Danti wrote:
> Hi all,
> I would like to ask if, and with how much success, you are using
> GlusterFS for virtual machine storage.
Hi, we have similar clusters.
>
> My plan: I want to setup a 2-node cluster, where VM runs on the nodes
> themse
Hi all,
I would like to ask if, and with how much success, you are using
GlusterFS for virtual machine storage.
My plan: I want to set up a 2-node cluster, where VMs run on the nodes
themselves and can be live-migrated on demand.
I have some questions:
- do you use GlusterFS for similar setup