OK, so we already have this in glusterd's statedump. If you execute kill
-SIGUSR1 $(pidof glusterd) you will get a glusterd statedump file, with the
naming convention glusterdump.<PID>.dump.<timestamp>, in /var/run/gluster.
This file contains the glusterd.max-op-version value under the
[xlator.glusterd.priv] section.
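A minimal sketch of pulling the value out, following the description above
(the exact dump file name includes the glusterd PID and a timestamp):

    kill -SIGUSR1 $(pidof glusterd)                     # trigger the statedump
    grep max-op-version /var/run/gluster/glusterdump.*  # read glusterd.max-op-version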
On 13 July 2016 at 15:06, Dmitry Melekhov wrote:
> zfs repairs deleted files? wow! :-D

File corruptions (if you have zfs raid configured), which I presumed
you were referring to. There is only so much a system can do to protect
users against themselves, if you insist on
On 13 July 2016 at 14:50, Dmitry Melekhov wrote:
> Sorry, I'm not talking about direct data manipulation in bricks as a way to
> use gluster; I'm talking about problem detection and recovery.
> As I already said, if I for some reason (a real case can only be by accident)
> will
13.07.2016 00:20, Darrell Budic writes:
> FYI, it's my experience that "yum upgrade" will stop the running
> glusterd (and possibly the running glusterfsds)

This is the main point: it looks like the glusterfsds are not stopped by the
upgrade. Not sure, though...
Thank you!
Hi All,

I have a replicated gluster volume of 400GB; it has two bricks from two
servers, each with an XFS filesystem of 400GB.
I want to delete 200GB of data in the gluster volume. I started deleting the
data, and it is taking a long time, even to delete 1GB of data.
Can you please guide me to any volume tuning options?
On Tue, Jul 12, 2016 at 9:27 PM, Dmitry Melekhov wrote:
> 12.07.2016 17:39, Pranith Kumar Karampuri writes:
>> Wow, what are the steps to recreate the problem?
>
> just set file length to zero, always reproducible.

Changing things on the brick, i.e. not from gluster
On Tue, Jul 12, 2016 at 9:27 PM, Dmitry Melekhov wrote:
> 12.07.2016 17:38, Pranith Kumar Karampuri writes:
>> Did you wait for heals to complete before upgrading second node?
>
> no...

So basically if you have operations in progress on the mount, you should
wait for
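A minimal sketch of that check before touching the next node (gv0 is a
placeholder volume name, not from the thread):

    gluster volume heal gv0 info
    # proceed only when every brick reports: Number of entries: 0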
FYI, it's my experience that "yum upgrade" will stop the running glusterd (and
possibly the running glusterfsds) during its installation of new gluster
components. I've also noticed it starts them back up again during the process.
I.e., yesterday I upgraded a system to 3.7.13:

systemctl stop
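A sketch of the kind of manual sequence being described, under my assumptions
about CentOS/systemd unit and package names (not Darrell's exact commands,
which are truncated above):

    systemctl stop glusterd    # stop the management daemon first
    pkill glusterfsd           # make sure no brick processes keep running
    yum update glusterfs\*     # upgrade the packages
    systemctl start glusterd   # glusterd respawns the brick processes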
Hello!

I have used gluster for quite a long time and have even survived a bunch of
upgrades. Each time I upgrade, since "op-version" was introduced, I scratch my
head and google a bit to find out which "op-version" I should set for my
cluster. I tried to convert the current gluster package version to an
op-version,
2016-07-12 19:18 GMT+02:00, Joe Julian :
> Good find. Please file a bug report.
> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

Done: https://bugzilla.redhat.com/show_bug.cgi?id=1355846

This could lead to massive data corruption if someone tries to disable
Good find. Please file a bug report.
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

On 07/12/2016 10:14 AM, Gandalf Corvotempesta wrote:
> i don't know if this is an expected behaviour but when switching "off"
> sharding from a volume that had shard enabled previously, sharded
> files
I don't know if this is expected behaviour, but when switching "off" sharding
on a volume that previously had shard enabled, sharded files are no longer
readable. Only the first shard is available, leading to massive data
corruption.
If this is expected behaviour, removing sharding should
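For reference, a sketch of the toggle being discussed (the volume name gv0 is
hypothetical; features.shard is the volume option involved):

    gluster volume set gv0 features.shard on    # volume shards newly created files
    gluster volume set gv0 features.shard off   # per the report above, existing sharded
                                                # files then appear truncated to their first shard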
Will it be possible to get this profile output for a volume without sharding
enabled? If it still doesn't look like the one I gave in the mail before, then
we have some debugging to do to find out why there are extra operations we are
seeing.
12.07.2016 17:39, Pranith Kumar Karampuri writes:
> Wow, what are the steps to recreate the problem?

just set file length to zero, always reproducible.
12.07.2016 17:38, Pranith Kumar Karampuri writes:
> Did you wait for heals to complete before upgrading second node?

no...
Shard was enabled during this test; previously I did other tests with shard
disabled, and the speed doesn't change at all.

On 12 Jul 2016 5:17 PM, "Pranith Kumar Karampuri" wrote:
> You got this for single dd workload? Ideally single file dd workload
> should be
Wondering what the effects of this are, as I noticed I had it off while others
posting their sharding-enabled settings seemed to have it on.

Is this likely to cause issues? I've migrated a handful of dev VMs to sharded
storage and they seem to be running just fine so far.

David Gossage
You got this for a single dd workload? Ideally a single-file dd workload
should be dominated by the 'WRITE' operation, but it seems to be dominated by
too many FINODELKs; I see quite a few mknods too. What is puzzling is the
number of ENTRYLKs, which is of the order of 10k. I see some discussion about
2016-07-12 14:06 GMT+02:00 Anuradha Talur :
> I realize I just repeated the same example in different words but I hope this
> answers your questions. So do let us know if something is not clear.

It is unclear, probably because I don't understand how gluster works and
because I
2016-07-12 15:55 GMT+02:00 Pranith Kumar Karampuri :
> Could you do the following?
>
> # gluster volume profile <VOLNAME> start
> # run dd command
> # gluster volume profile <VOLNAME> info > /path/to/file/that/you/need/to/send/us.txt

http://pastebin.com/raw/wcA0i335
Could you do the following?

# gluster volume profile <VOLNAME> start
# run dd command
# gluster volume profile <VOLNAME> info > /path/to/file/that/you/need/to/send/us.txt
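As a concrete sketch of that sequence (the volume name gv0, mount point, and
output path are my assumptions, not from the thread):

    gluster volume profile gv0 start
    dd if=/dev/zero of=/mnt/gv0/testfile bs=1M count=1024 oflag=direct
    gluster volume profile gv0 info > /tmp/gv0-profile.txt
    gluster volume profile gv0 stop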
Wow, what are the steps to recreate the problem?

On Tue, Jul 12, 2016 at 3:09 PM, Dmitry Melekhov wrote:
> 12.07.2016 13:33, Pranith Kumar Karampuri writes:
>> What was "gluster volume heal info" showing when you saw this issue?
>
> just reproduced :
>
> [root@father
Did you wait for heals to complete before upgrading the second node?

On Tue, Jul 12, 2016 at 3:08 PM, Dmitry Melekhov wrote:
> 12.07.2016 13:31, Pranith Kumar Karampuri writes:
Hi,

Thanks to everyone who joined the meeting. Please find the minutes of today's
Gluster Community Bug Triage meeting at the links below.

Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-12/gluster_bug_triage.2016-07-12-12.00.html
Minutes (text):
2016-07-12 13:36 GMT+02:00 David Gossage :
> Did you try by chance running 3 transfers at once from one server to all 3
> nodes outside of gluster? Maybe bonding isn't picking alternate routes like
> it would be expected.

As I wrote, I'm currently not using bonding but
2016-07-12 14:02 GMT+02:00 Gandalf Corvotempesta :
> As I wrote, currently I'm not using bonding but a single gigabit
> connection, and I'm still unable to go over 1/4 of the theoretical speed.
>
> 1000/8/3 = 41MB/s
>
> I'm stuck at 10MB/s

Just to clarify: any
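One hedged way to rule the raw network in or out, along the lines David
suggested earlier (iperf3 and the hostname node1 are my assumptions, not from
the thread):

    # on one gluster node
    iperf3 -s
    # on the client, against each node in turn; gigabit should show ~940 Mbit/s
    iperf3 -c node1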
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
2016-07-12 12:56 GMT+02:00 Pranith Kumar Karampuri :
> For adding new bricks into a replica set you need each brick in the replica
> set to be from a different machine. So you can't add all bricks directly from
> just one machine. So how do you get the extra bricks that can be
2016-07-12 12:43 GMT+02:00 Pranith Kumar Karampuri :
> True. But at the end of 4 replace-bricks, you have 4 bricks from earlier
> configuration which are empty now, which can be re-used. So essentially you
> have 6 empty bricks which can be put back into use, but these can be
Yes, no problem.
This is an export domain in ovirt:
gluster v info export
Volume Name: export
Type: Replicate
Volume ID: 0b50341b-9c9a-4868-9c53-7f0ecdb44162
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusternode1:/tank2/export/brick1
Brick2:
2016-07-12 12:34 GMT+02:00 Pranith Kumar Karampuri :
> If you add 6 new disks to the cluster by bringing in a new node with replica
> count 3, you are essentially adding 2 replica sets. Since you can't have
> replica sets with bricks from the same node, we need to get empty
2016-07-11 19:31 GMT+02:00 Gandalf Corvotempesta :
> Each disk on each node is able to saturate the network, so I would
> expect about 950mbit when writing in parallel to 3 nodes. I'm reaching
> 1/4 of the available speed.

I did more tests, even by transferring files
2016-07-12 11:49 GMT+02:00 Pranith Kumar Karampuri :
> Alternatively you can replace 4 selected bricks on the first 3 nodes with
> the 4 disks on the new machine. Now you have 4 bricks that can be reused.
> Form 2 extra replica sets with 3 bricks each and you are done.

This
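A minimal sketch of one such replace-brick step (the volume name gv0 and all
hostnames/brick paths are hypothetical):

    # move one brick from node1 onto the new node4
    gluster volume replace-brick gv0 node1:/bricks/b4 node4:/bricks/b1 commit force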
Frank,
Could you share your volume configuration (`gluster volume info <VOLNAME>`)?
-Krutika
Hi David,

As I'm on gluster I use

glusterfs.x86_64 3.7.12-2.el7 @centos-gluster37

I already tried to revert back to 3.7.11 but somehow this didn't help.

Do you have more info on aio/caching?

Frank
12.07.2016 13:33, Pranith Kumar Karampuri writes:
> What was "gluster volume heal info" showing when you saw this issue?

just reproduced :

[root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# gluster volume heal pool
Launching heal operation to perform index self heal on
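To spell out the reproduction above (the file name and the volume name 'pool'
come from that session; the final heal-info check is my addition):

    # in the brick directory itself, NOT on the glusterfs mount:
    > gstatus-0.64-3.el7.x86_64.rpm   # shell redirection truncates the file to zero bytes
    gluster volume heal pool          # launch an index self-heal
    gluster volume heal pool info     # see which entries each brick still needs to heal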
What was "gluster volume heal info" showing when you saw this issue?

On Mon, Jul 11, 2016 at 3:28 PM, Dmitry Melekhov wrote:
> Hello!
>
> 3.7.13, 3 bricks volume.
>
> inside one of bricks:
>
> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
> -rw-r--r-- 2 root root
On Mon, Jul 11, 2016 at 2:26 PM, Dmitry Melekhov wrote:
> 11.07.2016 12:47, Gandalf Corvotempesta writes:
>
>> 2016-07-11 9:54 GMT+02:00 Dmitry Melekhov :
>>
>>> We just got split-brain during update to 3.7.13 ;-)
>>
>> This is an interesting point.
>> Could you
Hi all,

I am stuck with a strange problem using ovirt/gluster on ZFSonLinux.
I detached a data-volume from my Cluster and reimported it to change from
POSIX-type/glusterfs to GlusterFS-Storage. When I try to attach it to the
Cluster I always get a Sanlock error. I checked the logs and what I
2016-07-12 9:46 GMT+02:00 Anuradha Talur :
> Yes, you can add a single node with 3 bricks. But, given that you are keeping
> the replica count the same, these three bricks will be replicas of each
> other. It is not so useful in case of node failures/shutdown.

So, the only way to
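For contrast, a sketch of the safer layout implied here, with the new replica
set spread across existing nodes (gv0 and all hostnames/paths are
hypothetical):

    # one new brick per node, so the added replica set survives a node failure
    gluster volume add-brick gv0 node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2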