Adding Poornima to take a look at it and comment.
On Tue, Jan 23, 2018 at 10:39 PM, Alan Orth wrote:
> Hello,
>
> I saw that parallel-readdir was an experimental feature in GlusterFS
> version 3.10.0, became stable in version 3.11.0, and is now recommended for
> small file
The Gluster community is pleased to announce the release of Gluster
3.13.2 (packages available at [1]).
Release notes for the release can be found at [2].
* FIXED: Expanding a gluster volume that is sharded may cause file
corruption
Thanks,
Gluster community
[1] Packages:
Hi,
Yes, of course... I should have included it from the start.
Yes, I know it is an old version, but I will rebuild a new cluster later on;
that is another story.
Client side:
Arch Linux
glusterfs 1:3.10.1-1
Server side:
Replicated cluster on two physical machines.
Both running:
CentOS 7
Hello,
I saw that parallel-readdir was an experimental feature in GlusterFS
version 3.10.0, became stable in version 3.11.0, and is now recommended for
small file workloads in the Red Hat Gluster Storage Server
documentation[2]. I've successfully enabled this on one of my volumes but I
notice the
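(For anyone following along in the archive: the option is toggled per volume with `gluster volume set`. A minimal sketch, assuming a volume named `myvol` and shell access to one of the servers; note that parallel-readdir builds on readdir-ahead, so that option has to be enabled as well. These are admin commands against a live trusted pool, shown here only as a configuration fragment.)

```shell
# "myvol" is a placeholder volume name; substitute your own.
VOL=myvol

# parallel-readdir sits on top of readdir-ahead, so enable that first.
gluster volume set "$VOL" performance.readdir-ahead on
gluster volume set "$VOL" performance.parallel-readdir on

# Confirm the option took effect.
gluster volume get "$VOL" performance.parallel-readdir
```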
Marcus,
Please paste the name-version-release of the primary glusterfs package on
your system.
If possible, also describe the typical workload the user application
generates at the mount point.
On Tue, Jan 23, 2018 at 7:43 PM, Marcus Pedersén wrote:
> Hi all,
> I
Hi all,
I have a problem pinpointing an error: users of
my system experience processes that crash.
The thing that has changed since the crashes started
is that I added a Gluster cluster.
Of course, the users have started to attack my Gluster cluster.
I started looking at the logs, starting from the
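(Side note for the archive: when digging through GlusterFS logs for this kind of problem, filtering by the severity column is usually the quickest first pass. A sketch on a made-up sample; the file path, volume name, and MSGID values below are invented for illustration, and real client logs normally live under /var/log/glusterfs/.)

```shell
# Hypothetical sample in the GlusterFS log line format; the severity is the
# single letter after the timestamp: I = info, W = warning, E = error.
cat > /tmp/sample-gluster.log <<'EOF'
[2018-01-23 10:00:01.000000] I [MSGID: 109066] [dht-rename.c:1608:dht_rename] 0-myvol-dht: renaming file (informational)
[2018-01-23 10:00:02.000000] E [MSGID: 114031] [client-rpc-fops.c:2768:client3_3_fxattrop_cbk] 0-myvol-client-0: remote operation failed
[2018-01-23 10:00:03.000000] W [MSGID: 114061] [client-common.c:2695:client_pre_fxattrop] 0-myvol-client-0: warning only
EOF

# First pass when hunting crashes: show only the error-level entries.
grep ' E ' /tmp/sample-gluster.log
```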
That is great to know, Atin. Thank you for letting me know, and I'm happy
to have helped. :) I'm looking forward to 3.12.5 now!
Cheers,
On Tue, Jan 23, 2018 at 10:36 AM Atin Mukherjee wrote:
> 3.10 doesn't have this regression, so you're safe.
>
> On Tue, Jan 23, 2018 at
Hi,
Thank you for reporting this. It appears to be a problem with 1xn volumes
(a single DHT subvol), and I could reproduce it with a single-brick pure
distribute volume. I have filed a BZ for this [1] and posted a patch.
The messages do not indicate a problem and can be ignored.
Regards,
Nithya
3.10 doesn't have this regression, so you're safe.
On Tue, Jan 23, 2018 at 1:28 PM, Jo Goossens wrote:
> Hello,
>
> Will we also suffer from this regression in any of the (previously) fixed
> 3.10 releases? We kept 3.10 and hope to stay stable :/
On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen wrote:
> Pranith Kumar Karampuri wrote on 23.01.2018 at 09:34:
>
>> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen wrote:
>>
>>> Hi again,
>>>
>>> here is more information regarding the issue described
Pranith Kumar Karampuri wrote on 23.01.2018 at 09:34:
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen wrote:
Hi again,
here is more information regarding the issue described earlier.
It looks like self-healing is stuck. According to "heal statistics",
the crawl began at Sat Jan
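(For reference, the state being described comes from the self-heal commands; a sketch assuming a replicated volume named `myvol`, run on one of the servers. These are admin commands against a live cluster, shown only as a configuration fragment.)

```shell
VOL=myvol   # placeholder volume name; substitute your own

# Per-brick crawl statistics for self-heal, including when the last crawl began.
gluster volume heal "$VOL" statistics

# Entries still pending heal on each brick.
gluster volume heal "$VOL" info
```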
Hello,
Will we also suffer from this regression in any of the (previously) fixed 3.10
releases? We kept 3.10 and hope to stay stable :/
Regards
Jo
-----Original message-----
From: Atin Mukherjee
Sent: Tue 23-01-2018 05:15
Subject: Re: [Gluster-users] BUG: After