GlusterFS 3.12.5 was released on Jan 12th, 2018. Apologies for not sending
the announcement mail on time.
Release notes for the release can be found at [4].
We still carry the following major issue, as reported in the release notes:
1.) Expanding a gluster volume that is sharded may cause file corruption.
On Thu, Feb 1, 2018 at 9:31 AM, Nithya Balachandran wrote:
> Hi,
>
> I think we have a workaround until we have a fix in the code. The
> following worked on my system.
>
> Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You
> might need to create the filter directory in this path.)
Please note, the file needs to be copied to all nodes.
Hi,
I think we have a workaround until we have a fix in the code. The
following worked on my system.
Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You
might need to create the filter directory in this path.)
Make sure the file has execute permissions. On my system:
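A minimal sketch of installing such a filter script on one node, assuming the
attached file is named rewrite-shared-brick-count.sh (the file name and the
version in the path are illustrative):

    # create the filter directory if it does not exist yet
    mkdir -p /usr/lib/glusterfs/3.12.4/filter
    # install the attached script and make it executable; glusterd runs
    # executables found in this directory on the volfiles it generates
    cp rewrite-shared-brick-count.sh /usr/lib/glusterfs/3.12.4/filter/
    chmod +x /usr/lib/glusterfs/3.12.4/filter/rewrite-shared-brick-count.sh

As noted above, the same steps have to be repeated on every node.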
Thanks, Raghavendra. I won't be able to test readdir-ahead again because it
caused a lot of issues for my users and I don't have the resources to set
up a test environment right now. I hope someone can help figure this bug
out eventually though.
Cheers,
On Tue, Jan 30, 2018 at 1:36 PM
Amar,
Thanks for your prompt reply. No, I do not plan to fix the code and recompile.
I was hoping it could be fixed by setting the shared-brick-count or some
other option. Since this is a production system, we will wait until a fix is in
a release.
Thanks,
Eva (865) 574-6894
Hi Freer,
Our analysis is that this issue is caused by
https://review.gluster.org/17618. Specifically, in
'gd_set_shared_brick_count()' from
https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c
But even if we fix it today, I don't think we have a release planned
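For context: 'gd_set_shared_brick_count()' decides how many bricks of a volume
share a backing filesystem by comparing their device IDs, and the resulting
shared-brick-count is used to divide the free space each brick reports. A quick
way to check that bricks really are on distinct filesystems (brick paths here
are illustrative):

    # different device IDs mean separate filesystems, so each brick's
    # shared-brick-count should end up as 1
    stat --format='%d  %n' /data/brick1 /data/brick2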
Nithya,
Yes, Tami Greene, who is copied on these emails. I will also monitor them and
work with her to get this resolved.
Thanks,
Eva (865) 574-6894
From: Nithya Balachandran
Date: Wednesday, January 31, 2018 at 12:10 PM
To: Eva Freer
Cc: "Greene,
Hi Eva,
I'm sorry, but I need to get in touch with another developer to check on the
changes here, and he will be available only tomorrow. Is there someone
else I could work with while you are away?
Regards,
Nithya
On 31 January 2018 at 22:00, Freer, Eva B. wrote:
> Nithya,
Nithya,
I will be out of the office for ~10 days starting tomorrow. Is there any way we
could possibly resolve it today?
Thanks,
Eva (865) 574-6894
From: Nithya Balachandran
Date: Wednesday, January 31, 2018 at 11:26 AM
To: Eva Freer
Cc: "Greene,
On 31 January 2018 at 21:50, Freer, Eva B. wrote:
> The values for shared-brick-count are still the same. I did not restart
> the volume after setting cluster.min-free-inodes to 6%. Do I need to
> restart it?
>
That is not necessary. Let me get back to you on this.
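For reference, options like this are applied with the gluster CLI and, as
Nithya notes, take effect without restarting the volume; a minimal sketch with
an assumed volume name:

    # set the option on the volume; no restart is required for it to take effect
    gluster volume set myvolume cluster.min-free-inodes 6%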
The values for shared-brick-count are still the same. I did not restart the
volume after setting cluster.min-free-inodes to 6%. Do I need to restart it?
Thanks,
Eva (865) 574-6894
From: Nithya Balachandran
Date: Wednesday, January 31, 2018 at 11:14 AM
To: Eva Freer
On 31 January 2018 at 21:34, Freer, Eva B. wrote:
> Nithya,
>
> Responding to an earlier question: Before the upgrade, we were at 3.10.3 on
> these servers, but some of the clients were 3.7.6. From below, does this
> mean that “shared-brick-count” needs to be set to 1 for all bricks?
Nithya,
Responding to an earlier question: Before the upgrade, we were at 3.10.3 on
these servers, but some of the clients were 3.7.6. From below, does this mean
that “shared-brick-count” needs to be set to 1 for all bricks?
All of the bricks are on separate xfs partitions composed of hardware
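Since every brick is on its own xfs partition, the expected value is indeed 1
for each brick. One way to inspect the values glusterd wrote into the generated
volfiles, assuming a volume named myvolume:

    # each brick volfile carries an "option shared-brick-count N" line;
    # with bricks on separate filesystems, every N should be 1
    grep -n 'shared-brick-count' /var/lib/glusterd/vols/myvolume/*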
Hi Atin,
Yes, agreed that you have explained it repeatedly. Even so, we hit this issue
only very rarely, and that is when we are doing repeated reboots of the system.
We tried to debug it further, but we have not been able to identify the rare
situation in which the empty info file causing this is generated.
I have explained multiple times that the way to hit this problem is
*extremely rare*, until and unless you prove us wrong and explain why you
think you can get into this situation often. I still see that this
information is not being made available to us to think through why this fix
Hi Team,
I am facing an issue which is exactly the same as the one described at the
link below:
https://bugzilla.redhat.com/show_bug.cgi?id=1408431
There are also some patches available to fix the issue, but it seems they
have not been approved and discussion is still ongoing:
https://review.gluster.org/#/c/16279/
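For background, the bug above is about glusterd occasionally being left with an
empty /var/lib/glusterd/vols/<vol>/info file after an abrupt reboot. A common
remedy for this class of problem is to write the new contents to a temporary
file, flush it to disk, and atomically rename it over the original, so a crash
can never expose a half-written file. A minimal sketch of that pattern (the
path and the generator step are illustrative, not the actual patch):

    INFO=/var/lib/glusterd/vols/myvol/info   # illustrative path
    TMP="$INFO.tmp"
    generate_info > "$TMP"   # hypothetical step that produces the new contents
    sync                     # flush to disk before the rename
    mv -f "$TMP" "$INFO"     # rename() is atomic within one filesystem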
Hello folks,
We're going to be resizing supercolony.gluster.org at our cloud
provider. This will involve a short outage of about 5 minutes. In case
something goes wrong in the process, we're reserving a 2-hour window
for the work.
Date: Feb 21
Server: supercolony.gluster.org