On 1 February 2018 at 09:46, Milind Changire wrote:
> If a *volume stop* fails at a user's production site with a reason like
> *rebalance session is active* then the admin will wait for the session to
> complete and then reissue a *volume stop*;
>
> So, in essence, the failed volume stop is not
Release notes for the release can be found at [4].
We still carry the following major issue, reported in the
release notes:
1) Expanding a gluster volume that is shard…
I don't think that's the right way. Ideally the test shouldn't be
attempting to stop a volume if a rebalance session is in progress. If we do
see such a situation even when we check the rebalance status and wait up to
30 secs for it to finish, and volume stop still fails with rebalance session
in progress…
If a *volume stop* fails at a user's production site with a reason like
*rebalance session is active*, then the admin will wait for the session to
complete and then reissue *volume stop*.
So, in essence, the failed volume stop is not fatal; for the regression
tests, I would like to propose to cha…
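The pattern being discussed (poll the rebalance status for a bounded time, and only then attempt the volume stop) could be sketched as a small shell helper. This is a hypothetical illustration, not a patch to the test framework; `$VOLNAME`, the "completed" status string, and the 30-second budget are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch: poll a status command until its output reports
# "completed", giving up after a bounded number of seconds.
wait_until_completed() {
    status_cmd=$1          # command whose output we poll (word-split on use)
    budget=${2:-30}        # seconds to wait before giving up
    while [ "$budget" -gt 0 ]; do
        if $status_cmd | grep -q completed; then
            return 0       # rebalance (or whatever we polled) is done
        fi
        sleep 1
        budget=$((budget - 1))
    done
    return 1               # timed out; caller should not proceed
}

# In a regression test this might be used as (VOLNAME is an assumption):
#   wait_until_completed "gluster volume rebalance $VOLNAME status" 30 \
#       && gluster volume stop $VOLNAME --mode=script
```

The helper takes the status command as a string so the same loop can guard any asynchronous operation the test depends on.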
Hello folks,
Today, I'm putting the first CentOS 7 node into our regression pool.
slave28.cloud.gluster.org -> Shutdown and removed
builder100.cloud.gluster.org -> New CentOS 7 node (we'll be starting from
100 upwards)
If this run goes well, we'll be replacing the nodes one by one with CentOS
7. If…
On 01/30/2018 03:23 PM, Shyam Ranganathan wrote:
> Hi Packaging team,
>
> Preparation for RC0 is ongoing, I am waiting on scores for this patch
> [1] (removing experimental code from the release branch) to go through,
> and the following items to be resolved before tagging RC0.
Headline: branch i…
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-01-31-542af571
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman
Hi Atin,
Yes, we agree that you have explained it repeatedly. We too are hitting
this issue only very rarely, and that is when we do repeated reboots of the
system. We tried to debug it further, but we were not able to identify in
which situation/rare case the empty info file is generated, which is causing
this issue…
I have repeatedly explained, multiple times, that the way to hit this
problem is *extremely rare*, until and unless you prove us wrong and explain
why you think you can get into this situation often. I still see that
information is not being made available to us to think through why this fix
is…
Hi Team,
I am facing an issue which is exactly the same as the one described at the
link below:
https://bugzilla.redhat.com/show_bug.cgi?id=1408431
There are also some patches available to fix the issue, but it seems those
are not approved and discussion is still ongoing:
https://review.gluster.org/#/c/16279/
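For context on how an empty info file can appear at all: a store file rewritten in place can be left zero-length if the node reboots between the truncate and the write. The usual guard is write-to-temp, flush, then rename over the original, since rename within one filesystem is atomic. Below is a minimal sketch of that general pattern only; the file name and contents are made up, and this is not the actual change under review:

```shell
#!/bin/sh
# Sketch of crash-safe file replacement: write the new contents to a
# temporary file, flush pending writes to disk, then atomically rename
# over the real file. A reboot at any point leaves either the old or the
# new contents on disk, never an empty file.
# INFO_FILE and NEW_CONTENTS are illustrative values only.
INFO_FILE=${INFO_FILE:-/tmp/demo-volume.info}
NEW_CONTENTS="type=2
count=3"

tmp=$(mktemp "${INFO_FILE}.XXXXXX") || exit 1   # temp file in same filesystem
printf '%s\n' "$NEW_CONTENTS" > "$tmp"
sync                        # flush pending writes to stable storage
mv -f "$tmp" "$INFO_FILE"   # rename(2) is atomic within one filesystem
```

Keeping the temporary file on the same filesystem as the target is what makes the final `mv` a single atomic rename rather than a copy.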
Hi all,
GlusterD2 v4.0rc0 has been tagged and a release made in anticipation
of GlusterFS-v4.0rc0. The release and source tarballs are available
from [1].
There aren't any specific release notes for this release.
Thanks.
~kaushal
[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0rc0
Hello folks,
We're going to be resizing supercolony.gluster.org at our cloud
provider. This will definitely cause a short outage of about 5 minutes. In
the event that something goes wrong in this process, we're taking a 2-hour
window for the outage.
Date: Feb 21
Server: supercolony.gluster.org
Tim…