Re: [Gluster-devel] gluster volume stop and the regressions

2018-01-31 Thread Nithya Balachandran
On 1 February 2018 at 09:46, Milind Changire wrote: > If a *volume stop* fails at a user's production site with a reason like > *rebalance session is active* then the admin will wait for the session to > complete and then reissue a *volume stop*; > > So, in essence, the failed volume stop is not

Re: [Gluster-devel] Release 3.12.5: Scheduled for the 12th of January

2018-01-31 Thread Jiffin Tony Thottan
glusterfs 3.12.5 was released on Jan 12th 2018. Apologies for not sending the announcement mail on time. Release notes for the release can be found at [4]. We still carry the following major issue, reported in the release notes as follows: 1.) - Expanding a gluster volume that is shard

Re: [Gluster-devel] gluster volume stop and the regressions

2018-01-31 Thread Atin Mukherjee
I don't think that's the right way. Ideally, the test shouldn't be attempting to stop a volume while a rebalance session is in progress. If, even after we check the rebalance status and wait up to 30 secs for it to finish, volume stop still fails with rebalance session in progre
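The wait-then-stop approach Atin describes can be sketched as a small POSIX-shell helper. This is an illustrative sketch, not code from the gluster test suite: the function name `wait_for_rebalance`, the 30-second default, and the volume name `patchy` in the usage comment are all assumptions for demonstration.

```shell
#!/bin/sh
# Poll a status-check command until it succeeds (rebalance finished) or a
# timeout expires, then the caller can safely issue "gluster volume stop".
# The check command is passed in so the same loop works in tests and scripts.

# wait_for_rebalance CHECK_CMD [TIMEOUT_SECS]
# Returns 0 as soon as CHECK_CMD exits 0; returns 1 if TIMEOUT_SECS elapse.
wait_for_rebalance() {
    check_cmd=$1
    timeout=${2:-30}        # default wait of 30 secs, per the discussion
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if $check_cmd; then
            return 0        # rebalance reported complete
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1                # still running after the timeout
}

# Hypothetical usage in a regression test (volume name is illustrative):
#   wait_for_rebalance "rebalance_completed patchy" 30 \
#       && gluster volume stop patchy
```

The check command would typically parse `gluster volume rebalance <vol> status` output; keeping it as a parameter lets a test stub it out.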

[Gluster-devel] gluster volume stop and the regressions

2018-01-31 Thread Milind Changire
If a *volume stop* fails at a user's production site with a reason like *rebalance session is active* then the admin will wait for the session to complete and then reissue a *volume stop*; So, in essence, the failed volume stop is not fatal; for the regression tests, I would like to propose to cha

[Gluster-devel] Replacing CentOS 6 nodes with CentOS 7

2018-01-31 Thread Nigel Babu
Hello folks, Today I'm putting the first CentOS 7 node in our regression pool. slave28.cloud.gluster.org -> shut down and removed; builder100.cloud.gluster.org -> new CentOS 7 node (we'll be starting from 100 upwards). If this run goes well, we'll be replacing the nodes one by one with CentOS 7. If

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC0 build tomorrow (at risk)

2018-01-31 Thread Shyam Ranganathan
On 01/30/2018 03:23 PM, Shyam Ranganathan wrote: > Hi Packaging team, > > Preparation for RC0 is ongoing; I am waiting on scores for this patch > [1] (removing experimental code from the release branch) to go through, > and the following items to be resolved before tagging RC0. Headline: branch i

[Gluster-devel] Coverity covscan for 2018-01-31-542af571 (master branch)

2018-01-31 Thread staticanalysis
GlusterFS Coverity covscan results are available from http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-01-31-542af571

Re: [Gluster-devel] [Gluster-users] Status of the patch!!!

2018-01-31 Thread ABHISHEK PALIWAL
Hi Atin, Yes, I agree that you have explained it repeatedly. We too hit this issue only very rarely, and that is when we are doing repeated reboots of the system. We tried to debug it further but were not able to identify the situation/rare case in which it generates the empty info file which is causing this i

Re: [Gluster-devel] [Gluster-users] Status of the patch!!!

2018-01-31 Thread Atin Mukherjee
I have explained this repeatedly, multiple times: the way to hit this problem is *extremely rare*, until and unless you prove us wrong and explain why you think you can get into this situation often. I still see that information is not being made available to us to think through why this fix is

[Gluster-devel] Status of the patch!!!

2018-01-31 Thread ABHISHEK PALIWAL
Hi Team, I am facing an issue which is exactly the same as the one described at the link below: https://bugzilla.redhat.com/show_bug.cgi?id=1408431 There are also some patches available to fix the issue, but it seems those are not approved and discussion is still ongoing: https://review.gluster.org/#/c/16279/

[Gluster-devel] GlusterD2 v4.0rc0 tagged

2018-01-31 Thread Kaushal M
Hi all, GlusterD2 v4.0rc0 has been tagged and a release made in anticipation of GlusterFS-v4.0rc0. The release and source tarballs are available from [1]. There aren't any specific release notes for this release. Thanks. ~kaushal [1]: https://github.com/gluster/glusterd2/releases/tag/v4.0rc0

[Gluster-devel] Planned Outage: supercolony.gluster.org on 2018-02-21

2018-01-31 Thread Nigel Babu
Hello folks, We're going to be resizing supercolony.gluster.org on our cloud provider. This will definitely lead to a small outage of about 5 mins. In the event that something goes wrong in the process, we're reserving a 2-hour window for this outage. Date: Feb 21 Server: supercolony.gluster.org Tim