Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 4:26 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Tue, Mar 13, 2018 at 1:51 PM, Amar Tumballi > wrote: > >> >> >>> >> Further, as we hit end of March, we would make it mandatory for >>> features >>> >> to have required spec and

[Gluster-devel] ./tests/basic/mount-nfs-auth.t spews out warnings

2018-03-13 Thread Raghavendra Gowdappa
All, I was trying to debug a regression failure [1]. When I ran test locally on my laptop, I see some warnings as below: ++ gluster --mode=script --wignore volume get patchy nfs.mount-rmtab ++ xargs dirname ++ awk '/^nfs.mount-rmtab/{print $2}' dirname: missing operand Try 'dirname --help' for
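The "dirname: missing operand" warning appears when the awk filter matches nothing: xargs still invokes dirname once with no arguments. A minimal sketch of the failure and one possible guard, assuming GNU xargs (the `-r`/`--no-run-if-empty` flag is GNU-specific); the option name is taken from the test output above:

```shell
# Reproduce the warning: awk emits no output, but xargs by default still
# runs dirname once with no operands -> "dirname: missing operand".
printf 'some.other.option value\n' \
  | awk '/^nfs.mount-rmtab/{print $2}' \
  | xargs dirname 2>&1 || true

# One possible guard: -r (--no-run-if-empty) makes GNU xargs skip the
# invocation entirely when it receives no input.
printf 'some.other.option value\n' \
  | awk '/^nfs.mount-rmtab/{print $2}' \
  | xargs -r dirname
```

With `-r`, the pipeline stays silent when the volume option is absent instead of spewing the warning on every run.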

[Gluster-devel] Announcing Gluster release 4.0.0 (Short Term Maintenance)

2018-03-13 Thread Shyam Ranganathan
The Gluster community celebrates 13 years of development with this latest release, Gluster 4.0. This release enables improved integration with containers, an enhanced user experience, and a next-generation management framework. The 4.0 release helps cloud-native app developers choose Gluster as

[Gluster-devel] Cage internal network lock down

2018-03-13 Thread Michael Scherer
Hi, So, I have been working on tightening the internal network of the gluster community cage part of the world, i.e., all the servers in *.int.rht.gluster.org. That's mostly internal infra servers and the newer non-cloud builders, but I plan to later also move gerrit/jenkins and various servers. The

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-13 Thread Kaleb S. KEITHLEY
On 03/12/2018 02:32 PM, Shyam Ranganathan wrote: > On 03/12/2018 10:34 AM, Atin Mukherjee wrote: >> * >> >> After 4.1, we want to move to either continuous numbering (like >> Fedora), or time based (like ubuntu etc) release numbers. Which >> model we pick is

[Gluster-devel] ./tests/bugs/bug-1110262.t is a bad test susceptible to failures due to race-conditions

2018-03-13 Thread Raghavendra Gowdappa
All, This test does: 1. mount a volume 2. kill a brick in the volume 3. mkdir (/somedir) In my local tests and in [1], I see that mkdir in step 3 fails because there is no dht-layout on the root directory. The reason, I think, is that by the time the first lookup on "/" hit dht, a brick had been killed as per
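One way to remove the race is to force the first lookup on "/" while all bricks are still up, so dht builds the root layout before any brick is killed. A sketch in the regression-test framework's TEST-macro style; the variable names ($V0, $M0, etc.) and ordering are assumptions, not quoted from the actual bug-1110262.t:

```
# Sketch only -- assumed names in the style of the tests/ framework,
# not the actual contents of ./tests/bugs/bug-1110262.t.
TEST $GFS --volfile-server=$H0 --volfile-id=$V0 $M0   # 1. mount the volume
TEST stat $M0               # force a lookup on "/" so dht sets the root layout
TEST kill_brick $V0 $H0 $B0/${V0}1                    # 2. now kill a brick
TEST mkdir $M0/somedir      # 3. mkdir no longer races with the first lookup
```

The stat before kill_brick pins the ordering that the current test leaves to chance.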

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Shyam Ranganathan
On 03/13/2018 03:53 AM, Sankarshan Mukhopadhyay wrote: > On Tue, Mar 13, 2018 at 1:05 PM, Pranith Kumar Karampuri > wrote: >> >> >> On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan >> wrote: >>> >>> Hi, >>> >>> As we wind down on 4.0 activities

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Amar Tumballi
> > >> > >> Further, as we hit end of March, we would make it mandatory for features > >> to have required spec and doc labels, before the code is merged, so > >> factor in efforts for the same if not already done. > > > > > > Could you explain the point above further? Is it just the label or the

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Sankarshan Mukhopadhyay
On Tue, Mar 13, 2018 at 1:05 PM, Pranith Kumar Karampuri wrote: > > > On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan > wrote: >> >> Hi, >> >> As we wind down on 4.0 activities (waiting on docs to hit the site, and >> packages to be available in

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Pranith Kumar Karampuri
On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan wrote: > Hi, > > As we wind down on 4.0 activities (waiting on docs to hit the site, and > packages to be available in CentOS repositories before announcing the > release), it is time to start preparing for the 4.1 release.

Re: [Gluster-devel] Announcing Softserve- serve yourself a VM

2018-03-13 Thread Nigel Babu
> We’ve enabled certain limits for this application: >> 1. Maximum allowance of 5 VMs at a time across all users. Users have to wait until a slot is available for them after 5 machines are allocated. >> 2. Users get the requested machines for a maximum of 4 hours.