Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-15 Thread Vijay Bellur
On Wed, Mar 14, 2018 at 9:48 PM, Atin Mukherjee  wrote:

>
>
> On Thu, Mar 15, 2018 at 9:45 AM, Vijay Bellur  wrote:
>
>>
>>
>> On Wed, Mar 14, 2018 at 5:40 PM, Shyam Ranganathan 
>> wrote:
>>
>>> On 03/14/2018 07:04 PM, Joe Julian wrote:
>>>> On 03/14/2018 02:25 PM, Vijay Bellur wrote:
>>>>> On Tue, Mar 13, 2018 at 4:25 AM, Kaleb S. KEITHLEY wrote:
>>>>>> On 03/12/2018 02:32 PM, Shyam Ranganathan wrote:
>>>>>>> On 03/12/2018 10:34 AM, Atin Mukherjee wrote:
>>>>>>>> * After 4.1, we want to move to either continuous numbering (like
>>>>>>>>   Fedora) or time-based (like Ubuntu etc.) release numbers. Which
>>>>>>>>   model we pick is not yet finalized. Happy to hear opinions.
>>>>>>>>
>>>>>>>> Not sure how the time-based release numbers would make more sense
>>>>>>>> than the one which Fedora follows. But before I comment further on
>>>>>>>> this, I need to first get clarity on how the op-versions will be
>>>>>>>> managed. I'm assuming that once we're at GlusterFS 4.1, subsequent
>>>>>>>> releases will be numbered GlusterFS5, GlusterFS6 ... So from that
>>>>>>>> perspective, are we going to stick to our current numbering scheme
>>>>>>>> of op-version, where for GlusterFS5 the op-version will be 5?
>>>>>>>
>>>>>>> Say, yes.
>>>>>>>
>>>>>>> The question is why tie the op-version to the release number? That
>>>>>>> mental model needs to break IMO.
>>>>>>>
>>>>>>> With current options like
>>>>>>> https://docs.gluster.org/en/latest/Upgrade-Guide/op_version/
>>>>>>> it is easier to determine the op-version of the cluster and what it
>>>>>>> should be, and hence this need not be tied to the gluster release
>>>>>>> version.
>>>>>>>
>>>>>>> Thoughts?
>>>>>>
>>>>>> I'm okay with that, but——
>>>>>>
>>>>>> Just to play the Devil's Advocate, having an op-version that bears
>>>>>> some resemblance to the _version_ number may make it easy/easier to
>>>>>> determine what the op-version ought to be.
>>>>>>
>>>>>> We aren't going to run out of numbers, so there's no reason to be
>>>>>> "efficient" here. Let's try to make it easy. (Easy to not make a
>>>>>> mistake.)
>>>>>>
>>>>>> My 2¢
>>>>>
>>>>> +1 to the overall release cadence change proposal and what Kaleb
>>>>> mentions here.
>>>>>
>>>>> Tying op-versions to release numbers seems like an easier approach
>>>>> than others, and one to which we are accustomed. What are the
>>>>> benefits of breaking this model?
>>>>
>>>> There is a bit of confusion among the user base when a release happens
>>>> but the op-version doesn't have a commensurate bump. People ask why
>>>> they can't set the op-version to match the gluster release version
>>>> they have installed. If it were completely disconnected from the
>>>> release version, that might be a great enough mental disconnect that
>>>> the expectation could go away, which would actually cause less
>>>> confusion.
>>>
>>> That is the reason I state it as well (the breaking of the mental
>>> model around this): why tie them together when they are not totally
>>> related? I also agree that the notion that they are tied together, and
>>> hence related, is present, but it may serve us better to break it.
>>>
>>>
>>
>> I see your perspective. Another related reason for not introducing an
>> op-version bump in a new release would be that there are no incompatible
>> features introduced (in the new release). Hence it makes sense to preserve
>> the older op-version.
>>
>> To make everyone's lives simpler, would it be useful to introduce a
>> command that provides the max op-version to release number mapping? The
>> output of the command could look like:
>>
>> op-version X: 3.7.0 to 3.7.11
>> op-version Y: 3.7.12 to x.y.z
>>
>
> We already have introduced an option called cluster.max-op-version, where
> one can run a command like "gluster v get all cluster.max-op-version" to
> determine the highest op-version the cluster can be bumped up to. IMO,
> this saves users from having to consult the documentation to find out
> that, for a given x.y.z release, the op-version has to be bumped up to X.
> Isn't that sufficient for this requirement?
>
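
For illustration, checking the current and maximum op-versions might look
like the session below (the numeric values here are made up, not output
from a real cluster):

    # gluster volume get all cluster.op-version
    Option                                  Value
    ------                                  -----
    cluster.op-version                      31302

    # gluster volume get all cluster.max-op-version
    Option                                  Value
    ------                                  -----
    cluster.max-op-version                  40100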


I think it is a more elegant solution than what I described. Do we have a
single interface to determine the current & max op-versions of all members
in the trusted storage pool? If not, it might be a useful enhancement to
add at some point.
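
Purely as a hypothetical sketch of what such an interface could print (the
command name and output format below are invented, not an existing CLI):

    # gluster pool op-version
    Node        Current op-version    Max op-version
    server-1    31302                 40100
    server-2    31302                 40100
    server-3    31302                 40100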

If we don't hear many complaints about op-version mismatches from users, I

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Pranith Kumar Karampuri
On Wed, Mar 14, 2018 at 8:27 PM, Amye Scavarda  wrote:

> Responding on the architects question:
>
> On Tue, Mar 13, 2018 at 9:57 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Tue, Mar 13, 2018 at 4:26 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Tue, Mar 13, 2018 at 1:51 PM, Amar Tumballi 
>>> wrote:
>>>
 >>
> >> Further, as we hit end of March, we would make it mandatory for
> >> features to have required spec and doc labels before the code is
> >> merged, so factor in efforts for the same if not already done.
> >
> >
> > Could you explain the point above further? Is it just the label, or the
> > spec/doc that we need merged before the patch is merged?
> >
>
> I'll hazard a guess that the intent of the label is to indicate
> availability of the doc. "Completeness" of code is being defined as
> including specifications and documentation.
>
>
 I believe this has originated from the maintainers meeting agreements [1].
 The proposal to make a spec and documentation mandatory was submitted 3
 months back, is documented, and was submitted for comment at
 https://docs.google.com/document/d/1AFkZmRRDXRxs21GnGauieIyiIiRZ-nTEW8CPi7Gbp3g/edit?usp=sharing


>>> Thanks! This clears almost all the doubts I had :).
>>>
>>
>> The document above refers to Architects - "Now Architects are approved
>> to revert a patch which violates by either not having github issue nor
>> bug-id, or uses a bug-id to get the feature in etc."
>>
>> Who are they? What are their responsibilities?
>>
>
> In our last reboot of the maintainers readme file, we expanded the
> architects role:
> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS
> General Project Architects
> --
> M: Jeff Darcy 
> M: Vijay Bellur 
> P: Amar Tumballi 
> P: Pranith Karampuri 
> P: Raghavendra Gowdappa 
> P: Shyamsundar Ranganathan 
> P: Niels de Vos 
> P: Xavier Hernandez 
> What should we work on to make this clearer?
>

Wow, embarrassing: I am an architect who doesn't know his responsibilities.
Could you let me know where I can find them?


> - amye
>
>
>>
>>
>>>
>>>
 The idea is that if the code is going to be released, it should have
 relevant documentation so users can make use of it; otherwise, it doesn't
 matter whether the feature is present or not. If the feature is on by
 default and no documentation is required, just mention that, so the flags
 can be given. Also, if there is no general agreement about the design, it
 doesn't make sense to merge a feature and then have someone redo things.

 For any experimental code that we want to publish for other developers to
 test, and which doesn't need documentation, we have the 'experimental'
 branch, which should be used for validation.

>>>
  [1] - http://lists.gluster.org/pipermail/gluster-devel/2017-December/054070.html

>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>
>>
>>
>> --
>> Pranith
>>
>>
>
>
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>



-- 
Pranith

[Gluster-devel] More proxy cleanup coming

2018-03-15 Thread Michael Scherer
Hi,

Now that we have a new proxy (yes, I am almost as proud of it as of the
firewall), I need to move the services from the old proxy to the new one.
This will involve some downtime, because DNS changes take time to
propagate, and we need DNS in place for Let's Encrypt before deploying.
We also still have a DNS issue on the server side that makes changes take
far longer than before.

While I could do some manual magic, I would rather avoid manual fiddling
when I can, so I would like people to tell me how critical each of these
domains is, so I can figure out the best approach, e.g., can they be down
for 10 to 20 minutes, do people want some advance notice, etc.:

- bits.gluster.org
- ci-logs.gluster.org
- softserve.gluster.org
- fstat.gluster.org
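
As a rough sketch, the kind of check one can run to confirm DNS has
propagated before requesting the Let's Encrypt certificates (the resolver
and hostname here are just examples, and the authoritative nameserver is a
placeholder):

    # answer seen through a public resolver
    dig +short bits.gluster.org @8.8.8.8
    # compare with the authoritative nameserver's answer
    dig +short bits.gluster.org @<authoritative-ns>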

I also plan to move Jenkins (i.e., build.gluster.org) and the Jenkins
staging instance to the said proxy, and later move the VM to the internal
network.

While the staging instance is not a problem, I guess we need to find a
suitable time window for the production one.


-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




[Gluster-devel] Coverity covscan for 2018-03-15-34671430 (master branch)

2018-03-15 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-03-15-34671430

Re: [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Susant Palai
Hi,
I would like to propose the Cloudsync xlator (for the archival use case)
for 4.1 (GitHub issue #387).
The initial patch (under review) is posted here:
https://review.gluster.org/#/c/18532/.
Spec file: https://review.gluster.org/#/c/18854/

Thanks,
Susant


On Thu, Mar 15, 2018 at 4:05 PM, Ravishankar N 
wrote:

>
>
> On 03/13/2018 07:07 AM, Shyam Ranganathan wrote:
>
>> Hi,
>>
>> As we wind down on 4.0 activities (waiting on docs to hit the site, and
>> packages to be available in CentOS repositories before announcing the
>> release), it is time to start preparing for the 4.1 release.
>>
>> 4.1 is where we have GD2 fully functional and shipping with migration
>> tools to aid Glusterd to GlusterD2 migrations.
>>
>> Other than the above, this is a call out for features that are in the
>> works for 4.1. Please *post* the github issues to the *devel lists* that
>> you would like as a part of 4.1, and also mention the current state of
>> development.
>>
> Hi,
>
> We are targeting the 'thin-arbiter' feature for 4.1:
> https://github.com/gluster/glusterfs/issues/352
> Status: The high-level design is in the github issue.
> The thin arbiter xlator patch https://review.gluster.org/#/c/19545/ is
> undergoing reviews.
> Implementation details on AFR and glusterd(2) related changes are being
> discussed. We will make sure all patches are posted against issue 352.
>
> Thanks,
> Ravi
>
>
>
>> Further, as we hit end of March, we would make it mandatory for features
>> to have required spec and doc labels, before the code is merged, so
>> factor in efforts for the same if not already done.
>>
>> The current 4.1 project release lane is empty! I cleaned it up because I
>> want to hear from all of you as to what content to add, rather than add
>> things marked with the 4.1 milestone by default.
>>
>> Thanks,
>> Shyam
>> P.S: Also any volunteers to shadow/participate/run 4.1 as a release owner?

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-15 Thread Shyam Ranganathan
On 03/15/2018 12:48 AM, Atin Mukherjee wrote:
> On Thu, Mar 15, 2018 at 9:45 AM, Vijay Bellur wrote:
>> On Wed, Mar 14, 2018 at 5:40 PM, Shyam Ranganathan wrote:
>>> On 03/14/2018 07:04 PM, Joe Julian wrote:
>>>> On 03/14/2018 02:25 PM, Vijay Bellur wrote:
>>>>> On Tue, Mar 13, 2018 at 4:25 AM, Kaleb S. KEITHLEY wrote:
>>>>>> On 03/12/2018 02:32 PM, Shyam Ranganathan wrote:
>>>>>>> On 03/12/2018 10:34 AM, Atin Mukherjee wrote:
>>>>>>>> * After 4.1, we want to move to either continuous numbering (like
>>>>>>>>   Fedora) or time-based (like Ubuntu etc.) release numbers. Which
>>>>>>>>   model we pick is not yet finalized. Happy to hear opinions.
>>>>>>>>
>>>>>>>> Not sure how the time-based release numbers would make more sense
>>>>>>>> than the one which Fedora follows. But before I comment further on
>>>>>>>> this, I need to first get clarity on how the op-versions will be
>>>>>>>> managed. I'm assuming that once we're at GlusterFS 4.1, subsequent
>>>>>>>> releases will be numbered GlusterFS5, GlusterFS6 ... So from that
>>>>>>>> perspective, are we going to stick to our current numbering scheme
>>>>>>>> of op-version, where for GlusterFS5 the op-version will be 5?
>>>>>>>
>>>>>>> Say, yes.
>>>>>>>
>>>>>>> The question is why tie the op-version to the release number? That
>>>>>>> mental model needs to break IMO.
>>>>>>>
>>>>>>> With current options like
>>>>>>> https://docs.gluster.org/en/latest/Upgrade-Guide/op_version/
>>>>>>> it is easier to determine the op-version of the cluster and what it
>>>>>>> should be, and hence this need not be tied to the gluster release
>>>>>>> version.
>>>>>>>
>>>>>>> Thoughts?
>>>>>>
>>>>>> I'm okay with that, but——
>>>>>>
>>>>>> Just to play the Devil's Advocate, having an op-version that bears
>>>>>> some resemblance to the _version_ number may make it easy/easier to
>>>>>> determine what the op-version ought to be.
>>>>>>
>>>>>> We aren't going to run out of numbers, so there's no reason to be
>>>>>> "efficient" here. Let's try to make it easy. (Easy to not make a
>>>>>> mistake.)
>>>>>>
>>>>>> My 2¢
>>>>>
>>>>> +1 to the overall release cadence change proposal and what Kaleb
>>>>> mentions here.
>>>>>
>>>>> Tying op-versions to release numbers seems like an easier approach
>>>>> than others, and one to which we are accustomed. What are the
>>>>> benefits of breaking this model?
>>>>
>>>> There is a bit of confusion among the user base when a release happens
>>>> but the op-version doesn't have a commensurate bump. People ask why
>>>> they can't set the op-version to match the gluster release version
>>>> they have installed. If it were completely disconnected from the
>>>> release version, that might be a great enough mental disconnect that
>>>> the expectation could go away, which would actually cause less
>>>> confusion.
>>>
>>> That is the reason I state it as well (the breaking of the mental
>>> model around this): why tie them together when they are not totally
>>> related? I also agree that the notion that they are tied together, and
>>> hence related, is present, but it may serve us better to break it.
>>
>> I see your perspective. Another related reason for not introducing an
>> op-version bump in a new release would be that there are no incompatible
>> features introduced (in the new release). Hence it makes sense to
>> preserve the older op-version.
>>
>> To make

Re: [Gluster-devel] Release 4.1: LTM release targeted for end of May

2018-03-15 Thread Ravishankar N



On 03/13/2018 07:07 AM, Shyam Ranganathan wrote:

Hi,

As we wind down on 4.0 activities (waiting on docs to hit the site, and
packages to be available in CentOS repositories before announcing the
release), it is time to start preparing for the 4.1 release.

4.1 is where we have GD2 fully functional and shipping with migration
tools to aid Glusterd to GlusterD2 migrations.

Other than the above, this is a call out for features that are in the
works for 4.1. Please *post* the github issues to the *devel lists* that
you would like as a part of 4.1, and also mention the current state of
development.

Hi,

We are targeting the 'thin-arbiter' feature for 4.1:
https://github.com/gluster/glusterfs/issues/352

Status: The high-level design is in the github issue.
The thin arbiter xlator patch https://review.gluster.org/#/c/19545/ is
undergoing reviews.
Implementation details on AFR and glusterd(2) related changes are being
discussed. We will make sure all patches are posted against issue 352.


Thanks,
Ravi



Further, as we hit end of March, we would make it mandatory for features
to have required spec and doc labels, before the code is merged, so
factor in efforts for the same if not already done.

The current 4.1 project release lane is empty! I cleaned it up because I
want to hear from all of you as to what content to add, rather than add
things marked with the 4.1 milestone by default.

Thanks,
Shyam
P.S: Also any volunteers to shadow/participate/run 4.1 as a release owner?

Re: [Gluster-devel] ./tests/basic/mount-nfs-auth.t spews out warnings

2018-03-15 Thread Raghavendra G
I assume we build with --enable-gnfs turned on. It's unlikely that we
don't, but I am pointing it out because I initially ran into some failures
due to the gNFS server not coming up owing to a lack of relevant libraries.
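
A quick way to double-check that locally (just a sketch; the /usr/local
prefix is an assumption of a default source build):

    ./autogen.sh && ./configure --enable-gnfs
    make && make install
    # the gNFS server xlator should then exist:
    ls /usr/local/lib/glusterfs/*/xlator/nfs/server.so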

On Thu, Mar 15, 2018 at 3:46 PM, Raghavendra Gowdappa 
wrote:

> Providing more links to failure:
> https://build.gluster.org/job/experimental-periodic/227/console
> https://build.gluster.org/job/regression-test-burn-in/3836/console
>
>
> On Thu, Mar 15, 2018 at 3:39 PM, Raghavendra Gowdappa  > wrote:
>
>> I can reproduce the failure even without my patch on master. Looks like
>> this test resulted in failures earlier too.
>>
>> [1] http://lists.gluster.org/pipermail/gluster-devel/2015-May/044932.html
>> [2] See mail to gluster-maintainers with subj: "Build failed in Jenkins:
>> regression-test-burn-in #3836"
>>
>> Usual failures are (not necessarily all of them, but at least a few):
>> ./tests/basic/mount-nfs-auth.t (Wstat: 0 Tests: 92 Failed: 4)
>>   Failed tests:  22-24, 28
>>
>> If the gNFS team can take a look at this, it will be helpful.
>>
>> regards,
>> Raghavendra
>>
>> On Wed, Mar 14, 2018 at 3:04 PM, Nigel Babu  wrote:
>>
>>> When the test works it takes less than 60 seconds. If it needs more than
>>> 200 seconds, that means there's an actual issue.
>>>
>>> On Wed, Mar 14, 2018 at 10:16 AM, Raghavendra Gowdappa <
>>> rgowd...@redhat.com> wrote:
>>>
 All,

 I was trying to debug a regression failure [1]. When I ran the test
 locally on my laptop, I saw some warnings like the below:

 ++ gluster --mode=script --wignore volume get patchy nfs.mount-rmtab
 ++ xargs dirname
 ++ awk '/^nfs.mount-rmtab/{print $2}'
 dirname: missing operand
 Try 'dirname --help' for more information.
 + NFSDIR=

 To debug I ran the volume get cmds:

 [root@booradley glusterfs]# gluster volume get patchy nfs.mount-rmtab
 Option  Value

 --  -

 volume get option failed. Check the cli/glusterd log file for more
 details

 [root@booradley glusterfs]# gluster volume set patchy nfs.mount-rmtab
 testdir
 volume set: success

 [root@booradley glusterfs]# gluster volume get patchy nfs.mount-rmtab
 Option  Value

 --  -

 nfs.mount-rmtab testdir


 Does this mean the option value is not set properly in the script? Need
 your help in debugging this.

 @Nigel
 I noticed that the test is timing out.

 *20:28:39* ./tests/basic/mount-nfs-auth.t timed out after 200 seconds

 Could this be an infra issue where NFS was taking too much time to mount?

 [1] https://build.gluster.org/job/centos7-regression/316/console

 regards,
 Raghavendra

>>>
>>>
>>>
>>> --
>>> nigelb
>>>
>>
>>
>
>



-- 
Raghavendra G

Re: [Gluster-devel] ./tests/basic/mount-nfs-auth.t spews out warnings

2018-03-15 Thread Raghavendra Gowdappa
Providing more links to failure:
https://build.gluster.org/job/experimental-periodic/227/console
https://build.gluster.org/job/regression-test-burn-in/3836/console


On Thu, Mar 15, 2018 at 3:39 PM, Raghavendra Gowdappa 
wrote:

> I can reproduce the failure even without my patch on master. Looks like
> this test resulted in failures earlier too.
>
> [1] http://lists.gluster.org/pipermail/gluster-devel/2015-May/044932.html
> [2] See mail to gluster-maintainers with subj: "Build failed in Jenkins:
> regression-test-burn-in #3836"
>
> Usual failures are (not necessarily all of them, but at least a few):
> ./tests/basic/mount-nfs-auth.t (Wstat: 0 Tests: 92 Failed: 4)
>   Failed tests:  22-24, 28
>
> If the gNFS team can take a look at this, it will be helpful.
>
> regards,
> Raghavendra
>
> On Wed, Mar 14, 2018 at 3:04 PM, Nigel Babu  wrote:
>
>> When the test works it takes less than 60 seconds. If it needs more than
>> 200 seconds, that means there's an actual issue.
>>
>> On Wed, Mar 14, 2018 at 10:16 AM, Raghavendra Gowdappa <
>> rgowd...@redhat.com> wrote:
>>
>>> All,
>>>
>>> I was trying to debug a regression failure [1]. When I ran the test
>>> locally on my laptop, I saw some warnings like the below:
>>>
>>> ++ gluster --mode=script --wignore volume get patchy nfs.mount-rmtab
>>> ++ xargs dirname
>>> ++ awk '/^nfs.mount-rmtab/{print $2}'
>>> dirname: missing operand
>>> Try 'dirname --help' for more information.
>>> + NFSDIR=
>>>
>>> To debug I ran the volume get cmds:
>>>
>>> [root@booradley glusterfs]# gluster volume get patchy nfs.mount-rmtab
>>> Option  Value
>>>
>>> --  -
>>>
>>> volume get option failed. Check the cli/glusterd log file for more
>>> details
>>>
>>> [root@booradley glusterfs]# gluster volume set patchy nfs.mount-rmtab
>>> testdir
>>> volume set: success
>>>
>>> [root@booradley glusterfs]# gluster volume get patchy nfs.mount-rmtab
>>> Option  Value
>>>
>>> --  -
>>>
>>> nfs.mount-rmtab testdir
>>>
>>>
>>> Does this mean the option value is not set properly in the script? Need
>>> your help in debugging this.
>>>
>>> @Nigel
>>> I noticed that the test is timing out.
>>>
>>> *20:28:39* ./tests/basic/mount-nfs-auth.t timed out after 200 seconds
>>>
>>> Could this be an infra issue where NFS was taking too much time to mount?
>>>
>>> [1] https://build.gluster.org/job/centos7-regression/316/console
>>>
>>> regards,
>>> Raghavendra
>>>
>>
>>
>>
>> --
>> nigelb
>>
>
>

Re: [Gluster-devel] ./tests/basic/mount-nfs-auth.t spews out warnings

2018-03-15 Thread Raghavendra Gowdappa
I can reproduce the failure even without my patch on master. Looks like
this test resulted in failures earlier too.

[1] http://lists.gluster.org/pipermail/gluster-devel/2015-May/044932.html
[2] See mail to gluster-maintainers with subj: "Build failed in Jenkins:
regression-test-burn-in #3836"

Usual failures are (not necessarily all of them, but at least a few):
./tests/basic/mount-nfs-auth.t (Wstat: 0 Tests: 92 Failed: 4)
  Failed tests:  22-24, 28

If the gNFS team can take a look at this, it will be helpful.

regards,
Raghavendra
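
For reference, the assignment quoted further down pipes an empty string
into dirname when "volume get" fails, which is what produces the
"dirname: missing operand" noise. A more defensive variant (just a sketch,
not the actual test code) would be:

    rmtab=$(gluster --mode=script --wignore volume get patchy nfs.mount-rmtab \
            | awk '/^nfs.mount-rmtab/{print $2}')
    if [ -n "$rmtab" ]; then
        NFSDIR=$(dirname "$rmtab")
    else
        echo "nfs.mount-rmtab not set; did volume get fail?" >&2
    fi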

On Wed, Mar 14, 2018 at 3:04 PM, Nigel Babu  wrote:

> When the test works it takes less than 60 seconds. If it needs more than
> 200 seconds, that means there's an actual issue.
>
> On Wed, Mar 14, 2018 at 10:16 AM, Raghavendra Gowdappa <
> rgowd...@redhat.com> wrote:
>
>> All,
>>
>> I was trying to debug a regression failure [1]. When I ran the test
>> locally on my laptop, I saw some warnings like the below:
>>
>> ++ gluster --mode=script --wignore volume get patchy nfs.mount-rmtab
>> ++ xargs dirname
>> ++ awk '/^nfs.mount-rmtab/{print $2}'
>> dirname: missing operand
>> Try 'dirname --help' for more information.
>> + NFSDIR=
>>
>> To debug I ran the volume get cmds:
>>
>> [root@booradley glusterfs]# gluster volume get patchy nfs.mount-rmtab
>> Option  Value
>>
>> --  -
>>
>> volume get option failed. Check the cli/glusterd log file for more details
>>
>> [root@booradley glusterfs]# gluster volume set patchy nfs.mount-rmtab
>> testdir
>> volume set: success
>>
>> [root@booradley glusterfs]# gluster volume get patchy nfs.mount-rmtab
>> Option  Value
>>
>> --  -
>>
>> nfs.mount-rmtab testdir
>>
>>
>> Does this mean the option value is not set properly in the script? Need
>> your help in debugging this.
>>
>> @Nigel
>> I noticed that the test is timing out.
>>
>> *20:28:39* ./tests/basic/mount-nfs-auth.t timed out after 200 seconds
>>
>> Could this be an infra issue where NFS was taking too much time to mount?
>>
>> [1] https://build.gluster.org/job/centos7-regression/316/console
>>
>> regards,
>> Raghavendra
>>
>
>
>
> --
> nigelb
>

[Gluster-devel] gluster-ant is now admin on synced repos

2018-03-15 Thread Nigel Babu
Hello,

If there's a repo that's synced from Gerrit to GitHub, gluster-ant is now
an admin on those repos. This is so that when issues are closed via a
commit message, they are closed by the right user (the bot), rather than
by the infra person who set that repo up.
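
For example, a commit whose message carries a footer along the lines of
the project's "Fixes: #<issue>" convention (the issue number below is made
up) would, once merged and synced, close the referenced GitHub issue, and
that close will now show up as done by gluster-ant:

    Fixes: #1234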

As always, please file a bug if you notice any problems.

-- 
nigelb