Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Raghavendra G
On Fri, Jul 15, 2016 at 1:09 AM, Jeff Darcy  wrote:

> > The feedback I got is, "it is not motivating to review patches that are
> > already merged by maintainer."
>
> I can totally understand that.  I've been pretty active reviewing lately,
> and it's an *awful* demotivating grind.  On the other hand, it's also
> pretty demotivating to see one's own hard work "rot" as the lack of
> reviews forces rebase after rebase.  Haven't we all seen that?  I'm
> sure the magnitude of that effect varies across teams and across parts
> of the code, but I'm equally sure that it affects all of us to some
> degree.
>
>
> > Do you suggest they should change that
> > behaviour in that case?
>
> Maybe.  The fact is that all of our maintainers have plenty of other
> responsibilities, and not all of them prioritize the same way.  I know I
> wouldn't be reviewing so many patches myself otherwise.  If reviews are
> being missed under the current rules, maybe we do need new rules.
>
> > let us give equal recognition for:
> > patches sent
> > patches reviewed - this one is missing.
> > helping users on gluster-users
> > helping users on #gluster/#gluster-dev
> >
> > Feel free to add anything more I might have missed out. May be new
> > ideas/design/big-refactor?
>
> Also doc, infrastructure work, blog/meetup/conference outreach, etc.
>

Bug triage (more generally, collecting information from the field on how
things are working and what we are lacking)
Testing
Performance measurements
Futuristic thinking (new features, rewrites etc)
Channeling our collective brain-power and prioritizing the things to
work on (roadmap etc)
Handling competition (defence/constructive arguments etc).

I know all of these are not easy to measure (and may be out of scope of
this discussion), but it doesn't hurt to list the things needed to make a
project/product successful.


> > let people do what they like more among these and let us also recognize
> them
> > for all their contributions. Let us celebrate their work in each monthly
> > news letter.
>
> Good idea.
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G

Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Prashanth Pai
Hi,

Here are some observations from how openstack community does things:

1. Their Bitergia equivalent, Stackalytics (http://stackalytics.com/), lists
   contributions with *Reviews as the default metric*. Other metrics such as
   Commits, Patchsets, Person-day effort etc. are also available.
2. They have a common 'Review Dashboard' (a really long Gerrit query) that
   everyone looks at every day. This aids in picking up patches that have
   passed regression, easier patches to review, or important ones starred by
   the community.
3. A +1 given on the spec design is treated as a gentle promise to review the
   code when the feature lands. It's also an ACK from maintainers of the
   components that the feature touches or modifies.
4. Really old patches that have had no activity in the recent past are
   auto-abandoned, with a reminder mail sent to the author.
5. Contributors there are comfortable with any reviewer uploading a new
   patchset and adding a co-author line to the commit message. This is taken
   into account by the contribution metric tool.
6. They have hackathons to collectively review code for priority features.
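The "really long Gerrit query" behind such a dashboard (point 2) can be
approximated with Gerrit's standard search operators. A minimal sketch —
the project name, label names, and age threshold here are assumptions, not
the actual OpenStack or Gluster dashboard query:

```shell
#!/bin/sh
# Sketch of a shared "review dashboard" query built from standard Gerrit
# search operators: open changes that passed verification, are not
# blocked, and saw activity in the last four weeks.
QUERY='status:open project:glusterfs label:Verified=+1 -label:Code-Review=-2 -age:4w'
# With ssh access to a Gerrit instance this could be run as, e.g.:
#   ssh -p 29418 review.example.org gerrit query --format=JSON "$QUERY"
echo "$QUERY"
```

The dashboard itself is then just this query saved as a shared URL (or a
Gerrit dashboard section) that everyone opens daily.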

Some other small suggestions:

1. Have coding-guideline deviations and styling issues checked by a Jenkins
   job (non-voting) if possible.
2. Personally, I use 'git blame' to identify reviewers to add, as they have
   modified the same parts of the code in the past and it might be easier for
   them to review it.
3. Individual contributors can set personal goals for themselves, such as one
   review per day. If this becomes a habit, it'll be great.
4. I see a lot of code that I barely know much about. I go through it but fall
   short of giving a +1 or -1 because of lack of familiarity with it. I'm sure
   there are others too who are eager to contribute to reviews but aren't very
   well versed with a specific component. It should be okay to put in review
   comments (which can be questions too, for learning) without a +1 or -1.
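Suggestion 2 can be mechanized. A self-contained sketch (it builds a
throwaway repository so it runs anywhere; in a real tree only the final
command is needed — the file name and authors are made up):

```shell
#!/bin/sh
# Demo of picking candidate reviewers from history: rank the authors who
# previously touched the file a patch modifies.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name Alice
git config user.email a@example.com
echo one > afr.c
git add afr.c
git commit -qm 'initial'
git config user.name Bob
git config user.email b@example.com
echo two >> afr.c
git commit -qam 'fix'
# Authors who touched the file, most frequent first: reviewer candidates.
git log --format='%an' -- afr.c | sort | uniq -c | sort -rn
```

`git blame` gives a finer per-line view; `git log -- <path>` is a quick
per-file approximation of the same idea.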

Cheers.

 -Prashanth Pai

- Original Message -
> From: "Vijay Bellur" <vbel...@redhat.com>
> To: "Jeff Darcy" <jda...@redhat.com>, "Pranith Kumar Karampuri" 
> <pkara...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>
> Sent: Friday, 15 July, 2016 9:01:55 AM
> Subject: Re: [Gluster-devel] Reducing merge conflicts
> 
> On 07/14/2016 03:39 PM, Jeff Darcy wrote:
> >> The feedback I got is, "it is not motivating to review patches that are
> >> already merged by maintainer."
> >
> > I can totally understand that.  I've been pretty active reviewing lately,
> > and it's an *awful* demotivating grind.  On the other hand, it's also
> > pretty demotivating to see one's own hard work "rot" as the lack of
> > reviews forces rebase after rebase.  Haven't we all seen that?  I'm
> > sure the magnitude of that effect varies across teams and across parts
> > of the code, but I'm equally sure that it affects all of us to some
> > degree.
> >
> 
> Yes, we need to avoid this kind of rot. We have 500+ open patches across
> all branches today in the review pipeline. Maybe we could clean
> up/abandon some of these patches that are not relevant anymore? Once we
> have that I think it would be useful to have an ageing based policy to
> auto abandon patches. That might help us be more proactive in managing
> our review queues.
> 
> IMO reviewing code is fun and takes some time for one to get better at.
> I have personally imbibed good coding practices, understood something
> better by reading and reviewing code. Several others [1] [2] also
> emphasize the importance of code reviews to both the developer and the
> project. If there is good enough interest in code reviews and how to get
> better at that, some of our more prolific & experienced code reviewers
> can possibly share thoughts over a hangout session?
> 
> 
> >
> >> Do you suggest they should change that
> >> behaviour in that case?
> >
> > Maybe.  The fact is that all of our maintainers have plenty of other
> > responsibilities, and not all of them prioritize the same way.  I know I
> > wouldn't be reviewing so many patches myself otherwise.  If reviews are
> > being missed under the current rules, maybe we do need new rules.
> >
> >> let us give equal recognition for:
> >> patches sent
> >> patches reviewed - this one is missing.
> >> helping users on gluster-users
> >> helping users on #gluster/#gluster-dev
> >>
> >> Feel free to add anything more I might have missed out. May be new
> >> ideas/design/big-refactor?
> >
> > Also doc, infrastructure work, blog/meetup/conference outreach, etc.
> >

Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Vijay Bellur

On 07/14/2016 03:39 PM, Jeff Darcy wrote:

The feedback I got is, "it is not motivating to review patches that are
already merged by maintainer."


I can totally understand that.  I've been pretty active reviewing lately,
and it's an *awful* demotivating grind.  On the other hand, it's also
pretty demotivating to see one's own hard work "rot" as the lack of
reviews forces rebase after rebase.  Haven't we all seen that?  I'm
sure the magnitude of that effect varies across teams and across parts
of the code, but I'm equally sure that it affects all of us to some
degree.



Yes, we need to avoid this kind of rot. We have 500+ open patches across 
all branches today in the review pipeline. Maybe we could clean 
up/abandon some of these patches that are not relevant anymore? Once we 
have that, I think it would be useful to have an ageing-based policy to 
auto-abandon patches. That might help us be more proactive in managing 
our review queues.


IMO reviewing code is fun and takes some time for one to get better at. 
I have personally imbibed good coding practices, understood something 
better by reading and reviewing code. Several others [1] [2] also 
emphasize the importance of code reviews to both the developer and the 
project. If there is good enough interest in code reviews and how to get 
better at that, some of our more prolific & experienced code reviewers 
can possibly share thoughts over a hangout session?






Do you suggest they should change that
behaviour in that case?


Maybe.  The fact is that all of our maintainers have plenty of other
responsibilities, and not all of them prioritize the same way.  I know I
wouldn't be reviewing so many patches myself otherwise.  If reviews are
being missed under the current rules, maybe we do need new rules.


let us give equal recognition for:
patches sent
patches reviewed - this one is missing.
helping users on gluster-users
helping users on #gluster/#gluster-dev

Feel free to add anything more I might have missed out. May be new
ideas/design/big-refactor?


Also doc, infrastructure work, blog/meetup/conference outreach, etc.


let people do what they like more among these and let us also recognize them
for all their contributions. Let us celebrate their work in each monthly
news letter.





Some of these aspects are easy to measure and can be automated. It is 
fairly trivial to generate review statistics using gerrit's gsql 
interface. I even have a script/query lying around somewhere for that 
and can dig it up if somebody is interested in improving that to 
automate the process of generating review statistics.
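Gerrit's gsql interface needs server-side access, so as a rough substitute
the same numbers can be approximated from Reviewed-by: trailers in the
merged history — with the caveat, noted elsewhere in this thread, that those
trailers undercount people who reviewed earlier revisions. A sketch, with a
made-up sample standing in for real `git log` output:

```shell
#!/bin/sh
# Per-reviewer counts from Reviewed-by: trailers. The sample below stands
# in for real history; in a real tree you would feed it with something
# like: git log --since='1 month ago' | sed -n 's/^ *Reviewed-by: //p'
log='Reviewed-by: Jeff Darcy <jdarcy@example.org>
Reviewed-by: Vijay Bellur <vbellur@example.org>
Reviewed-by: Jeff Darcy <jdarcy@example.org>'
# Strip the trailer prefix, then count occurrences per reviewer.
printf '%s\n' "$log" | sed 's/^Reviewed-by: //' | sort | uniq -c | sort -rn
```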


-Vijay

[1] 
http://goodmath.scientopia.org/2011/07/06/things-everyone-should-do-code-review/


[2] http://users.encs.concordia.ca/~pcr/paper/Rigby2014TOSEM.pdf




Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Pranith Kumar Karampuri
On Fri, Jul 15, 2016 at 1:09 AM, Jeff Darcy  wrote:

> > The feedback I got is, "it is not motivating to review patches that are
> > already merged by maintainer."
>
> I can totally understand that.  I've been pretty active reviewing lately,
> and it's an *awful* demotivating grind.  On the other hand, it's also
> pretty demotivating to see one's own hard work "rot" as the lack of
> reviews forces rebase after rebase.  Haven't we all seen that?  I'm
> sure the magnitude of that effect varies across teams and across parts
> of the code, but I'm equally sure that it affects all of us to some
> degree.
>
>
> > Do you suggest they should change that
> > behaviour in that case?
>
> Maybe.  The fact is that all of our maintainers have plenty of other
> responsibilities, and not all of them prioritize the same way.  I know I
> wouldn't be reviewing so many patches myself otherwise.  If reviews are
> being missed under the current rules, maybe we do need new rules.
>
> > let us give equal recognition for:
> > patches sent
> > patches reviewed - this one is missing.
> > helping users on gluster-users
> > helping users on #gluster/#gluster-dev
> >
> > Feel free to add anything more I might have missed out. May be new
> > ideas/design/big-refactor?
>
> Also doc, infrastructure work, blog/meetup/conference outreach, etc.
>
> > let people do what they like more among these and let us also recognize
> them
> > for all their contributions. Let us celebrate their work in each monthly
> > news letter.
>
> Good idea.
>

I think maybe we should summarize and come up with action items so that we
can take proper steps to improve things. Or do we have any other things we
need to discuss?

-- 
Pranith

Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Pranith Kumar Karampuri
On Fri, Jul 15, 2016 at 1:25 AM, Jeff Darcy  wrote:

> > I absolutely hate what '-1' means though, it says 'I would prefer you
> > didn't submit this'. Somebody who doesn't know what he/she is doing still
> > goes ahead and sends his/her first patch and we say 'I would prefer you
> > didn't submit this'. It is like the tool is working against more
> > contributions. It could also say 'Thanks for your contribution, I feel we
> > can improve the patch further together' on -1 too you know.
>
> When it comes to what -1 means, I've noticed quite a bit of variation
> across the group.  Sometimes it means the person doesn't want it merged
> *yet* because of minor issues (including style).  Sometimes it means they
> think the whole idea or approach is fundamentally misguided and they'll
> need significant convincing before they'll even look at the details.  (I
> tend to use -2 for that, but that's just me.)  It's definitely bad the
> way the message is worded to imply that mere *submission* is unwelcome.
> If Gerrit supports it - sadly I don't think it does - I think we could
> have a much more constructive set of -1 reasons:
>
>  * Needs style or packaging fixes (e.g. missing bug ID).
>
>  * Needs a test.
>
>  * Needs fixes for real bugs found in review.
>
>  * Needs answers/explanations/comments.
>
>  * Needs coordination with other patch .
>
> Alternatively, we could adopt an official set of such reasons as a
> matter of convention, much like we do with including the component
> in the one-line summary.  Would that help?
>

Yes, that will help. Are you saying we should add it in the comments when we
give a '-1'?


-- 
Pranith

Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Jeff Darcy
> I absolutely hate what '-1' means though, it says 'I would prefer you
> didn't submit this'. Somebody who doesn't know what he/she is doing still
> goes ahead and sends his/her first patch and we say 'I would prefer you
> didn't submit this'. It is like the tool is working against more
> contributions. It could also say 'Thanks for your contribution, I feel we
> can improve the patch further together' on -1 too you know.

When it comes to what -1 means, I've noticed quite a bit of variation
across the group.  Sometimes it means the person doesn't want it merged
*yet* because of minor issues (including style).  Sometimes it means they
think the whole idea or approach is fundamentally misguided and they'll
need significant convincing before they'll even look at the details.  (I
tend to use -2 for that, but that's just me.)  It's definitely bad the
way the message is worded to imply that mere *submission* is unwelcome.
If Gerrit supports it - sadly I don't think it does - I think we could
have a much more constructive set of -1 reasons:

 * Needs style or packaging fixes (e.g. missing bug ID).

 * Needs a test.

 * Needs fixes for real bugs found in review.

 * Needs answers/explanations/comments.

 * Needs coordination with other patch .

Alternatively, we could adopt an official set of such reasons as a
matter of convention, much like we do with including the component
in the one-line summary.  Would that help?


Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Jeff Darcy
> The feedback I got is, "it is not motivating to review patches that are
> already merged by maintainer."

I can totally understand that.  I've been pretty active reviewing lately,
and it's an *awful* demotivating grind.  On the other hand, it's also
pretty demotivating to see one's own hard work "rot" as the lack of
reviews forces rebase after rebase.  Haven't we all seen that?  I'm
sure the magnitude of that effect varies across teams and across parts
of the code, but I'm equally sure that it affects all of us to some
degree.


> Do you suggest they should change that
> behaviour in that case?

Maybe.  The fact is that all of our maintainers have plenty of other
responsibilities, and not all of them prioritize the same way.  I know I
wouldn't be reviewing so many patches myself otherwise.  If reviews are
being missed under the current rules, maybe we do need new rules.

> let us give equal recognition for:
> patches sent
> patches reviewed - this one is missing.
> helping users on gluster-users
> helping users on #gluster/#gluster-dev
>
> Feel free to add anything more I might have missed out. May be new
> ideas/design/big-refactor?

Also doc, infrastructure work, blog/meetup/conference outreach, etc.

> let people do what they like more among these and let us also recognize them
> for all their contributions. Let us celebrate their work in each monthly
> news letter.

Good idea.


Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Pranith Kumar Karampuri
On Thu, Jul 14, 2016 at 11:29 PM, Joe Julian  wrote:

> On 07/07/2016 08:58 PM, Pranith Kumar Karampuri wrote:
>
>
>
> On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy  wrote:
>
>> > What gets measured gets managed.
>>
>> Exactly.  Reviewing is part of everyone's job, but reviews aren't tracked
>> in any way that matters.  Contrast that with the *enormous* pressure most
>> of us are under to get our own patches in, and it's pretty predictable
>> what will happen.  We need to change that calculation.
>>
>>
>> > What I have seen at least is that it is easy to find
>> > people who sent patches, how many patches someone sent in a month etc.
>> There
>> > is no easy way to get these numbers for reviews. 'Reviewed-by' tag in
>> commit
>> > only includes the people who did +1/+2 on the final revision of the
>> patch,
>> > which is bad.
>>
>> That's a very good point.  I think people who comment also get
>> Reviewed-by: lines, but it doesn't matter because there's still a whole
>> world of things completely outside of Gerrit.  Reviews done by email won't
>> get counted, nor will consultations in the hallway or on IRC.  I have some
>> ideas who's most active in those ways.  Some (such as yourself) show up in
>> the Reviewed-by: statistics.  Others do not.  In terms of making sure
>> people get all the credit they deserve, those things need to be counted
>> too.  However, in terms of *getting the review queue unstuck* I'm not so
>> sure.  What matters for that is the reviews that Gerrit uses to determine
>> merge eligibility, so I think encouraging that specific kind of review
>> still moves us in a positive direction.
>>
>
> In my experience at least it was only adding 'Reviewed-by' for the people
> who gave +1/+2 on the final version of the patch.
>
> I agree about encouraging specific kind of review. At the same time we
> need to make reviewing, helping users in the community as important as
> sending patches in the eyes of everyone. It is very important to know these
> statistics to move in the right direction. My main problem with this is,
> everyone knows that reviews are important, then why are they not happening?
> Is it really laziness? Are we sure if there are people in the team who are
> not sharing the burden because of which it is becoming too much for 1 or 2
> people to handle the total load? All these things become very easy to
> reason about if we have this data. Then I am sure we can easily find how
> best to solve this issue. Same goes for spurious failures; these are
> problems others in the world face too. I remember
> watching a video where someone shared (I think it was in google) that they
> started putting giant TVs in the hall-way in all the offices and the people
> who don't attend to spurious-build-failure problems would show up on the
> screen for everyone in the world to see. Apparently the guy with the
> biggest picture(the one who was not attending to any build failures at all
> I guess) came to these folks and asked how should he get his picture
> removed from the screen, and it was solved in a day or two. We don't have
> to go to those lengths, but we do need data to nudge people in the right
> direction.
>
>
>
> Perhaps it's imposter syndrome. I know that even when I do leave comments
> on a patch, I don't add a +-1 because I don't think that my vote counts. I
> know I'm not part of the core developers so maybe I'm right, I don't know.
> Maybe some sort of published guidelines or mentorship could help?
>

Well it does count. I agree, some sort of published guidelines definitely
help. I absolutely hate what '-1' means though, it says 'I would prefer you
didn't submit this'. Somebody who doesn't know what he/she is doing still
goes ahead and sends his/her first patch and we say 'I would prefer you
didn't submit this'. It is like the tool is working against more
contributions. It could also say 'Thanks for your contribution, I feel we
can improve the patch further together' on -1 too you know.


>
>



-- 
Pranith

Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Joe Julian

On 07/07/2016 08:58 PM, Pranith Kumar Karampuri wrote:



On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy wrote:


> What gets measured gets managed.

Exactly.  Reviewing is part of everyone's job, but reviews aren't tracked
in any way that matters.  Contrast that with the *enormous* pressure most
of us are under to get our own patches in, and it's pretty predictable
what will happen.  We need to change that calculation.

> What I have seen at least is that it is easy to find
> people who sent patches, how many patches someone sent in a month etc. There
> is no easy way to get these numbers for reviews. 'Reviewed-by' tag in commit
> only includes the people who did +1/+2 on the final revision of the patch,
> which is bad.

That's a very good point.  I think people who comment also get
Reviewed-by: lines, but it doesn't matter because there's still a whole
world of things completely outside of Gerrit.  Reviews done by email won't
get counted, nor will consultations in the hallway or on IRC.  I have some
ideas who's most active in those ways.  Some (such as yourself) show up in
the Reviewed-by: statistics.  Others do not.  In terms of making sure
people get all the credit they deserve, those things need to be counted
too.  However, in terms of *getting the review queue unstuck* I'm not so
sure.  What matters for that is the reviews that Gerrit uses to determine
merge eligibility, so I think encouraging that specific kind of review
still moves us in a positive direction.


In my experience at least it was only adding 'Reviewed-by' for the 
people who gave +1/+2 on the final version of the patch.


I agree about encouraging specific kind of review. At the same time we 
need to make reviewing, helping users in the community as important as 
sending patches in the eyes of everyone. It is very important to know 
these statistics to move in the right direction. My main problem with 
this is, everyone knows that reviews are important, then why are they 
not happening? Is it really laziness? Are we sure if there are people 
in the team who are not sharing the burden because of which it is 
becoming too much for 1 or 2 people to handle the total load? All 
these things become very easy to reason about if we have this data. 
Then I am sure we can easily find how best to solve this issue. Same 
goes for spurious failures; these are problems others in the world 
face too. I remember watching a video where 
someone shared (I think it was in google) that they started putting 
giant TVs in the hall-way in all the offices and the people who don't 
attend to spurious-build-failure problems would show up on the screen 
for everyone in the world to see. Apparently the guy with the biggest 
picture(the one who was not attending to any build failures at all I 
guess) came to these folks and asked how should he get his picture 
removed from the screen, and it was solved in a day or two. We don't 
have to go to those lengths, but we do need data to nudge people in 
the right direction.





Perhaps it's imposter syndrome. I know that even when I do leave 
comments on a patch, I don't add a +-1 because I don't think that my 
vote counts. I know I'm not part of the core developers so maybe I'm 
right, I don't know. Maybe some sort of published guidelines or 
mentorship could help?

Re: [Gluster-devel] Reducing merge conflicts

2016-07-14 Thread Pranith Kumar Karampuri
On Fri, Jul 8, 2016 at 7:27 PM, Jeff Darcy  wrote:

> (combining replies to multiple people)
>
> Pranith:
> > I agree about encouraging specific kind of review. At the same time we
> need
> > to make reviewing, helping users in the community as important as sending
> > patches in the eyes of everyone. It is very important to know these
> > statistics to move in the right direction. My main problem with this is,
> > everyone knows that reviews are important, then why are they not
> happening?
> > Is it really laziness?
>
> "Laziness" was clearly a bad choice of words, for which I apologize.  I
> should have said "lack of diligence" or something to reflect that it's an
> *organizational* rather than personal problem.  We *as a group* have not
> been keeping up with the review workload.  Whatever the reasons are, to
> change the outcome we need to change behavior, and to change behavior we
> need to change the incentives.
>
>
> Raghavendra G:
> > Personally I've found a genuine -1 to be more valuable than a +1. Since
> we
> > are discussing about measuring, how does one measure the issues that are
> > prevented (through a good design, thoughtful coding/review) than the
> issues
> > that are _fixed_?
>
> Another excellent point.  It's easier to see the failures than the
> successes.
> It's a bit like traffic accidents.  Everyone sees when you cause one, but
> not
> when you avoid one.  If a regression occurs, everyone can look back to see
> who the author and reviewers were.  If there's no regression ... what then?
> Pranith has suggested some mechanism to give credit/karma in cases where it
> can't be done automatically.  Meta-review (review of reviews) is another
> possibility.  I've seen it work in other contexts, but I'm not sure how to
> apply it here.
>
> > Measuring -1s and -2s along with +1s and +2s can be a good
> > place to start with (though as with many measurements, they may not
> reflect
> > the underlying value accurately).
>
> The danger here is that we'll incentivize giving a -1 for superficial
> reasons.  We don't need more patches blocked because a reviewer doesn't
> like a file/variable name, or wants to play "I know a better way" games.
> Unfortunately, it's hard to distinguish those from enforcing standards
> that really matter, or avoiding technical debt.  I guess that brings us
> back to manual overrides and/or meta-review.
>
>
> Poornima:
> > Below are the few things that we can do to reduce our review backlog:
> > - No time for maintainers to review is not a good enough reason to bitrot
> > patches in review for months, it clearly means we need additional
> > maintainers for that component?
> > - Add maintainers for every component that is in Gluster(atleast the ones
> > which have incoming patches)
> > - For every patch we submit we add 'component(s)' label, and evaluate if
> > gerrit can automatically add maintainers as reviewers, and have another
> > label 'Maintainers ack' which needs to be present for any patch to be
> > merged.
>
> Excellent points.  Not much to add here, except that we also need a way to
> deal with patches that cross many components (as many of yours and mine
> do).
> If getting approval from one maintainer is a problem, getting approval from
> several will be worse.  Maybe it's enough to say that approval by one of
> those several maintainers is sufficient, and to rely on maintainers talking
> to one another.
>
>
> Atin:
> > How about having "review marathon" once a week by every team? In past
> this
> > has worked well and I don't see any reason why can't we spend 3-4 hours
> in a
> > meeting on weekly basis to review incoming patches on the component that
> the
> > team owns.
>
> I love this idea.  If I may add to it, I suggest that such "marathons" are
> a
> good way not only to reduce the backlog but also to teach people how to
> review well.  Reviewing's a skill, learnable like any other.  In addition
> to
> improving review quantity, getting reviews more focused on real bugs and
> technical debt would be great.
>
>
> Pranith (again):
> > Everyone in the team started reviewing the patches and giving +1 and I am
> > reviewing only after a +1.
>
> In the past I've done this myself (as a project-level maintainer), so I
> totally understand the motivation, but I'm still ambivalent about whether
> it's a good idea.  On the one hand, it seems like projects bigger than
> ours essentially work this way.  For example, how often does Linus review
> something that hasn't already been reviewed by one of his lieutenants?
> Not often, it seems.  On the other hand, reinforcing such hierarchies in
> the review process is counter to our goal of breaking them down in a more
> general sense.  I hope some day we can get to the point where people are
> actively seeking out things to review, instead of actively filtering the
> list they already have.
>

The feedback I got is, "it is not motivating to review patches that are
already merged by maintainer." 

Re: [Gluster-devel] Reducing merge conflicts

2016-07-08 Thread Jeff Darcy
(combining replies to multiple people)

Pranith:
> I agree about encouraging specific kind of review. At the same time we need
> to make reviewing, helping users in the community as important as sending
> patches in the eyes of everyone. It is very important to know these
> statistics to move in the right direction. My main problem with this is,
> everyone knows that reviews are important, then why are they not happening?
> Is it really laziness?

"Laziness" was clearly a bad choice of words, for which I apologize.  I
should have said "lack of diligence" or something to reflect that it's an
*organizational* rather than personal problem.  We *as a group* have not
been keeping up with the review workload.  Whatever the reasons are, to
change the outcome we need to change behavior, and to change behavior we
need to change the incentives.


Raghavendra G:
> Personally I've found a genuine -1 to be more valuable than a +1. Since we
> are discussing about measuring, how does one measure the issues that are
> prevented (through a good design, thoughtful coding/review) than the issues
> that are _fixed_?

Another excellent point.  It's easier to see the failures than the successes.
It's a bit like traffic accidents.  Everyone sees when you cause one, but not
when you avoid one.  If a regression occurs, everyone can look back to see
who the author and reviewers were.  If there's no regression ... what then?
Pranith has suggested some mechanism to give credit/karma in cases where it
can't be done automatically.  Meta-review (review of reviews) is another
possibility.  I've seen it work in other contexts, but I'm not sure how to
apply it here.

> Measuring -1s and -2s along with +1s and +2s can be a good
> place to start with (though as with many measurements, they may not reflect
> the underlying value accurately).

The danger here is that we'll incentivize giving a -1 for superficial
reasons.  We don't need more patches blocked because a reviewer doesn't
like a file/variable name, or wants to play "I know a better way" games.
Unfortunately, it's hard to distinguish those from enforcing standards
that really matter, or avoiding technical debt.  I guess that brings us
back to manual overrides and/or meta-review.


Poornima:
> Below are the few things that we can do to reduce our review backlog:
> - No time for maintainers to review is not a good enough reason to bitrot
> patches in review for months, it clearly means we need additional
> maintainers for that component?
> - Add maintainers for every component that is in Gluster(atleast the ones
> which have incoming patches)
> - For every patch we submit we add 'component(s)' label, and evaluate if
> gerrit can automatically add maintainers as reviewers, and have another
> label 'Maintainers ack' which needs to be present for any patch to be
> merged.

Excellent points.  Not much to add here, except that we also need a way to
deal with patches that cross many components (as many of yours and mine do).
If getting approval from one maintainer is a problem, getting approval from
several will be worse.  Maybe it's enough to say that approval by one of
those several maintainers is sufficient, and to rely on maintainers talking
to one another.


Atin:
> How about having "review marathon" once a week by every team? In past this
> has worked well and I don't see any reason why can't we spend 3-4 hours in a
> meeting on weekly basis to review incoming patches on the component that the
> team owns.

I love this idea.  If I may add to it, I suggest that such "marathons" are a
good way not only to reduce the backlog but also to teach people how to
review well.  Reviewing's a skill, learnable like any other.  In addition to
improving review quantity, getting reviews more focused on real bugs and
technical debt would be great.


Pranith (again):
> Everyone in the team started reviewing the patches and giving +1 and I am
> reviewing only after a +1.

In the past I've done this myself (as a project-level maintainer), so I
totally understand the motivation, but I'm still ambivalent about whether
it's a good idea.  On the one hand, it seems like projects bigger than
ours essentially work this way.  For example, how often does Linus review
something that hasn't already been reviewed by one of his lieutenants?
Not often, it seems.  On the other hand, reinforcing such hierarchies in
the review process is counter to our goal of breaking them down in a more
general sense.  I hope some day we can get to the point where people are
actively seeking out things to review, instead of actively filtering the
list they already have.


It's great to see such an energetic discussion.  I know it's already
the weekend for everyone I've just replied to, but I hope we can keep
the discussion going next week.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Reducing merge conflicts

2016-07-08 Thread Pranith Kumar Karampuri
On Fri, Jul 8, 2016 at 11:23 AM, Poornima Gurusiddaiah <pguru...@redhat.com>
wrote:

>
> Completely agree with your concern here. Keeping aside the regression
> part, few observations and suggestions:
> As per the Maintainers guidelines (
> https://gluster.readthedocs.io/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
> ):
>
> a> Merge patches of owned components only.
> b> Seek approvals from all maintainers before merging a patchset
> spanning multiple components.
> c> Ensure that regression tests pass for all patches before merging.
> d> Ensure that regression tests accompany all patch submissions.
> e> Ensure that documentation is updated for a noticeable change in
> user perceivable behavior or design.
> f> Encourage code unit tests from patch submitters to improve the
> overall quality of the codebase.
> g> Not merge patches written by themselves until there is a +2 Code
> Review vote by other reviewers.
>
> Clearly a, b, are not being strictly followed, because of multiple reasons.
> - Not every component in Gluster has a Maintainer
>

We need to fix this. Do you want to take up the task of coming up with a
list?


> - Its getting difficult to get review time from maintainers as they are
> maintainers for several component, and they are also active developers.
>

Is it your experience that the patch is not getting a single review, or that
there are no other people who can review? In my experience, even when there
were other people who could do the reviews, people wanted to lean on
maintainers for the initial reviews because they would find most of the
problems in the first pass. I am guilty of leaning on the main maintainers
too :-(. If this happens, others in the team won't improve at finding issues
when reviewing others' (or their own) patches. Did you already solve this
problem in the components you are working on? What are you doing to improve
reviews and get more participation? In our team, both Krutika and Ravi are
frequently among the top-10 people who send patches per month, so it was too
much load for one maintainer to take. Everyone in the team started reviewing
the patches and giving +1, and I review only after a +1. My hope is this will
lead to faster patch acceptance over time. It still feels a bit skewed though.

> - What is enforced by mere documentation of procedure is hard to implement.
>
> Below are the few things that we can do to reduce our review backlog:
> - No time for maintainers to review is not a good enough reason to bitrot
> patches in review for months, it clearly means we need additional
> maintainers for that component?

> - Add maintainers for every component that is in Gluster (at least the ones
> which have incoming patches)
> - For every patch we submit we add 'component(s)' label, and evaluate if
> gerrit can automatically add maintainers as reviewers, and
> have another label 'Maintainers ack' which needs to be present for any
> patch to be merged.
> - Before every major(or minor also?) release, any patch that is not making
> to the release should have a '-1' by the maintainer or the developer
> themselves stating the reason(preferably not no time to review).
>   The release manager should ensure that there are no patches in below
> gerrit search link provided by Jeff.
>
> Any thoughts?
>

I am in favour of more people knowing more components in the stack
(preferably more projects too). What I have seen from my experience is that
you can come up with solutions faster because you have seen the problem
solved in different ways in these different components/projects. Reviewing is
one way to gain more knowledge about a different component. Ashish sometimes
surprises me with his reviews even when he doesn't know much about the
component he is reviewing. So how can we encourage more people to pick up new
components? Do you have any ideas? Getting more reviews will be a much
smaller problem if we have more knowledgeable people per component.


> Regards,
> Poornima
>
> - Original Message -
> > From: "Jeff Darcy" <jda...@redhat.com>
> > To: "Gluster Devel" <gluster-devel@gluster.org>
> > Sent: Friday, July 8, 2016 2:02:27 AM
> > Subject: [Gluster-devel] Reducing merge conflicts
> >
> > I'm sure a lot of you are pretty frustrated with how long it can take to
> get
> > even a trivial patch through our Gerrit/Jenkins pipeline.  I know I am.
> > Slow tests, spurious failures, and bikeshedding over style issues are all
> > contributing factors.

Re: [Gluster-devel] Reducing merge conflicts

2016-07-08 Thread Nigel Babu
On Fri, Jul 8, 2016 at 11:40 AM, Atin Mukherjee <amukh...@redhat.com> wrote:

> How about having a "review marathon" once a week by every team? In the past
> this has worked well, and I don't see any reason why we can't spend 3-4
> hours in a weekly meeting reviewing incoming patches on the component that
> the team owns.
>
> On Fri, Jul 8, 2016 at 11:23 AM, Poornima Gurusiddaiah <
> pguru...@redhat.com> wrote:
>
>>
>> Completely agree with your concern here. Keeping aside the regression
>> part, few observations and suggestions:
>> As per the Maintainers guidelines (
>> https://gluster.readthedocs.io/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
>> ):
>>
>> a> Merge patches of owned components only.
>> b> Seek approvals from all maintainers before merging a patchset
>> spanning multiple components.
>> c> Ensure that regression tests pass for all patches before merging.
>> d> Ensure that regression tests accompany all patch submissions.
>> e> Ensure that documentation is updated for a noticeable change in
>> user perceivable behavior or design.
>> f> Encourage code unit tests from patch submitters to improve the
>> overall quality of the codebase.
>> g> Not merge patches written by themselves until there is a +2 Code
>> Review vote by other reviewers.
>>
>> Clearly a, b, are not being strictly followed, because of multiple
>> reasons.
>> - Not every component in Gluster has a Maintainer
>> - It's getting difficult to get review time from maintainers, as they are
>> maintainers for several components and are also active developers.
>> - What is enforced by mere documentation of procedure, is hard to
>> implement.
>>
>> Below are the few things that we can do to reduce our review backlog:
>> - No time for maintainers to review is not a good enough reason to bitrot
>> patches in review for months, it clearly means we need additional
>> maintainers for that component?
>> - Add maintainers for every component that is in Gluster (at least the ones
>> which have incoming patches)
>> - For every patch we submit we add 'component(s)' label, and evaluate if
>> gerrit can automatically add maintainers as reviewers, and have another
>> label 'Maintainers ack' which needs to be present for any patch to be
>> merged.
>>
>
> I believe this is something which Nigel is already working on.
>
>
>> - Before every major(or minor also?) release, any patch that is not
>> making to the release should have a '-1' by the maintainer or the developer
>> themselves stating the reason(preferably not no time to review).
>>
>
> IMO, it should be the other way around: if the fix/RFE is a must for the
> upcoming release, it should be attached to the tracker bug to ensure the
> release is blocked without the patch. Having a -1 just for not targeting
> it for a specific release doesn't make sense to me.
>
>
>>   The release manager should ensure that there are no patches in below
>> gerrit search link provided by Jeff.
>>
>
>> Any thoughts?
>>
>> Regards,
>> Poornima
>>
>> - Original Message -
>> > From: "Jeff Darcy" <jda...@redhat.com>
>> > To: "Gluster Devel" <gluster-devel@gluster.org>
>> > Sent: Friday, July 8, 2016 2:02:27 AM
>> > Subject: [Gluster-devel] Reducing merge conflicts
>> >
>> > I'm sure a lot of you are pretty frustrated with how long it can take
>> to get
>> > even a trivial patch through our Gerrit/Jenkins pipeline.  I know I am.
>> > Slow tests, spurious failures, and bikeshedding over style issues are
>> all
>> > contributing factors.  I'm not here to talk about those today.  What I
>> am
>> > here to talk about is the difficulty of getting somebody - anybody - to
>> look
>> > at a patch and (possibly) give it the votes it needs to be merged.  To
>> put
>> > it bluntly, laziness here is *killing* us.  The more patches we have in
>> > flight, the more merge conflicts and rebases we have to endure for each
>> one.
>> > It's a quadratic effect.  That's why I personally have been trying
>> really
>> > hard to get patches that have passed all regression tests and haven't
>> gotten
>> > any other review attention "across the finish line" so they can be
>> merged
>> > and removed from conflict with every other patch still in flight.  The
>> > search I use for this, every day, is as follows:
>> >
>> >
>&

Re: [Gluster-devel] Reducing merge conflicts

2016-07-08 Thread Atin Mukherjee
How about having a "review marathon" once a week by every team? In the past
this has worked well, and I don't see any reason why we can't spend 3-4 hours
in a weekly meeting reviewing incoming patches on the component that the
team owns.

On Fri, Jul 8, 2016 at 11:23 AM, Poornima Gurusiddaiah <pguru...@redhat.com>
wrote:

>
> Completely agree with your concern here. Keeping aside the regression
> part, few observations and suggestions:
> As per the Maintainers guidelines (
> https://gluster.readthedocs.io/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
> ):
>
> a> Merge patches of owned components only.
> b> Seek approvals from all maintainers before merging a patchset
> spanning multiple components.
> c> Ensure that regression tests pass for all patches before merging.
> d> Ensure that regression tests accompany all patch submissions.
> e> Ensure that documentation is updated for a noticeable change in
> user perceivable behavior or design.
> f> Encourage code unit tests from patch submitters to improve the
> overall quality of the codebase.
> g> Not merge patches written by themselves until there is a +2 Code
> Review vote by other reviewers.
>
> Clearly a, b, are not being strictly followed, because of multiple reasons.
> - Not every component in Gluster has a Maintainer
> - It's getting difficult to get review time from maintainers, as they are
> maintainers for several components and are also active developers.
> - What is enforced by mere documentation of procedure, is hard to
> implement.
>
> Below are the few things that we can do to reduce our review backlog:
> - No time for maintainers to review is not a good enough reason to bitrot
> patches in review for months, it clearly means we need additional
> maintainers for that component?
> - Add maintainers for every component that is in Gluster (at least the ones
> which have incoming patches)
> - For every patch we submit we add 'component(s)' label, and evaluate if
> gerrit can automatically add maintainers as reviewers, and have another
> label 'Maintainers ack' which needs to be present for any patch to be
> merged.
>

I believe this is something which Nigel is already working on.


> - Before every major(or minor also?) release, any patch that is not making
> to the release should have a '-1' by the maintainer or the developer
> themselves stating the reason(preferably not no time to review).
>

IMO, it should be the other way around: if the fix/RFE is a must for the
upcoming release, it should be attached to the tracker bug to ensure the
release is blocked without the patch. Having a -1 just for not targeting
it for a specific release doesn't make sense to me.


>   The release manager should ensure that there are no patches in below
> gerrit search link provided by Jeff.
>

> Any thoughts?
>
> Regards,
> Poornima
>
> - Original Message -
> > From: "Jeff Darcy" <jda...@redhat.com>
> > To: "Gluster Devel" <gluster-devel@gluster.org>
> > Sent: Friday, July 8, 2016 2:02:27 AM
> > Subject: [Gluster-devel] Reducing merge conflicts
> >
> > I'm sure a lot of you are pretty frustrated with how long it can take to
> get
> > even a trivial patch through our Gerrit/Jenkins pipeline.  I know I am.
> > Slow tests, spurious failures, and bikeshedding over style issues are all
> > contributing factors.  I'm not here to talk about those today.  What I am
> > here to talk about is the difficulty of getting somebody - anybody - to
> look
> > at a patch and (possibly) give it the votes it needs to be merged.  To
> put
> > it bluntly, laziness here is *killing* us.  The more patches we have in
> > flight, the more merge conflicts and rebases we have to endure for each
> one.
> > It's a quadratic effect.  That's why I personally have been trying really
> > hard to get patches that have passed all regression tests and haven't
> gotten
> > any other review attention "across the finish line" so they can be merged
> > and removed from conflict with every other patch still in flight.  The
> > search I use for this, every day, is as follows:
> >
> >
> http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+label:CentOS-regression%253E0+label:NetBSD-regression%253E0+-label:Code-Review%253C0
> >
> > That is:
> >
> > open patches on glusterfs master (change project/branch as
> appropriate to
> > your role)
> >
> > CentOS and NetBSD regression tests complete
> >
> > no -1 or -2 votes which might represent legitimate cause for delay
> >
> > If other people - especially team leads and release managers - could make
> > a similar habit of checking the queue and helping to get such "low hanging
> > fruit" out of the way, we might see an appreciable increase in our overall
> > pace of development.

Re: [Gluster-devel] Reducing merge conflicts

2016-07-07 Thread Poornima Gurusiddaiah

Completely agree with your concern here. Keeping aside the regression part, a
few observations and suggestions:
As per the Maintainers guidelines 
(https://gluster.readthedocs.io/en/latest/Contributors-Guide/Guidelines-For-Maintainers/):

a> Merge patches of owned components only.
b> Seek approvals from all maintainers before merging a patchset spanning 
multiple components.
c> Ensure that regression tests pass for all patches before merging.
d> Ensure that regression tests accompany all patch submissions.
e> Ensure that documentation is updated for a noticeable change in user 
perceivable behavior or design.
f> Encourage code unit tests from patch submitters to improve the overall 
quality of the codebase.
g> Not merge patches written by themselves until there is a +2 Code Review 
vote by other reviewers.

Clearly a and b are not being strictly followed, for multiple reasons:
- Not every component in Gluster has a maintainer.
- It's getting difficult to get review time from maintainers, as they maintain
several components and are also active developers.
- What is enforced by mere documentation of procedure is hard to implement.

Below are a few things that we can do to reduce our review backlog:
- "No time for maintainers to review" is not a good enough reason to let
patches bitrot in review for months; it clearly means we need additional
maintainers for that component.
- Add maintainers for every component in Gluster (at least the ones which
have incoming patches).
- For every patch we submit, add a 'component(s)' label, and evaluate whether
Gerrit can automatically add maintainers as reviewers, with another label
'Maintainers ack' which needs to be present for any patch to be merged.
- Before every major (or minor also?) release, any patch that is not making it
into the release should have a '-1' by the maintainer or the developers
themselves stating the reason (preferably not "no time to review").
  The release manager should ensure that there are no patches in below gerrit 
search link provided by Jeff.
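If we go the Gerrit route for the 'Maintainers ack' idea, one possible sketch
(assuming Gerrit's custom-label support; the label name, function, and vote
descriptions below are purely illustrative) is a stanza in the project's
`project.config` on `refs/meta/config`. With `MaxWithBlock`, submission
requires the label's maximum vote and is blocked by its minimum:

```ini
# Illustrative sketch only: a custom submit label in project.config
# (refs/meta/config). Name and value descriptions are assumptions.
[label "Maintainers-Ack"]
    function = MaxWithBlock
    value = -1 Maintainer objects to merging this as is
    value = 0 No score
    value = +1 Acked by a component maintainer
    defaultValue = 0
```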

Any thoughts?

Regards,
Poornima

- Original Message -
> From: "Jeff Darcy" <jda...@redhat.com>
> To: "Gluster Devel" <gluster-devel@gluster.org>
> Sent: Friday, July 8, 2016 2:02:27 AM
> Subject: [Gluster-devel] Reducing merge conflicts
> 
> I'm sure a lot of you are pretty frustrated with how long it can take to get
> even a trivial patch through our Gerrit/Jenkins pipeline.  I know I am.
> Slow tests, spurious failures, and bikeshedding over style issues are all
> contributing factors.  I'm not here to talk about those today.  What I am
> here to talk about is the difficulty of getting somebody - anybody - to look
> at a patch and (possibly) give it the votes it needs to be merged.  To put
> it bluntly, laziness here is *killing* us.  The more patches we have in
> flight, the more merge conflicts and rebases we have to endure for each one.
> It's a quadratic effect.  That's why I personally have been trying really
> hard to get patches that have passed all regression tests and haven't gotten
> any other review attention "across the finish line" so they can be merged
> and removed from conflict with every other patch still in flight.  The
> search I use for this, every day, is as follows:
> 
> 
> http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+label:CentOS-regression%253E0+label:NetBSD-regression%253E0+-label:Code-Review%253C0
> 
> That is:
> 
> open patches on glusterfs master (change project/branch as appropriate to
> your role)
>  
> CentOS and NetBSD regression tests complete
> 
> no -1 or -2 votes which might represent legitimate cause for delay
> 
> If other people - especially team leads and release managers - could make a
> similar habit of checking the queue and helping to get such "low hanging
> fruit" out of the way, we might see an appreciable increase in our overall
> pace of development.  If not, we might have to start talking about mandatory
> reviews with deadlines and penalties for non-compliance.  I'm sure nobody
> wants to see their own patches blocked and their own deadlines missed
> because they weren't doing their part to review peers' work, but that's a
> distinct possibility.  Let's all try to get this train unstuck and back on
> track before extreme measures become necessary.


Re: [Gluster-devel] Reducing merge conflicts

2016-07-07 Thread Raghavendra Gowdappa


- Original Message -
> From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> To: "Jeff Darcy" <jda...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>
> Sent: Friday, July 8, 2016 9:28:57 AM
> Subject: Re: [Gluster-devel] Reducing merge conflicts
> 
> 
> 
> On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy < jda...@redhat.com > wrote:
> 
> 
> > What gets measured gets managed.
> 
> Exactly. Reviewing is part of everyone's job, but reviews aren't tracked
> in any way that matters. Contrast that with the *enormous* pressure most
> of us are under to get our own patches in, and it's pretty predictable
> what will happen. We need to change that calculation.

+1.

> 
> 
> > What I have seen at least is that it is easy to find
> > people who sent patches, how many patches someone sent in a month etc.
> > There
> > is no easy way to get these numbers for reviews. 'Reviewed-by' tag in
> > commit
> > only includes the people who did +1/+2 on the final revision of the patch,
> > which is bad.
> 
> That's a very good point. I think people who comment also get
> Reviewed-by: lines, but it doesn't matter because there's still a whole
> world of things completely outside of Gerrit. Reviews done by email won't
> get counted, nor will consultations in the hallway or on IRC. I have some
> ideas who's most active in those ways. Some (such as yourself) show up in
> the Reviewed-by: statistics. Others do not. In terms of making sure
> people get all the credit they deserve, those things need to be counted
> too. However, in terms of *getting the review queue unstuck* I'm not so
> sure. What matters for that is the reviews that Gerrit uses to determine
> merge eligibility, so I think encouraging that specific kind of review
> still moves us in a positive direction.
> 
> In my experience, at least, it was only adding 'Reviewed-by' for the people
> who gave +1/+2 on the final version of the patch.
> 
> I agree about encouraging specific kind of review. At the same time we need
> to make reviewing, helping users in the community as important as sending
> patches in the eyes of everyone. It is very important to know these
> statistics to move in the right direction. My main problem with this is,
> everyone knows that reviews are important, then why are they not happening?
> Is it really laziness?

I think it's more a matter of:
1. Priorities and incentives rather than laziness. Of course, maintainers have
visibility on the component and kind of have a default responsibility (not the
only one) - and an incentive (again not the only one) - to at least review the
patch. As a developer, there are no such ready incentives (of course one gains
more insight into the component etc., but that's not really a strong incentive,
and expertise takes time to build up, given the other pressures we are all
subjected to).
2. It's also a factor of competency (you know more about the code/design, you
find the issues, propose solutions, and the satisfaction adds up as a feedback
loop).
3. Another factor is accountability. If there is direct accountability, we tend
to prioritize those things over others. However, reviewing as of now is a
_shared_ responsibility.

> Are we sure if there are people in the team who are
> not sharing the burden because of which it is becoming too much for 1 or 2
> people to handle the total load? All these things become very easy to reason
> about if we have this data. 

Personally, I've found a genuine -1 to be more valuable than a +1. Since we
are discussing measuring, how does one measure the issues that are prevented
(through good design and thoughtful coding/review) versus the issues that are
_fixed_? Measuring -1s and -2s along with +1s and +2s can be a good place to
start (though, as with many measurements, they may not reflect the
underlying value accurately).

> Then I am sure we can easily find how best to
> solve this issue. Same goes for spurious failures. 

Maybe test cases are not subjected to as strong scrutiny as the accompanying
code. If one is time-pressed to choose between the code and the test case, I
think I'll prioritize code over test case (same with sending patches).

> These are not problems
> that are not faced by others in the world either. I remember watching a
> video where someone shared (I think it was in google) that they started
> putting giant TVs in the hall-way in all the offices and the people who
> don't attend to spurious-build-failure problems would show up on the screen
> for everyone in the world to see. Apparently the guy with the biggest
> picture(the one who was not attending to any build failures at all I guess)
> came to these folks and asked how should he get his picture removed from the
>

Re: [Gluster-devel] Reducing merge conflicts

2016-07-07 Thread Pranith Kumar Karampuri
On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy  wrote:

> > What gets measured gets managed.
>
> Exactly.  Reviewing is part of everyone's job, but reviews aren't tracked
> in any way that matters.  Contrast that with the *enormous* pressure most
> of us are under to get our own patches in, and it's pretty predictable
> what will happen.  We need to change that calculation.
>
>
> > What I have seen at least is that it is easy to find
> > people who sent patches, how many patches someone sent in a month etc.
> There
> > is no easy way to get these numbers for reviews. 'Reviewed-by' tag in
> commit
> > only includes the people who did +1/+2 on the final revision of the
> patch,
> > which is bad.
>
> That's a very good point.  I think people who comment also get
> Reviewed-by: lines, but it doesn't matter because there's still a whole
> world of things completely outside of Gerrit.  Reviews done by email won't
> get counted, nor will consultations in the hallway or on IRC.  I have some
> ideas who's most active in those ways.  Some (such as yourself) show up in
> the Reviewed-by: statistics.  Others do not.  In terms of making sure
> people get all the credit they deserve, those things need to be counted
> too.  However, in terms of *getting the review queue unstuck* I'm not so
> sure.  What matters for that is the reviews that Gerrit uses to determine
> merge eligibility, so I think encouraging that specific kind of review
> still moves us in a positive direction.
>

In my experience, at least, it was only adding 'Reviewed-by' for the people
who gave +1/+2 on the final version of the patch.
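To make the gap concrete: the only review data we can mine from git history
today is the Reviewed-by trailers, and counting them is a few lines of script.
A minimal sketch (the sample log text, commit subjects, and addresses below
are made up for illustration):

```python
import re
from collections import Counter

def count_reviewed_by(log_text):
    """Count 'Reviewed-by:' trailers per reviewer in raw `git log` output.

    As noted above, this only credits reviewers who voted on the final
    revision of each patch, so it undercounts real review effort."""
    counts = Counter()
    for line in log_text.splitlines():
        # Trailer format: "Reviewed-by: Full Name <address>"
        m = re.match(r"\s*Reviewed-by:\s*(.+?)\s*<", line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Made-up sample of `git log` output, for illustration only.
sample = """
commit abc123
    cluster/afr: fix a thing

    Reviewed-by: Jeff Darcy <jdarcy@example.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@example.org>

commit def456
    dht: fix another thing

    Reviewed-by: Jeff Darcy <jdarcy@example.org>
"""

print(count_reviewed_by(sample))
```

In practice one would pipe real history through it (e.g. the output of
`git log --since="1 month ago"`) and compare the tallies with per-author
patch counts.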

I agree about encouraging specific kind of review. At the same time we need
to make reviewing, helping users in the community as important as sending
patches in the eyes of everyone. It is very important to know these
statistics to move in the right direction. My main problem with this is,
everyone knows that reviews are important, then why are they not happening?
Is it really laziness? Are we sure if there are people in the team who are
not sharing the burden because of which it is becoming too much for 1 or 2
people to handle the total load? All these things become very easy to
reason about if we have this data. Then I am sure we can easily find how
best to solve this issue. Same goes for spurious failures. These are not
problems that are not faced by others in the world either. I remember
watching a video where someone shared (I think it was in google) that they
started putting giant TVs in the hall-way in all the offices and the people
who don't attend to spurious-build-failure problems would show up on the
screen for everyone in the world to see. Apparently the guy with the
biggest picture (the one who was not attending to any build failures at all,
I guess) came to these folks and asked how he should get his picture
removed from the screen, and it was solved in a day or two. We don't have
to go to those lengths, but we do need data to nudge people in the right
direction.


-- 
Pranith

Re: [Gluster-devel] Reducing merge conflicts

2016-07-07 Thread Pranith Kumar Karampuri
+Nigel

On Fri, Jul 8, 2016 at 7:42 AM, Pranith Kumar Karampuri  wrote:

> What gets measured gets managed. It is good that you started this thread.
> Problem is two fold. We need a way to first find people who are reviewing a
> lot and give them more karma points in the community by encouraging that
> behaviour(making these stats known to public lets say in monthly news
> letter is one way). It is equally important to review patches when you
> compare it to sending patches. What I have seen at least is that it is easy
> to find people who sent patches, how many patches someone sent in a month
> etc. There is no easy way to get these numbers for reviews. 'Reviewed-by'
> tag in commit only includes the people who did +1/+2 on the final revision
> of the patch, which is bad. So I feel that is the first problem to be
> solved if we have to get better at this. Once I know how I am doing on a
> regular basis in this aspect I am sure I will change my ways to contribute
> better in this aspect. I would love to know what others think about this
> too.
>

Would it be possible for you to get this data using some script, maybe? I
think we do have APIs.
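A hedged sketch of what such a script could look like against Gerrit's REST
API (the instance URL and query string are assumptions, and the canned
response below is made up, not real data). The `o=DETAILED_LABELS` option
makes Gerrit include every vote on the current patch set, which is exactly
the per-reviewer data Gerrit itself uses for merge eligibility:

```python
import json
from collections import Counter
from urllib.request import urlopen

GERRIT = "https://review.gluster.org"  # assumed instance URL

def fetch_changes(query, limit=100):
    """Fetch change records, including per-reviewer votes, from Gerrit.

    Gerrit prefixes JSON responses with a ")]}'" guard line (to defeat
    XSSI), so we strip the first line before parsing."""
    url = (f"{GERRIT}/changes/?q={query.replace(' ', '+')}"
           f"&o=DETAILED_LABELS&n={limit}")
    with urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    return json.loads(body.split("\n", 1)[1])

def tally_votes(changes):
    """Count non-zero Code-Review votes (+2/+1/-1/-2) per reviewer."""
    counts = Counter()
    for change in changes:
        votes = change.get("labels", {}).get("Code-Review", {}).get("all", [])
        for vote in votes:
            if vote.get("value"):  # 0 means the reviewer never voted
                counts[vote.get("name", "unknown")] += 1
    return counts

# Canned response shaped like Gerrit's output; names are placeholders.
canned = [{"labels": {"Code-Review": {"all": [
    {"name": "Reviewer A", "value": 2},
    {"name": "Reviewer B", "value": -1},
    {"name": "Reviewer C", "value": 0},  # no vote, not counted
]}}}]
print(dict(tally_votes(canned)))  # {'Reviewer A': 1, 'Reviewer B': 1}
```

Unlike the Reviewed-by trailers in git history, this counts votes on every
change matching the query, merged or not, so it credits -1s as well as +2s.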


>
> On Fri, Jul 8, 2016 at 2:02 AM, Jeff Darcy  wrote:
>
>> [... original message quoted in full; snipped ...]
>
>
>
> --
> Pranith
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Reducing merge conflicts

2016-07-07 Thread Jeff Darcy
> What gets measured gets managed.

Exactly.  Reviewing is part of everyone's job, but reviews aren't tracked
in any way that matters.  Contrast that with the *enormous* pressure most
of us are under to get our own patches in, and it's pretty predictable
what will happen.  We need to change that calculation.


> What I have seen at least is that it is easy to find
> people who sent patches, how many patches someone sent in a month etc. There
> is no easy way to get these numbers for reviews. 'Reviewed-by' tag in commit
> only includes the people who did +1/+2 on the final revision of the patch,
> which is bad.

That's a very good point.  I think people who comment also get
Reviewed-by: lines, but it doesn't matter because there's still a whole
world of things completely outside of Gerrit.  Reviews done by email won't
get counted, nor will consultations in the hallway or on IRC.  I have some
ideas who's most active in those ways.  Some (such as yourself) show up in
the Reviewed-by: statistics.  Others do not.  In terms of making sure
people get all the credit they deserve, those things need to be counted
too.  However, in terms of *getting the review queue unstuck* I'm not so
sure.  What matters for that is the reviews that Gerrit uses to determine
merge eligibility, so I think encouraging that specific kind of review
still moves us in a positive direction.
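As a sketch of how the broader vote data could be pulled out anyway:
Gerrit's REST API (the /changes/<id>/detail endpoint) reports every
Code-Review vote on a change, not just the final-revision ones that end up
as Reviewed-by: trailers. Assuming the standard `)]}'` XSSI prefix on
responses, something like this could extract all voters:

```python
import json

XSSI_PREFIX = ")]}'"

def parse_gerrit_json(body):
    """Gerrit prepends )]}' to JSON responses as XSSI protection; strip it."""
    if body.startswith(XSSI_PREFIX):
        body = body[len(XSSI_PREFIX):]
    return json.loads(body)

def code_review_voters(change_detail):
    """Map every account that voted on Code-Review (any revision) to its vote."""
    votes = change_detail.get("labels", {}).get("Code-Review", {}).get("all", [])
    return {v["name"]: v.get("value", 0) for v in votes if "name" in v}
```

Aggregated over merged changes, this would credit everyone who voted, not
just whoever happened to +1/+2 the last revision.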


Re: [Gluster-devel] Reducing merge conflicts

2016-07-07 Thread Pranith Kumar Karampuri
What gets measured gets managed. It is good that you started this thread.
The problem is twofold. We need a way to find the people who are reviewing
a lot and give them more karma points in the community by encouraging that
behaviour (making these stats public, let's say in a monthly newsletter,
is one way). Reviewing patches is just as important as sending them. What
I have seen at least is that it is easy
to find people who sent patches, how many patches someone sent in a month
etc. There is no easy way to get these numbers for reviews. 'Reviewed-by'
tag in commit only includes the people who did +1/+2 on the final revision
of the patch, which is bad. So I feel that is the first problem to be
solved if we have to get better at this. Once I know how I am doing on a
regular basis in this aspect I am sure I will change my ways to contribute
better in this aspect. I would love to know what others think about this
too.

On Fri, Jul 8, 2016 at 2:02 AM, Jeff Darcy  wrote:

> [... original message quoted in full; snipped ...]



-- 
Pranith

[Gluster-devel] Reducing merge conflicts

2016-07-07 Thread Jeff Darcy
I'm sure a lot of you are pretty frustrated with how long it can take to get 
even a trivial patch through our Gerrit/Jenkins pipeline.  I know I am.  Slow 
tests, spurious failures, and bikeshedding over style issues are all 
contributing factors.  I'm not here to talk about those today.  What I am here 
to talk about is the difficulty of getting somebody - anybody - to look at a 
patch and (possibly) give it the votes it needs to be merged.  To put it 
bluntly, laziness here is *killing* us.  The more patches we have in flight, 
the more merge conflicts and rebases we have to endure for each one.  It's a 
quadratic effect.  That's why I personally have been trying really hard to get 
patches that have passed all regression tests and haven't gotten any other 
review attention "across the finish line" so they can be merged and removed 
from conflict with every other patch still in flight.  The search I use for 
this, every day, is as follows:


http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+label:CentOS-regression%253E0+label:NetBSD-regression%253E0+-label:Code-Review%253C0

That is:

- open patches on glusterfs master (change project/branch as appropriate
  to your role)
- CentOS and NetBSD regression tests complete
- no -1 or -2 votes which might represent legitimate cause for delay
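The same search is available through Gerrit's REST query endpoint
(/changes/?q=...), so the daily check could also be scripted rather than
done from the dashboard. A small sketch, assuming only the standard query
operators used above:

```python
import urllib.parse

GERRIT_URL = "http://review.gluster.org"

def review_queue_url(project="glusterfs", branch="master"):
    """Build the REST equivalent of the dashboard search: open patches
    with both regressions passing and no -1/-2 code reviews."""
    query = " ".join([
        "status:open",
        "project:" + project,
        "branch:" + branch,
        "label:CentOS-regression>0",
        "label:NetBSD-regression>0",
        "-label:Code-Review<0",
    ])
    return GERRIT_URL + "/changes/?q=" + urllib.parse.quote(query)
```

Fetching that URL returns one JSON object per mergeable-but-unreviewed
change (prefixed with Gerrit's `)]}'` guard), which makes it easy to turn
into an automated daily reminder.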

If other people - especially team leads and release managers - could make a 
similar habit of checking the queue and helping to get such "low hanging fruit" 
out of the way, we might see an appreciable increase in our overall pace of 
development.  If not, we might have to start talking about mandatory reviews 
with deadlines and penalties for non-compliance.  I'm sure nobody wants to see 
their own patches blocked and their own deadlines missed because they weren't 
doing their part to review peers' work, but that's a distinct possibility.  
Let's all try to get this train unstuck and back on track before extreme 
measures become necessary.