1. I agree that the bar for patches going in should be very high: there's 
always the risk of some subtle regression. The more patches, the higher the 
risk, and the more traumatic the update.

2. I like the idea of having a list of proposed candidate patches, all of which 
can be reviewed and discussed before going in. 

> On 16 Jul 2015, at 02:43, Vinod Kumar Vavilapalli <vino...@hortonworks.com> 
> wrote:
> 
> https://issues.apache.org/jira/issues/?jql=labels%20%3D%202.6.1-candidate


The working link is 
https://issues.apache.org/jira/browse/YARN-3575?jql=labels%20%3D%202.6.1-candidate
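
For anyone who wants that list without a browser, here's a minimal sketch of 
pulling it over JIRA's standard REST search endpoint (/rest/api/2/search); the 
jql parameter is just the URL-encoded label query, and the field list and 
maxResults values are my own choices, not anything agreed here:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CandidateList {
      public static void main(String[] args) throws Exception {
        // URL-encoded form of: labels = 2.6.1-candidate
        URL url = new URL("https://issues.apache.org/jira/rest/api/2/search"
            + "?jql=labels%20%3D%202.6.1-candidate"
            + "&fields=key,summary&maxResults=100");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
          for (String line; (line = in.readLine()) != null; ) {
            System.out.println(line); // raw JSON; pipe through a JSON tool
          }
        }
      }
    }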

3. Maybe we should have some guidelines on what isn't going to get in, except 
in very, very special cases:

- any change to classpath/dependencies
- any change to the signature of an API, including exception types & text 
(see the sketch below)
- changes to wire formats
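
To make the API point concrete, here's a hypothetical sketch (invented class 
and method names, not real Hadoop code) of why even exception types and 
message text are effectively part of the signature:

    import java.io.IOException;

    // Hypothetical API, as shipped in "2.6.0"; downstream code
    // compiled against this declaration.
    abstract class BlockStore {
      abstract void rename(String src, String dst) throws IOException;
    }

    class Caller {
      void move(BlockStore store) {
        try {
          store.rename("/a", "/b");
        } catch (IOException e) {
          // This catch covers everything rename() declares today. If a
          // 2.6.1 patch adds "throws StoreUnavailableException" (a new
          // checked exception outside the IOException tree), this class
          // stops compiling; and if a patch merely rewords the exception
          // message, downstream tests asserting on the text can break.
          System.err.println("rename failed: " + e);
        }
      }
    }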

4. We could also consider driving patches based on those that downstream 
redistributors of Hadoop felt were important enough to backport. That's 
Cloudera as well as us, Amazon if they filed JIRAs, Microsoft, and others. 
Ideally these would be patches that have already been tested and released, so 
there's a high chance regressions would have surfaced already.

5. Then there's the "these changes broke HBase" category; Vinod already has 
HADOOP-11710 in there as an example.

6. And of course, any security patch should go in.

Overall then: the expectation should be that patches won't go in by default, 
unless viewed as critical. We have to be ruthless, and people shouldn't commit 
things without getting approval from others.

-Steve
