Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Karthik Kambatla
I would like to understand the trunk-incompat part of the proposal a little
better.

Is trunk-incompat always going to be a superset of trunk? If yes, is it
just a change in naming convention with a hope that our approach to trunk
stability changes as Sangjin mentioned?

Or, is it okay for trunk-incompat to be based off an older commit in
trunk with (in)frequent rebases? This carries the risk of incompatible changes
truly rotting. Periodic rebases would ensure these changes don't rot while
also easing the burden of hosting two branches; if we choose this route,
some guidance on the cadence and on who does the rebases would be nice.

On Fri, Jun 10, 2016 at 5:11 PM, Andrew Wang wrote:

> Let me try to clarify a few points, since not everyone might have been
> present for the previous emails.
>
> On the "Looking to a Hadoop 3 release" thread, we already reached
> consensus on doing releases from trunk. People didn't want to have to
> commit to another branch, and wanted to try releasing from trunk. The
> question, then, was how to ensure that trunk remains stable and releasable.
>
> Part of Vinod's proposal was that we, as a community, be more judicious
> about what we commit to trunk, and try to make use of more feature branches
> for larger efforts. There was no requirement that 1-2 patch changes go
> through a feature branch. There weren't any requirements around # of
> patches or length of development at all, just asking that committers be
> more judicious. I personally think Sangjin's rule of thumb of ~12 patches
> or ~1 month is about right, but it's up to the developers who are
> involved, and I doubt any one standard will fit all situations.
>
> So, this is about as low-overhead a policy as there is: devs, please be
> careful when committing to trunk, and consider using a feature branch for
> bigger efforts.
>
> If you have further ideas about how to improve the stability of trunk, I'd
> love to hear them. I'd hope, though, that the above would be a
> non-controversial statement.
>
> Best,
> Andrew
>
> On Fri, Jun 10, 2016 at 2:10 PM, Sangjin Lee  wrote:
>
>> Thanks for your thoughts Anu.
>>
>> Regarding your question
>>
>>> And then comes the question: once 3.0 becomes official, where do we
>>> check in a change if it would break something? So this will lead us
>>> back to trunk being the unstable branch and 3.0 being the new “branch-2”.
>>
>>
>> Andrew mentioned in the original email
>>
>>> Regarding "trunk-incompat", since we're still in the alpha stage for
>>> 3.0.0, there's no need for this branch yet. This aspect of Vinod's proposal
>>> was still under a bit of discussion; Chris Douglas thought we should cut a
>>> branch-3 for the first 3.0.0 beta, which aligns with my original thinking.
>>> This point doesn't necessarily need to be resolved now though, since again
>>> we're still doing alphas.
>>
>>
>> and I agree with that sentiment. I think even if we have a
>> "trunk-incompat" branch to hold future incompatible changes, the situation
>> will change little from today. Instead of dealing with "trunk" (where
>> incompatible changes may appear) and "branch-3", we would be dealing with
>> "trunk-incompat" and "trunk". Names are largely mnemonics then.
>>
>>
>> On Fri, Jun 10, 2016 at 12:37 PM, Anu Engineer wrote:
>>
>>> I actively work on two branches (Diskbalancer and ozone) and I agree
>>> with most of what Sangjin said.
>>> There is an overhead in working with branches; there are both technical
>>> costs and administrative issues which discourage developers from using
>>> branches.
>>>
>>> I think the biggest issue with branch-based development is the fact
>>> that other developers do not use a branch.
>>> If a small feature appears as a series of commits to “datanode.java”,
>>> the branch-based developer ends up rebasing and paying the price of
>>> rebasing many times. If everyone followed a model of branch + pull
>>> request, other branches would not have to deal with continually rebasing
>>> onto trunk commits. If we are moving to branch-based development, we
>>> should probably move to that model for most development to avoid this
>>> tax on the people who actually end up working in the branches.
>>>
>>> I do have a question in my mind though: what is being proposed is that
>>> we move active development to branches if the feature is small or
>>> incomplete, while keeping trunk open for check-ins. One of the biggest
>>> reasons why we check in to trunk and not to branch-2 is because it is a
>>> change that will break backward compatibility. So do we have an
>>> expectation of backward compatibility through the 3.0-alpha series?
>>> (I personally vote no, since 3.0 is experimental at this stage.) But if
>>> we decide to support some sort of backward compatibility, then
>>> willy-nilly committing to trunk while still maintaining the expectation
>>> that we can release alphas from 3.0 does not look possible.
>>>
>>> And then comes the question, 

[jira] [Created] (YARN-5240) TestSystemMetricsPublisher.testPublishApplicationMetrics fails in trunk

2016-06-10 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-5240:
---

 Summary: TestSystemMetricsPublisher.testPublishApplicationMetrics 
fails in trunk
 Key: YARN-5240
 URL: https://issues.apache.org/jira/browse/YARN-5240
 Project: Hadoop YARN
  Issue Type: Test
  Components: test
Reporter: Rohith Sharma K S


In the build 
[link|https://builds.apache.org/job/PreCommit-YARN-Build/11975/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt],
the test case failed.
{noformat}
Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.554 sec <<< 
FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher
testPublishApplicationMetrics(org.apache.hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher)
  Time elapsed: 2.206 sec  <<< FAILURE!
java.lang.AssertionError: expected:<> but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher.testPublishApplicationMetrics(TestSystemMetricsPublisher.java:201)

{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5239) Eliminate unused imports checkstyle warnings

2016-06-10 Thread Joep Rottinghuis (JIRA)
Joep Rottinghuis created YARN-5239:
--

 Summary: Eliminate unused imports checkstyle warnings
 Key: YARN-5239
 URL: https://issues.apache.org/jira/browse/YARN-5239
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: YARN-2928
Reporter: Joep Rottinghuis
Assignee: Joep Rottinghuis
Priority: Trivial


There are ~8 existing checkstyle warnings generated by unused imports.
By fully qualifying the classes in the javadoc and manually wrapping the
javadoc under 80 characters, we can eliminate these warnings.
This will help with the eventual merge because we then introduce 8 fewer
checkstyle warnings.

The only checkstyle warnings left now are "too many arguments" warnings, which
cannot easily be fixed without changing the code structure.
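A hedged before/after sketch of the approach (TimelineEntity is a real ATSv2 class, but the writer class and method below are simplified placeholders, not the actual timelineserver code):

{code}
package org.apache.hadoop.yarn.server.timelineservice.storage;

// Before: the import below existed only so that a short {@link TimelineEntity}
// reference in the javadoc would resolve, and checkstyle flagged it as unused.
// import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

public class ExampleWriter {

  /**
   * Writes the given
   * {@link org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity}
   * to the backing storage. Fully qualifying the class inside the javadoc lets
   * us drop the import above; the javadoc is wrapped by hand so each line stays
   * under 80 characters.
   */
  public void write(Object entity) {
    // implementation elided
  }
}
{code}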



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5238) Handle enforceExecutionType == false in AMRMClient

2016-06-10 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5238:
-

 Summary: Handle enforceExecutionType == false in AMRMClient 
 Key: YARN-5238
 URL: https://issues.apache.org/jira/browse/YARN-5238
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: Arun Suresh


Currently only *enforceExecutionType == true* is supported. To support *false*, 
the {{RemoteRequestTable#addResourceRequest}} used by the AMRMClientImpl should 
be modified to something like:
{noformat}
if (!execTypeReq.getEnforceExecutionType()) {
  put(priority, resourceName,
      execTypeReq.getExecutionType(), capability, resourceRequestInfo);
} else {
  for (ExecutionType eType : ExecutionType.values()) {
    put(priority, resourceName, eType,
        capability, resourceRequestInfo);
  }
}
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Andrew Wang
Let me try to clarify a few points, since not everyone might have been
present for the previous emails.

On the "Looking to a Hadoop 3 release" thread, we already reached consensus
on doing releases from trunk. People didn't want to have to commit to
another branch, and wanted to try releasing from trunk. The question, then,
was how to ensure that trunk remains stable and releasable.

Part of Vinod's proposal was that we, as a community, be more judicious
about what we commit to trunk, and try to make use of more feature branches
for larger efforts. There was no requirement that 1-2 patch changes go
through a feature branch. There weren't any requirements around # of
patches or length of development at all, just asking that committers be
more judicious. I personally think Sangjin's rule of thumb of ~12 patches
or ~1 month is about right, but it's up to the developers who are
involved, and I doubt any one standard will fit all situations.

So, this is about as low-overhead a policy as there is: devs, please be
careful when committing to trunk, and consider using a feature branch for
bigger efforts.

If you have further ideas about how to improve the stability of trunk, I'd love
to hear them. I'd hope, though, that the above would be a non-controversial
statement.

Best,
Andrew

On Fri, Jun 10, 2016 at 2:10 PM, Sangjin Lee  wrote:

> Thanks for your thoughts Anu.
>
> Regarding your question
>
>> And then comes the question: once 3.0 becomes official, where do we
>> check in a change if it would break something? So this will lead us
>> back to trunk being the unstable branch and 3.0 being the new “branch-2”.
>
>
> Andrew mentioned in the original email
>
>> Regarding "trunk-incompat", since we're still in the alpha stage for
>> 3.0.0, there's no need for this branch yet. This aspect of Vinod's proposal
>> was still under a bit of discussion; Chris Douglas thought we should cut a
>> branch-3 for the first 3.0.0 beta, which aligns with my original thinking.
>> This point doesn't necessarily need to be resolved now though, since again
>> we're still doing alphas.
>
>
> and I agree with that sentiment. I think even if we have a
> "trunk-incompat" branch to hold future incompatible changes, the situation
> will change little from today. Instead of dealing with "trunk" (where
> incompatible changes may appear) and "branch-3", we would be dealing with
> "trunk-incompat" and "trunk". Names are largely mnemonics then.
>
>
> On Fri, Jun 10, 2016 at 12:37 PM, Anu Engineer 
> wrote:
>
>> I actively work on two branches (Diskbalancer and ozone) and I agree with
>> most of what Sangjin said.
>> There is an overhead in working with branches; there are both technical
>> costs and administrative issues which discourage developers from using
>> branches.
>>
>> I think the biggest issue with branch-based development is the fact that
>> other developers do not use a branch.
>> If a small feature appears as a series of commits to “datanode.java”,
>> the branch-based developer ends up rebasing and paying the price of
>> rebasing many times. If everyone followed a model of branch + pull
>> request, other branches would not have to deal with continually rebasing
>> onto trunk commits. If we are moving to branch-based development, we
>> should probably move to that model for most development to avoid this tax
>> on the people who actually end up working in the branches.
>>
>> I do have a question in my mind though: what is being proposed is that we
>> move active development to branches if the feature is small or
>> incomplete, while keeping trunk open for check-ins. One of the biggest
>> reasons why we check in to trunk and not to branch-2 is because it is a
>> change that will break backward compatibility. So do we have an
>> expectation of backward compatibility through the 3.0-alpha series?
>> (I personally vote no, since 3.0 is experimental at this stage.) But if we
>> decide to support some sort of backward compatibility, then willy-nilly
>> committing to trunk while still maintaining the expectation that we can
>> release alphas from 3.0 does not look possible.
>>
>> And then comes the question: once 3.0 becomes official, where do we
>> check in a change if it would break something?
>> So this will lead us back to trunk being the unstable branch and 3.0
>> being the new “branch-2”.
>>
>> One more point: if we are always going to use a branch, then we are
>> looking at a model similar to a git + pull request model. If that is so,
>> would it make sense to modify the rules to make these branches easier to
>> merge?
>> Say, for example, if all commits in a branch have followed review and
>> check-in policy just like trunk, and commits have been made only after a
>> sign-off from a committer, would it be possible to merge with a 3-day
>> voting period instead of 7, or to treat it just like today’s commit to
>> trunk but with 2 people signing off?
>>
>> What I am suggesting is reducing the 

[jira] [Created] (YARN-5237) Not all logs get aggregated with rolling log aggregation.

2016-06-10 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-5237:
---

 Summary: Not all logs get aggregated with rolling log aggregation.
 Key: YARN-5237
 URL: https://issues.apache.org/jira/browse/YARN-5237
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong


Steps to reproduce:
1) enable RM recovery
2) Run a sleep job
3) restart RM
4) kill the application
We cannot find the logs for the first attempt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5236) FlowRunCoprocessor brings down HBase RegionServer

2016-06-10 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-5236:


 Summary: FlowRunCoprocessor brings down HBase RegionServer
 Key: YARN-5236
 URL: https://issues.apache.org/jira/browse/YARN-5236
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Reporter: Haibo Chen


The FlowRunCoprocessor, when loaded in HBase, will bring down the region server 
with the following exception:

java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment.getRegion()

I am running it with HBase 1.2.1 in pseudo-distributed mode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Sangjin Lee
Thanks for your thoughts Anu.

Regarding your question

> And then comes the question: once 3.0 becomes official, where do we
> check in a change if it would break something? So this will lead us
> back to trunk being the unstable branch and 3.0 being the new “branch-2”.


Andrew mentioned in the original email

> Regarding "trunk-incompat", since we're still in the alpha stage for
> 3.0.0, there's no need for this branch yet. This aspect of Vinod's proposal
> was still under a bit of discussion; Chris Douglas thought we should cut a
> branch-3 for the first 3.0.0 beta, which aligns with my original thinking.
> This point doesn't necessarily need to be resolved now though, since again
> we're still doing alphas.


and I agree with that sentiment. I think even if we have a "trunk-incompat"
branch to hold future incompatible changes, the situation will change
little from today. Instead of dealing with "trunk" (where incompatible
changes may appear) and "branch-3", we would be dealing with
"trunk-incompat" and "trunk". Names are largely mnemonics then.


On Fri, Jun 10, 2016 at 12:37 PM, Anu Engineer 
wrote:

> I actively work on two branches (Diskbalancer and ozone) and I agree with
> most of what Sangjin said.
> There is an overhead in working with branches; there are both technical
> costs and administrative issues which discourage developers from using
> branches.
>
> I think the biggest issue with branch-based development is the fact that
> other developers do not use a branch.
> If a small feature appears as a series of commits to “datanode.java”,
> the branch-based developer ends up rebasing and paying the price of
> rebasing many times. If everyone followed a model of branch + pull request,
> other branches would not have to deal with continually rebasing onto trunk
> commits. If we are moving to branch-based development, we should probably
> move to that model for most development to avoid this tax on the people who
> actually end up working in the branches.
>
> I do have a question in my mind though: what is being proposed is that we
> move active development to branches if the feature is small or incomplete,
> while keeping trunk open for check-ins. One of the biggest reasons why we
> check in to trunk and not to branch-2 is because it is a change that will
> break backward compatibility. So do we have an expectation of backward
> compatibility through the 3.0-alpha series? (I personally vote no, since
> 3.0 is experimental at this stage.) But if we decide to support some sort
> of backward compatibility, then willy-nilly committing to trunk while still
> maintaining the expectation that we can release alphas from 3.0 does not
> look possible.
>
> And then comes the question: once 3.0 becomes official, where do we
> check in a change if it would break something?
> So this will lead us back to trunk being the unstable branch and 3.0 being
> the new “branch-2”.
>
> One more point: if we are always going to use a branch, then we are looking
> at a model similar to a git + pull request model. If that is so, would it
> make sense to modify the rules to make these branches easier to merge?
> Say, for example, if all commits in a branch have followed review and
> check-in policy just like trunk, and commits have been made only after a
> sign-off from a committer, would it be possible to merge with a 3-day
> voting period instead of 7, or to treat it just like today’s commit to
> trunk but with 2 people signing off?
>
> What I am suggesting is reducing the administrative overhead of using a
> branch to encourage the use of branching.
> Right now it feels like Apache’s process encourages committing directly to
> trunk rather than to a branch.
>
> Thanks
> Anu
>
>
> On 6/10/16, 10:50 AM, "sjl...@gmail.com on behalf of Sangjin Lee" <
> sjl...@gmail.com on behalf of sj...@apache.org> wrote:
>
> >Having worked on a major feature in a feature branch, I have some thoughts
> >and observations on feature branch development.
> >
> >IMO feature branch development v. direct commits to trunk in piecemeal is
> >really a choice of *granularity*. Do we want a series of fine-grained
> state
> >changes on trunk or fewer coarse-grained chunks of commits on trunk?
> >
> >This makes me favor a branch-based development model for any
> "decent-sized"
> >features (we'll need to define "decent-sized" of course). Once you have
> >coarse-grained changes, it's easier to reason about what made what release
> >and in what state. As importantly, it makes it easier to back out a
> >complete feature fairly easily if that becomes necessary. My totally
> >unscientific suggestion may be that if a feature takes more than a dozen commits
> >and longer than a month, we should probably have a bias towards a feature
> >branch.
> >
> >Branch-based development also makes you go faster if your feature is
> >larger. I wouldn't do it the other way for timeline service v.2 for
> example.
> >
> >That said, feature branches don't come for free. Now the 

[jira] [Created] (YARN-5235) Avoid re-creation of EventColumnNameConverter in HBaseTimelineWriterImpl#storeEvents

2016-06-10 Thread Joep Rottinghuis (JIRA)
Joep Rottinghuis created YARN-5235:
--

 Summary: Avoid re-creation of EventColumnNameConverter in 
HBaseTimelineWriterImpl#storeEvents
 Key: YARN-5235
 URL: https://issues.apache.org/jira/browse/YARN-5235
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: YARN-2928
Reporter: Joep Rottinghuis
Assignee: Joep Rottinghuis
Priority: Trivial


As per the discussion in YARN-5052, [~varun_saxena] noted:
bq. In HBaseTimelineWriterImpl#storeEvents, we iterate over all events in a
loop and will be creating an EventColumnNameConverter object each time.
Although it's not a very heavy object right now, can't we just create it once
outside the loop?
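A minimal sketch of the suggested hoisting; the converter and the surrounding types here are simplified stand-ins, not the actual HBaseTimelineWriterImpl code:

{code}
// Stand-in for the real EventColumnNameConverter, just to show the pattern.
class EventColumnNameConverter {
  byte[] encode(String eventId) {
    return eventId.getBytes(java.nio.charset.StandardCharsets.UTF_8);
  }
}

class StoreEventsSketch {
  void storeEvents(Iterable<String> eventIds) {
    // Create the converter once, outside the loop, instead of constructing a
    // new instance on every iteration.
    EventColumnNameConverter converter = new EventColumnNameConverter();
    for (String eventId : eventIds) {
      byte[] columnName = converter.encode(eventId);
      // ... write columnName to the HBase column family (elided) ...
    }
  }
}
{code}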





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Hadoop-Yarn-trunk-Java8 - Build # 1568 - Still Failing

2016-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1568/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 65480 lines...]
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop YARN . SUCCESS [  4.866 s]
[INFO] Apache Hadoop YARN API . SUCCESS [01:35 min]
[INFO] Apache Hadoop YARN Common .. SUCCESS [03:35 min]
[INFO] Apache Hadoop YARN Server .. SUCCESS [  0.556 s]
[INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 49.960 s]
[INFO] Apache Hadoop YARN NodeManager . FAILURE [11:54 min]
[INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 21.843 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [03:13 min]
[INFO] Apache Hadoop YARN ResourceManager . SUCCESS [35:46 min]
[INFO] Apache Hadoop YARN Server Tests  SKIPPED
[INFO] Apache Hadoop YARN Client .. SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage . SKIPPED
[INFO] Apache Hadoop YARN Applications  SUCCESS [  0.485 s]
[INFO] Apache Hadoop YARN DistributedShell  SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SKIPPED
[INFO] Apache Hadoop YARN Site  SUCCESS [  0.429 s]
[INFO] Apache Hadoop YARN Registry  SUCCESS [ 53.156 s]
[INFO] Apache Hadoop YARN Project . SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 58:18 min
[INFO] Finished at: 2016-06-10T19:56:54+00:00
[INFO] Final Memory: 142M/3842M
[INFO] 
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-server-nodemanager: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Yarn-trunk-Java8/source/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-server-nodemanager
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager.testKillMultipleOpportunisticContainers

Error Message:
ContainerState is not correct (timedout) expected: but 
was:

Stack Trace:
java.lang.AssertionError: ContainerState is not correct (timedout) 
expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at 

Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1568

2016-06-10 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] HADOOP-13213. Small Documentation bug with AuthenticatedURL in

--
[...truncated 65283 lines...]
[...javadoc "Generating" / "Building index" output elided; the generated file paths were stripped by the mail archive...]
52 warnings
[WARNING] Javadoc Warnings
[WARNING] 
:73:
 warning: no description for @throws
[WARNING] * @throws Exception
[WARNING] ^
[WARNING] 
:174:
 warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] 

Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Anu Engineer
I actively work on two branches (Diskbalancer and ozone) and I agree with most 
of what Sangjin said. 
There is an overhead in working with branches; there are both technical costs
and administrative issues which discourage developers from using branches.

I think the biggest issue with branch-based development is the fact that other
developers do not use a branch.
If a small feature appears as a series of commits to “datanode.java”, the
branch-based developer ends up rebasing and paying the price of rebasing many
times. If everyone followed a model of branch + pull request, other branches
would not have to deal with continually rebasing onto trunk commits. If we are
moving to branch-based development, we should probably move to that model for
most development to avoid this tax on the people who actually end up working
in the branches.

I do have a question in my mind though: what is being proposed is that we move
active development to branches if the feature is small or incomplete, while
keeping trunk open for check-ins. One of the biggest reasons why we check in
to trunk and not to branch-2 is because it is a change that will break
backward compatibility. So do we have an expectation of backward compatibility
through the 3.0-alpha series? (I personally vote no, since 3.0 is experimental
at this stage.) But if we decide to support some sort of backward
compatibility, then willy-nilly committing to trunk while still maintaining
the expectation that we can release alphas from 3.0 does not look possible.

And then comes the question: once 3.0 becomes official, where do we check in a
change if it would break something?
So this will lead us back to trunk being the unstable branch and 3.0 being the
new “branch-2”.

One more point: if we are always going to use a branch, then we are looking at
a model similar to a git + pull request model. If that is so, would it make
sense to modify the rules to make these branches easier to merge?
Say, for example, if all commits in a branch have followed review and check-in
policy just like trunk, and commits have been made only after a sign-off from
a committer, would it be possible to merge with a 3-day voting period instead
of 7, or to treat it just like today’s commit to trunk but with 2 people
signing off?

What I am suggesting is reducing the administrative overhead of using a branch
to encourage the use of branching.
Right now it feels like Apache’s process encourages committing directly to
trunk rather than to a branch.

Thanks
Anu


On 6/10/16, 10:50 AM, "sjl...@gmail.com on behalf of Sangjin Lee" 
 wrote:

>Having worked on a major feature in a feature branch, I have some thoughts
>and observations on feature branch development.
>
>IMO feature branch development v. direct commits to trunk in piecemeal is
>really a choice of *granularity*. Do we want a series of fine-grained state
>changes on trunk or fewer coarse-grained chunks of commits on trunk?
>
>This makes me favor a branch-based development model for any "decent-sized"
>features (we'll need to define "decent-sized" of course). Once you have
>coarse-grained changes, it's easier to reason about what made what release
>and in what state. As importantly, it makes it easier to back out a
>complete feature fairly easily if that becomes necessary. My totally
>unscientific suggestion may be that if a feature takes more than a dozen commits
>and longer than a month, we should probably have a bias towards a feature
>branch.
>
>Branch-based development also makes you go faster if your feature is
>larger. I wouldn't do it the other way for timeline service v.2 for example.
>
>That said, feature branches don't come for free. Now the onus is on the
>feature developer to constantly rebase with the trunk to keep it reasonably
>integrated with the trunk. More logistics is involved for the feature
>developer. Another big question is, when a feature branch gets big and it's
>time to merge, would it get as scrutinized as a series of individual
>commits? Since the size of merge can be big, you kind of have to rely on
>those feature committers and those who help them.
>
>In terms of integrating/stabilizing, I don't think branch development
>necessarily makes it harder. It is again granularity. In case of direct
>commits on trunk, you do a lot more fine-grained integrations. In case of
>branch development, you do far fewer coarse-grained integrations via
>rebasing. If more people are doing branch-based development, it makes
>rebasing easier to manage too.
>
>Going back to the related topic of where to release (trunk v. branch-X), I
>think that is more of a proxy of the real question of "how do we maintain
>quality and stability of the trunk?". Even if we release from the trunk, if
>our bar for merging to trunk is low, the quality will not improve
>automatically. So I think we ought to tackle the quality question first.
>
>My 2 cents.
>
>
>On Fri, Jun 10, 2016 at 8:57 AM, Zhe Zhang 

Hadoop-Yarn-trunk-Java8 - Build # 1567 - Still Failing

2016-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1567/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 59642 lines...]
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop YARN . SUCCESS [  6.390 s]
[INFO] Apache Hadoop YARN API . SUCCESS [02:04 min]
[INFO] Apache Hadoop YARN Common .. SUCCESS [03:59 min]
[INFO] Apache Hadoop YARN Server .. SUCCESS [  0.524 s]
[INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 48.061 s]
[INFO] Apache Hadoop YARN NodeManager . SUCCESS [13:04 min]
[INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 21.930 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [03:24 min]
[INFO] Apache Hadoop YARN ResourceManager . FAILURE [34:44 min]
[INFO] Apache Hadoop YARN Server Tests  SKIPPED
[INFO] Apache Hadoop YARN Client .. SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage . SKIPPED
[INFO] Apache Hadoop YARN Applications  SUCCESS [  0.443 s]
[INFO] Apache Hadoop YARN DistributedShell  SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SKIPPED
[INFO] Apache Hadoop YARN Site  SUCCESS [  0.430 s]
[INFO] Apache Hadoop YARN Registry  SUCCESS [ 51.377 s]
[INFO] Apache Hadoop YARN Project . SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 59:28 min
[INFO] Finished at: 2016-06-10T18:56:18+00:00
[INFO] Final Memory: 125M/4142M
[INFO] 
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-server-resourcemanager: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Yarn-trunk-Java8/source/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-server-resourcemanager
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRefreshNodesResourceWithFileSystemBasedConfigurationProvider

Error Message:
expected:<> but was:<>

Stack Trace:
org.junit.ComparisonFailure: expected:<> but 
was:<>
at 

Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1567

2016-06-10 Thread Apache Jenkins Server
See 

Changes:

[wangda] YARN-5208. Run TestAMRMClient TestNMClient TestYarnClient

[wangda] YARN-3426. Add jdiff support to YARN. (vinodkv via wangda)

[jing9] HADOOP-13249. RetryInvocationHandler need wrap InterruptedException in

--
[...truncated 59445 lines...]
[...javadoc "Generating" / "Building index" output elided; the generated file paths were stripped by the mail archive...]
52 warnings
[WARNING] Javadoc Warnings
[WARNING] 
:73:
 warning: no description for @throws
[WARNING] * @throws Exception
[WARNING] ^
[WARNING] 
:174:
 warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] 

Re: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-06-10 Thread Chris Nauroth
Interestingly, that FindBugs warning in hadoop-azure-datalake was not
flagged during pre-commit before I committed HADOOP-12666.  I'm going to
propose that we address it in scope of HADOOP-12875.

--Chris Nauroth




On 6/10/16, 10:30 AM, "Apache Jenkins Server" 
wrote:

>For more details, see
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/
>
>No changes
>
>
>
>
>-1 overall
>
>
>The following subsystems voted -1:
>findbugs unit
>
>
>The following subsystems voted -1 but
>were configured to be filtered/ignored:
>cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
>
>
>The following subsystems are considered long running:
>(runtime bigger than 1h  0m  0s)
>unit
>
>
>Specific tests:
>
>FindBugs :
>
>   module:hadoop-tools/hadoop-azure-datalake
>   int value cast to float and then passed to Math.round in
>org.apache.hadoop.hdfs.web.PrivateAzureDataLakeFileSystem$BatchByteArrayIn
>putStream.getSplitSize(int) At PrivateAzureDataLakeFileSystem.java:and
>then passed to Math.round in
>org.apache.hadoop.hdfs.web.PrivateAzureDataLakeFileSystem$BatchByteArrayIn
>putStream.getSplitSize(int) At PrivateAzureDataLakeFileSystem.java:[line
>925] 
>
>Failed junit tests :
>
>   hadoop.hdfs.server.namenode.TestEditLog
>   hadoop.yarn.server.resourcemanager.TestClientRMTokens
>   hadoop.yarn.server.resourcemanager.TestAMAuthorization
>   hadoop.yarn.server.TestContainerManagerSecurity
>   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
>   hadoop.yarn.client.cli.TestLogsCLI
>   hadoop.yarn.client.api.impl.TestAMRMProxy
>   hadoop.yarn.client.api.impl.TestDistributedScheduling
>   hadoop.yarn.client.TestGetGroups
>   hadoop.mapreduce.tools.TestCLI
>   hadoop.mapred.TestMRCJCFileOutputCommitter
>
>Timed out junit tests :
>
>   org.apache.hadoop.yarn.client.cli.TestYarnCLI
>   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient
>   org.apache.hadoop.yarn.client.api.impl.TestYarnClient
>   org.apache.hadoop.yarn.client.api.impl.TestNMClient
>  
>
>   cc:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/diff-compile-cc-root.txt  [4.0K]
>
>   javac:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/diff-compile-javac-root.txt  [164K]
>
>   checkstyle:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/diff-checkstyle-root.txt  [16M]
>
>   pylint:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/diff-patch-pylint.txt  [16K]
>
>   shellcheck:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/diff-patch-shellcheck.txt  [20K]
>
>   shelldocs:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/diff-patch-shelldocs.txt  [16K]
>
>   whitespace:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/whitespace-eol.txt  [12M]
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/whitespace-tabs.txt  [1.3M]
>
>   findbugs:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/branch-findbugs-hadoop-tools_hadoop-azure-datalake-warnings.html
>[8.0K]
>
>   javadoc:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/diff-javadoc-javadoc-root.txt  [2.3M]
>
>   unit:
>
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [144K]
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-
>yarn-server-resourcemanager.txt  [60K]
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-
>yarn-server-tests.txt  [268K]
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
>[908K]
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-ma
>preduce-client-core.txt  [56K]
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-ma
>preduce-client-jobclient.txt  [92K]
>   
>https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact
>/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-ma
>preduce-client-nativetask.txt  [124K]
>
>Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org
>
>


-

[jira] [Resolved] (YARN-5232) Support for specifying a path for ATS plugin jars

2016-06-10 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu resolved YARN-5232.
-
Resolution: Duplicate

JIRA problem; closing the duplicated issue.

> Support for specifying a path for ATS plugin jars
> -
>
> Key: YARN-5232
> URL: https://issues.apache.org/jira/browse/YARN-5232
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Li Lu
>Assignee: Li Lu
>
> Third-party plugins need to add their jars to ATS. Most of the time, 
> isolation is not needed. However, there needs to be a way to specify the 
> path. For now, the jars on that path can be added to the default classloader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5234) ResourceManager REST API missing descriptions for what's returned when using Fair Scheduler

2016-06-10 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-5234:


 Summary: ResourceManager REST API missing descriptions for what's 
returned when using Fair Scheduler
 Key: YARN-5234
 URL: https://issues.apache.org/jira/browse/YARN-5234
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Reporter: Grant Sohn
Priority: Minor


The Cluster Scheduler API documentation indicates support for the Capacity and 
Fifo schedulers. What's missing is what would be returned when using the Fair 
Scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5232) Support for specifying a path for ATS plugin jars

2016-06-10 Thread Li Lu (JIRA)
Li Lu created YARN-5232:
---

 Summary: Support for specifying a path for ATS plugin jars
 Key: YARN-5232
 URL: https://issues.apache.org/jira/browse/YARN-5232
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Li Lu
Assignee: Li Lu


Third-party plugins need to add their jars to ATS. Most of the time, isolation 
is not needed. However, there needs to be a way to specify the path. For now, 
the jars on that path can be added to the default classloader.
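One possible shape of this, sketched with a plain URLClassLoader; the helper class and its behavior below are illustrative assumptions, not the actual ATS configuration or code:

{code}
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: collect every jar under a configured plugin directory
// and expose it through a classloader parented to the current one.
public class PluginJarLoaderSketch {
  public static ClassLoader fromDirectory(String pluginDir, ClassLoader parent)
      throws Exception {
    List<URL> urls = new ArrayList<URL>();
    File[] files = new File(pluginDir).listFiles();
    if (files != null) {
      for (File f : files) {
        if (f.getName().endsWith(".jar")) {
          urls.add(f.toURI().toURL());
        }
      }
    }
    return new URLClassLoader(urls.toArray(new URL[0]), parent);
  }
}
{code}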



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5233) Support for specifying a path for ATS plugin jars

2016-06-10 Thread Li Lu (JIRA)
Li Lu created YARN-5233:
---

 Summary: Support for specifying a path for ATS plugin jars
 Key: YARN-5233
 URL: https://issues.apache.org/jira/browse/YARN-5233
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.8.0
Reporter: Li Lu
Assignee: Li Lu


Third-party plugins need to add their jars to ATS. Most of the time, isolation 
is not needed. However, there needs to be a way to specify the path. For now, 
the jars on that path can be added to the default classloader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5231) obtaining yarn logs for last 'n' bytes using CLI gives 'java.io.IOException'

2016-06-10 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-5231:


 Summary: obtaining yarn logs for last 'n' bytes using CLI gives 
'java.io.IOException'
 Key: YARN-5231
 URL: https://issues.apache.org/jira/browse/YARN-5231
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Sumana Sathish
Assignee: Xuan Gong
Priority: Blocker


Obtaining logs for the last 'n' bytes gives the following exception:
{code}
yarn logs -applicationId application_1465421211793_0004 -containerId 
container_e07_1465421211793_0004_01_01 -logFiles syslog -size -1000
Exception in thread "main" java.io.IOException: The bytes were skipped are 
different from the caller requested
at 
org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.readContainerLogsForALogType(AggregatedLogFormat.java:838)
at 
org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAContainerLogsForALogType(LogCLIHelpers.java:300)
at 
org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAContainersLogsForALogTypeWithoutNodeId(LogCLIHelpers.java:224)
at 
org.apache.hadoop.yarn.client.cli.LogsCLI.printContainerLogsForFinishedApplicationWithoutNodeId(LogsCLI.java:447)
at 
org.apache.hadoop.yarn.client.cli.LogsCLI.fetchContainerLogs(LogsCLI.java:782)
at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:228)
at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:264)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5230) allowPreemptionFrom flag not mentioned in Hadoop: Fair Scheduler documents

2016-06-10 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-5230:


 Summary: allowPreemptionFrom flag not mentioned in Hadoop: Fair 
Scheduler documents
 Key: YARN-5230
 URL: https://issues.apache.org/jira/browse/YARN-5230
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.9.0
Reporter: Grant Sohn
Priority: Minor


The feature added in https://issues.apache.org/jira/browse/YARN-4462 is not 
documented in the Hadoop: Fair Scheduler documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[Review Request] HADOOP-12687 | Blocker for hadoop2.8 release

2016-06-10 Thread Rohith Sharma K S
Hi Folks

Could anyone review HADOOP-12687? Basically, the patch is going to break RFC
1535.
Does Hadoop need to meet mandatory RFC standards? It would be greatly
appreciated if folks could express their opinions and help in reaching
consensus.

Thanks & Regards
Rohith Sharma K S


Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Sangjin Lee
Having worked on a major feature in a feature branch, I have some thoughts
and observations on feature branch development.

IMO feature branch development v. direct commits to trunk in piecemeal is
really a choice of *granularity*. Do we want a series of fine-grained state
changes on trunk or fewer coarse-grained chunks of commits on trunk?

This makes me favor a branch-based development model for any "decent-sized"
features (we'll need to define "decent-sized" of course). Once you have
coarse-grained changes, it's easier to reason about what made what release
and in what state. As importantly, it makes it easier to back out a
complete feature fairly easily if that becomes necessary. My totally
unscientific suggestion may be that if a feature takes more than a dozen commits
and longer than a month, we should probably have a bias towards a feature
branch.

Branch-based development also makes you go faster if your feature is
larger. I wouldn't do it the other way for timeline service v.2 for example.

That said, feature branches don't come for free. Now the onus is on the
feature developer to constantly rebase with the trunk to keep it reasonably
integrated with the trunk. More logistics is involved for the feature
developer. Another big question is, when a feature branch gets big and it's
time to merge, would it get as scrutinized as a series of individual
commits? Since the size of merge can be big, you kind of have to rely on
those feature committers and those who help them.

In terms of integrating/stabilizing, I don't think branch development
necessarily makes it harder. It is again granularity. In case of direct
commits on trunk, you do a lot more fine-grained integrations. In case of
branch development, you do far fewer coarse-grained integrations via
rebasing. If more people are doing branch-based development, it makes
rebasing easier to manage too.

Going back to the related topic of where to release (trunk v. branch-X), I
think that is more of a proxy of the real question of "how do we maintain
quality and stability of the trunk?". Even if we release from the trunk, if
our bar for merging to trunk is low, the quality will not improve
automatically. So I think we ought to tackle the quality question first.

My 2 cents.


On Fri, Jun 10, 2016 at 8:57 AM, Zhe Zhang  wrote:

> Thanks for the notes Andrew, Junping, Karthik.
>
> Here are some of my understandings:
>
> - Trunk is the "latest and greatest" of Hadoop. If a user starts using
> Hadoop today, without legacy workloads, trunk is what he/she should use.
> - Therefore, each commit to trunk should be transactional -- atomic,
> consistent, isolated (from other uncommitted patches); I'm not so sure
> about durability, Hadoop might be gone in 50 years :). As a committer, I
> should be able to look at a patch and determine whether it's a
> self-contained improvement of trunk, without looking at other uncommitted
> patches.
> - Some comments inline:
>
> On Fri, Jun 10, 2016 at 6:56 AM Junping Du  wrote:
>
> > Comparing with advantages, I believe the disadvantages of shipping any
> > releases directly from trunk are more obvious and significant:
> > - A lot of commits (incompatible, risky, uncompleted features, etc.) have
> > to wait to be committed to trunk or be put into a separate branch, which
> > could delay feature development progress, as an additional vote process
> > gets involved even when the feature is simple and harmless.
> >
> Thanks Junping, those are valid concerns. I think we should clearly
> separate incompatible changes from uncompleted / half-done work in this
> discussion. Whether people should commit incompatible changes to trunk is a
> much more tricky question (related to trunk-incompat etc.). But per my
> comment above, IMHO, *not committing uncompleted work to trunk* should be a
> much easier principle to agree upon.
>
>
> > - For a small feature with only 1 or 2 commits, needing three +1s from
> > PMC members will raise the bar considerably for contributors who are just
> > starting to contribute Hadoop features and do not yet have that level of
> > support.
> >
> Development overhead is another valid concern. I think our rule of thumb
> should be that small-to-medium new features are proposed as a single
> JIRA/patch (as we recently did for HADOOP-12666). If the complexity goes
> beyond a single JIRA/patch, use a feature branch.
>
>
> >
> > Given these concerns, I am open to other options, like those proposed by
> > Vinod or Chris, rather than releasing anything directly from trunk.
> >
> > - This point doesn't necessarily need to be resolved now though, since
> > again we're still doing alphas.
> > No. I think we have to settle this first. Without a commonly agreed and
> > transparent release process and branches in the community, any released
> > (alpha, beta) bits are only a private release, not an official Apache
> > Hadoop release (even an alpha).
> >
> >
> > Thanks,
> >
> > Junping
> > 
> 

Hadoop-Yarn-trunk-Java8 - Build # 1566 - Still Failing

2016-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1566/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 31339 lines...]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-server-resourcemanager
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
ERROR: Could not install LATEST1_8_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: Could not install MAVEN_3_3_3_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testQueueMetricsOnRMRestart

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at 
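
For anyone triaging this: "expected:<0> but was:<1>" is the standard message from a plain JUnit equality assertion (org.junit.Assert.assertEquals). A minimal, hypothetical sketch of the kind of metric check involved; class, method, and metric names are illustrative, not the actual TestRMRestart code:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class QueueMetricsRestartSketch {

  // Hypothetical stand-in for reading a queue metric from the restarted RM;
  // here it simply simulates one leftover application being reported.
  private int appsPendingAfterRestart() {
    return 1;
  }

  @Test
  public void queueMetricGoesBackToZeroAfterRestart() {
    // Fails with the same shape as the report:
    // java.lang.AssertionError: expected:<0> but was:<1>
    assertEquals(0, appsPendingAfterRestart());
  }
}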

Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1566

2016-06-10 Thread Apache Jenkins Server
See 

Changes:

[aw] HDFS-7987. Allow files / directories to be moved (Ravi Prakash via aw)

--
[...truncated 31142 lines...]
[WARNING] ^
[WARNING] :128: warning: no description for @throws
[WARNING] * @throws PathNotFoundException
[WARNING] ^
[WARNING] :129: warning: no description for @throws
[WARNING] * @throws InvalidPathnameException
[WARNING] ^
[WARNING] :130: warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] :136: warning: no description for @throws
[WARNING] * @throws Exception
[WARNING] ^
[WARNING] :158: warning: no description for @throws
[WARNING] * @throws Exception
[WARNING] ^
[WARNING] :295: warning: no @throws for java.io.IOException
[WARNING] protected String createFullPath(String path) throws IOException {
[WARNING] ^
[WARNING] :370: warning: no @param for acls
[WARNING] protected IOException operationFailure(String path,
[WARNING] ^
[WARNING] :424: warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] :427: warning: no @param for mode
[WARNING] public boolean maybeCreate(String path,
[WARNING] ^
[WARNING] :463: warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] :509: warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] :511: warning: no @return
[WARNING] public String zkPathMustExist(String path) throws IOException {
[WARNING] ^
[WARNING] :524: warning: no @return
[WARNING] public boolean zkMkPath(String path,
[WARNING] ^
[WARNING] :580: warning: no description for @param
[WARNING] * @param acls
[WARNING] ^
[WARNING] :581: warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] :583: warning: no @param for mode
[WARNING] 
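
These warnings clear once every javadoc tag carries a description (or the missing @param/@return/@throws tag is added). A small illustrative sketch, reusing the zkPathMustExist signature from the warning above; the descriptions and the method body are assumptions, not the real registry code:

import java.io.IOException;

public class RegistryJavadocSketch {
  /**
   * Checks that the given registry path exists.
   *
   * @param path the znode path to check
   * @return the same path, to allow call chaining
   * @throws IOException if the path does not exist or cannot be checked
   */
  public String zkPathMustExist(String path) throws IOException {
    if (path == null || path.isEmpty()) {
      throw new IOException("empty path");
    }
    return path;
  }
}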

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-06-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/

No changes




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-tools/hadoop-azure-datalake 
   Int value cast to float and then passed to Math.round in 
org.apache.hadoop.hdfs.web.PrivateAzureDataLakeFileSystem$BatchByteArrayInputStream.getSplitSize(int)
 At PrivateAzureDataLakeFileSystem.java:[line 925] 
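
Some context on this FindBugs pattern: casting an int to float before Math.round is at best a no-op and silently loses precision once the value exceeds 2^24. A small standalone illustration; the variable names and the double-based alternative are assumptions, not the actual getSplitSize code:

public class IntToFloatRoundSketch {
  public static void main(String[] args) {
    int size = 20_000_001;
    // The flagged pattern: the int->float cast already loses precision above 2^24,
    // so Math.round just hands back the imprecise value (20000000 here).
    int flagged = Math.round((float) size);
    // If a rounded fraction was intended, compute it in double instead.
    long safer = Math.round((double) size / 3.0);
    System.out.println(flagged + " " + safer);
  }
}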

Failed junit tests :

   hadoop.hdfs.server.namenode.TestEditLog 
   hadoop.yarn.server.resourcemanager.TestClientRMTokens 
   hadoop.yarn.server.resourcemanager.TestAMAuthorization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.client.api.impl.TestDistributedScheduling 
   hadoop.yarn.client.TestGetGroups 
   hadoop.mapreduce.tools.TestCLI 
   hadoop.mapred.TestMRCJCFileOutputCommitter 

Timed out junit tests :

   org.apache.hadoop.yarn.client.cli.TestYarnCLI 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClient 
   org.apache.hadoop.yarn.client.api.impl.TestNMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/diff-compile-javac-root.txt
  [164K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/branch-findbugs-hadoop-tools_hadoop-azure-datalake-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/diff-javadoc-javadoc-root.txt
  [2.3M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [908K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [92K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/58/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org

Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Zhe Zhang
Thanks for the notes Andrew, Junping, Karthik.

Here are some of my understandings:

- Trunk is the "latest and greatest" of Hadoop. If a user starts using
Hadoop today, without legacy workloads, trunk is what he/she should use.
- Therefore, each commit to trunk should be transactional -- atomic,
consistent, isolated (from other uncommitted patches); I'm not so sure
about durability, Hadoop might be gone in 50 years :). As a committer, I
should be able to look at a patch and determine whether it's a
self-contained improvement of trunk, without looking at other uncommitted
patches.
- Some comments inline:

On Fri, Jun 10, 2016 at 6:56 AM Junping Du  wrote:

> Compared with the advantages, I believe the disadvantages of shipping any
> releases directly from trunk are more obvious and significant:
> - A lot of commits (incompatible, risky, incomplete features, etc.) have
> to wait to be committed to trunk or be put into a separate branch, which
> could delay feature development progress as an additional vote process gets
> involved even when the feature is simple and harmless.
>
Thanks Junping, those are valid concerns. I think we should clearly
separate incompatible changes from uncompleted / half-done work in this
discussion. Whether people should commit incompatible changes to trunk is a
much trickier question (related to trunk-incompat etc.). But per my
comment above, IMHO, *not committing uncompleted work to trunk* should be a
much easier principle to agree upon.


> - For a small feature with only 1 or 2 commits, needing three +1s from PMC
> members raises the bar significantly for contributors who are just starting
> to contribute Hadoop features and do not yet have such support.
>
Development overhead is another valid concern. I think our rule of thumb
should be that small-to-medium new features are proposed as a single
JIRA/patch (as we recently did for HADOOP-12666). If the complexity goes
beyond a single JIRA/patch, use a feature branch.


>
> Given these concerns, I am open to other options, like those proposed by
> Vinod or Chris, rather than releasing anything directly from trunk.
>
> - This point doesn't necessarily need to be resolved now though, since
> again we're still doing alphas.
> No. I think we have to settle this first. Without a commonly agreed and
> transparent release process and branches in the community, any release
> (alpha, beta) bits can only be called a private release, not an official
> Apache Hadoop release (even an alpha).
>
>
> Thanks,
>
> Junping
> 
> From: Karthik Kambatla 
> Sent: Friday, June 10, 2016 7:49 AM
> To: Andrew Wang
> Cc: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
> Subject: Re: [DISCUSS] Increased use of feature branches
>
> Thanks for restarting this thread Andrew. I really hope we can get this
> across to a VOTE so it is clear.
>
> I see a few advantages to shipping from trunk:
>
>- The lack of need for one additional backport each time.
>- Avoiding feature rot in trunk
>
> Instead of creating branch-3, I recommend creating branch-3.x so we can
> continue doing 3.x releases off branch-3.x even after we move trunk to 4.x (I
> said it :))
>
> On Thu, Jun 9, 2016 at 11:12 PM, Andrew Wang 
> wrote:
>
> > Hi all,
> >
> > On a separate thread, a question was raised about 3.x branching and use
> of
> > feature branches going forward.
> >
> > We discussed this previously on the "Looking to a Hadoop 3 release"
> thread
> > that has spanned the years, with Vinod making this proposal (building on
> > ideas from others who also commented in the email thread):
> >
> >
> >
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201604.mbox/browser
> >
> > Pasting here for ease:
> >
> > On an unrelated note, offline I was pitching to a bunch of
> > contributors another idea to deal
> > with rotting trunk post 3.x: *Make 3.x releases off of trunk directly*.
> >
> > What this gains us is that
> >  - Trunk is always nearly stable or nearly ready for releases
> >  - We no longer have some code lying around in some branch (today’s
> > trunk) that is not releasable
> > because it gets mixed with other undesirable and incompatible changes.
> >  - This needs to be coupled with more discipline on individual
> > features - medium to large
> > features are always worked upon in branches and get merged into trunk
> > (and a nearing release!)
> > when they are ready
> >  - All incompatible changes go into some sort of a trunk-incompat
> > branch and stay there till
> > we accumulate enough of those to warrant another major release.
> >
> > Regarding "trunk-incompat", since we're still in the alpha stage for
> 3.0.0,
> > there's no need for this branch yet. This aspect of Vinod's proposal was
> > still under a bit of discussion; Chris Douglas thought we should cut a
> > branch-3 for the first 3.0.0 beta, which aligns with my original
> 

Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Karthik Kambatla
Inline.

On Fri, Jun 10, 2016 at 6:56 AM, Junping Du  wrote:

> Compared with the advantages, I believe the disadvantages of shipping any
> releases directly from trunk are more obvious and significant:
> - A lot of commits (incompatible, risky, incomplete features, etc.) have
> to wait to be committed to trunk or be put into a separate branch, which
> could delay feature development progress as an additional vote process gets
> involved even when the feature is simple and harmless.
>

Including these sorts of commits in trunk is a major pain.

One example from a recent mistake I made:
YARN-2877 and YARN-1011 had some common changes. Instead of putting them in
a separate branch, I committed these common changes to trunk because, well,
we don't release from trunk and what could go wrong? After a few days, other
contributors and committers started feeling annoyed about having to submit
two different patches for trunk and branch-2. This inconvenience led to
those patches being pulled into branch-2 even though they were not ready
for inclusion in branch-2 or a 2.x release.

I feel the major friction with feature branches comes from only some
features using them. If everyone uses feature branches and we have better
processes around quantifying the stability of a feature branch, feature
branches should make for a smoother experience for everyone.

It is not uncommon for features to get merged into trunk before being ready
with promises of follow-up work. While that might very well be the intent
of contributors, other work items come up and things get sidelined. How
often have we seen features without HA and security?


>
> - These commits, left in separate branches, are isolated and have a greater
> chance of conflicting with each other, and more bugs could be introduced due
> to conflicts and/or fewer eyes watching/blessing isolated branches.
>

Partially agree. There is a tradeoff here: if we keep putting them into
trunk, they (1) destabilize trunk, and (2) conflict with other bug fixes
and smaller improvements.


>
> - More unnecessary arguments/debates will happen about whether some commits
> should land on trunk or a separate branch, just like what we have seen recently.
>

Again, clearly defining the requirements for merging into trunk will make
this easier. How is this different from what we do today for branch-2? If
we still have debates, they are probably needed; not having them today is
actually a concern.


>
> - Because the number of branches will increase massively, more community
> effort will be spent on reviewing & voting on branch merges, which means less
> effort will be spent on reviewing other commits, given that our review
> bandwidth is already quite limited.
>

Yes and no. Strictly using feature branches will serialize features.
Integrating with other features is a one-time, albeit more involved,
process instead of multiple rebases/resolutions, each somewhat involved.


>
> - For a small feature with only 1 or 2 commits, needing three +1s from PMC
> members raises the bar significantly for contributors who are just starting
> to contribute Hadoop features and do not yet have such support.
>

If a feature/improvement is not supported by 3 committers (not PMC
members), it is probably worth looking at why. Maybe this feature should
not be included at all?

I am open to changing the requirements for a merge. What do you think of
one +1 (thorough review) and two +0s (high-level review)?

If the concern is finding enough committers, I would like for the PMC to
consider voting in more committers and increasing bandwidth.


>
> Given these concerns, I am open to other options, like those proposed by
> Vinod or Chris, rather than releasing anything directly from trunk.
>

I actually thought this was Vinod's proposal. My understanding is that Andrew
is resurfacing it so we can finalize things.


>
> - This point doesn't necessarily need to be resolved now though, since
> again we're still doing alphas.
> No. I think we have to settle this first. Without a commonly agreed and
> transparent release process and branches in the community, any release
> (alpha, beta) bits can only be called a private release, not an official
> Apache Hadoop release (even an alpha).
>
>
I am absolutely with Junping here. Changing this process primarily requires
a change in our mental model. I think it is pretty important that we decide
on one approach preferably before doing an alpha release.

To clarify: our current approach (trunk and branch-2) has been working
okay. The only issue I see is the way we take merging into trunk
lightly. If we have well-defined requirements for merging to trunk and take
those seriously, I am comfortable with using that approach for 3.x. The new
proposal enforces these requirements, and hence I like it more.


>
> Thanks,
>
> Junping
> 
> From: Karthik Kambatla 
> Sent: Friday, June 10, 2016 7:49 AM
> To: Andrew Wang
> Cc: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> 

Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Junping Du
Compared with the advantages, I believe the disadvantages of shipping any
releases directly from trunk are more obvious and significant:
- A lot of commits (incompatible, risky, incomplete features, etc.) have to
wait to be committed to trunk or be put into a separate branch, which could
delay feature development progress as an additional vote process gets involved
even when the feature is simple and harmless.

- These commits, left in separate branches, are isolated and have a greater
chance of conflicting with each other, and more bugs could be introduced due to
conflicts and/or fewer eyes watching/blessing isolated branches.

- More unnecessary arguments/debates will happen about whether some commits
should land on trunk or a separate branch, just like what we have seen recently.

- Because the number of branches will increase massively, more community effort
will be spent on reviewing & voting on branch merges, which means less effort
will be spent on reviewing other commits, given that our review bandwidth is
already quite limited.

- For a small feature with only 1 or 2 commits, needing three +1s from PMC
members raises the bar significantly for contributors who are just starting to
contribute Hadoop features and do not yet have such support.

Given these concerns, I am open to other options, like those proposed by Vinod
or Chris, rather than releasing anything directly from trunk.

- This point doesn't necessarily need to be resolved now though, since again
we're still doing alphas.
No. I think we have to settle this first. Without a commonly agreed and
transparent release process and branches in the community, any release (alpha,
beta) bits can only be called a private release, not an official Apache Hadoop
release (even an alpha).


Thanks,

Junping

From: Karthik Kambatla 
Sent: Friday, June 10, 2016 7:49 AM
To: Andrew Wang
Cc: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
Subject: Re: [DISCUSS] Increased use of feature branches

Thanks for restarting this thread Andrew. I really hope we can get this
across to a VOTE so it is clear.

I see a few advantages to shipping from trunk:

   - The lack of need for one additional backport each time.
   - Avoiding feature rot in trunk

Instead of creating branch-3, I recommend creating branch-3.x so we can
continue doing 3.x releases off branch-3.x even after we move trunk to 4.x (I
said it :))

On Thu, Jun 9, 2016 at 11:12 PM, Andrew Wang 
wrote:

> Hi all,
>
> On a separate thread, a question was raised about 3.x branching and use of
> feature branches going forward.
>
> We discussed this previously on the "Looking to a Hadoop 3 release" thread
> that has spanned the years, with Vinod making this proposal (building on
> ideas from others who also commented in the email thread):
>
>
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201604.mbox/browser
>
> Pasting here for ease:
>
> On an unrelated note, offline I was pitching to a bunch of
> contributors another idea to deal
> with rotting trunk post 3.x: *Make 3.x releases off of trunk directly*.
>
> What this gains us is that
>  - Trunk is always nearly stable or nearly ready for releases
>  - We no longer have some code lying around in some branch (today’s
> trunk) that is not releasable
> because it gets mixed with other undesirable and incompatible changes.
>  - This needs to be coupled with more discipline on individual
> features - medium to large
> features are always worked upon in branches and get merged into trunk
> (and a nearing release!)
> when they are ready
>  - All incompatible changes go into some sort of a trunk-incompat
> branch and stay there till
> we accumulate enough of those to warrant another major release.
>
> Regarding "trunk-incompat", since we're still in the alpha stage for 3.0.0,
> there's no need for this branch yet. This aspect of Vinod's proposal was
> still under a bit of discussion; Chris Douglas thought we should cut a
> branch-3 for the first 3.0.0 beta, which aligns with my original thinking.
> This point doesn't necessarily need to be resolved now though, since again
> we're still doing alphas.
>
> What we should get consensus on is the goal of keeping trunk stable, and
> achieving that by doing more development on feature branches and being
> judicious about merges. My sense from the Hadoop 3 email thread (and the
> more recent one on the async API) is that people are generally in favor of
> this.
>
> We're just about ready to do the first 3.0.0 alpha, so would greatly
> appreciate everyone's timely response in this matter.
>
> Thanks,
> Andrew
>

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Hadoop-Yarn-trunk-Java8 - Build # 1565 - Still Failing

2016-06-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1565/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 34210 lines...]
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop YARN . SUCCESS [  5.393 s]
[INFO] Apache Hadoop YARN API . SUCCESS [01:28 min]
[INFO] Apache Hadoop YARN Common .. SUCCESS [03:00 min]
[INFO] Apache Hadoop YARN Server .. SUCCESS [  1.057 s]
[INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 35.972 s]
[INFO] Apache Hadoop YARN NodeManager . FAILURE [12:08 min]
[INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 20.276 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [03:16 min]
[INFO] Apache Hadoop YARN ResourceManager . SUCCESS [37:06 min]
[INFO] Apache Hadoop YARN Server Tests  SKIPPED
[INFO] Apache Hadoop YARN Client .. SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage . SKIPPED
[INFO] Apache Hadoop YARN Applications  SUCCESS [  0.408 s]
[INFO] Apache Hadoop YARN DistributedShell  SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SKIPPED
[INFO] Apache Hadoop YARN Site  SUCCESS [  0.409 s]
[INFO] Apache Hadoop YARN Registry  SUCCESS [ 54.573 s]
[INFO] Apache Hadoop YARN Project . SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 59:00 min
[INFO] Finished at: 2016-06-10T11:36:36+00:00
[INFO] Final Memory: 136M/4130M
[INFO] 
[WARNING] The requested profile "docs" could not be activated because it does 
not exist.
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-server-nodemanager: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Yarn-trunk-Java8/source/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-server-nodemanager
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager.testKillMultipleOpportunisticContainers

Error Message:
ContainerState is not correct (timedout) expected: but 
was:

Stack Trace:
java.lang.AssertionError: ContainerState is not correct (timedout) 
expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at 
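
The "(timedout)" marker in that message typically comes from a test helper that polls for a target state until a deadline and then asserts. A hypothetical sketch of that pattern, not the actual TestQueuingContainerManager code:

import static org.junit.Assert.assertEquals;

public class ContainerStateWaitSketch {
  enum ContainerState { RUNNING, DONE }

  // Stand-in for querying the NodeManager for the container's current state.
  private ContainerState currentState() {
    return ContainerState.RUNNING;
  }

  // Poll until the expected state is reached or the deadline passes, then assert.
  void waitForStateAndAssert(ContainerState expected, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    ContainerState state = currentState();
    while (state != expected && System.currentTimeMillis() < deadline) {
      Thread.sleep(100);
      state = currentState();
    }
    boolean timedOut = state != expected;
    // On timeout this produces the same failure shape as the report:
    // "ContainerState is not correct (timedout) expected:<...> but was:<...>"
    assertEquals("ContainerState is not correct" + (timedOut ? " (timedout)" : ""),
        expected, state);
  }
}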

Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1565

2016-06-10 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] MAPREDUCE-6741. Refactor UncompressedSplitLineReader.fillBuffer().

--
[...truncated 34013 lines...]
Generating javadoc pages (generated file names elided in the archive)...
Building index for all the packages and classes...
Building index for all classes...
52 warnings
[WARNING] Javadoc Warnings
[WARNING] :73: warning: no description for @throws
[WARNING] * @throws Exception
[WARNING] ^
[WARNING] :174: warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] :191: warning: no description for @throws
[WARNING] * @throws Exception
[WARNING] ^
[WARNING] 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-06-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/

[Jun 9, 2016 4:28:49 PM] (stevel) HADOOP-13237: s3a initialization against 
public bucket fails if caller
[Jun 9, 2016 7:30:58 PM] (vinodkv) YARN-5191. Renamed the newly added 
“download=true” option for getting
[Jun 9, 2016 8:00:47 PM] (stevel) HADOOP-12537 S3A to support Amazon STS 
temporary credentials.
[Jun 9, 2016 8:49:52 PM] (wang) HADOOP-13175. Remove hadoop-ant from 
hadoop-tools. Contributed by Chris
[Jun 9, 2016 8:54:14 PM] (wang) HADOOP-12893. Verify LICENSE.txt and 
NOTICE.txt. Contributed by Xiao
[Jun 9, 2016 9:33:31 PM] (cnauroth) HADOOP-12666. Support Microsoft Azure Data 
Lake - as a file system in




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-tools/hadoop-azure-datalake 
   Int value cast to float and then passed to Math.round in 
org.apache.hadoop.hdfs.web.PrivateAzureDataLakeFileSystem$BatchByteArrayInputStream.getSplitSize(int)
 At PrivateAzureDataLakeFileSystem.java:[line 925] 

Failed junit tests :

   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.yarn.server.resourcemanager.TestClientRMTokens 
   hadoop.yarn.server.resourcemanager.TestAMAuthorization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.TestGetGroups 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.client.api.impl.TestDistributedScheduling 
   hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt 

Timed out junit tests :

   org.apache.hadoop.yarn.client.cli.TestYarnCLI 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClient 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   org.apache.hadoop.yarn.client.api.impl.TestNMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/diff-compile-javac-root.txt
  [164K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/branch-findbugs-hadoop-tools_hadoop-azure-datalake-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/diff-javadoc-javadoc-root.txt
  [2.3M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [912K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/57/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




Re: [DISCUSS] Increased use of feature branches

2016-06-10 Thread Karthik Kambatla
Thanks for restarting this thread Andrew. I really hope we can get this
across to a VOTE so it is clear.

I see a few advantages to shipping from trunk:

   - The lack of need for one additional backport each time.
   - Avoiding feature rot in trunk

Instead of creating branch-3, I recommend creating branch-3.x so we can
continue doing 3.x releases off branch-3.x even after we move trunk to 4.x (I
said it :))

On Thu, Jun 9, 2016 at 11:12 PM, Andrew Wang 
wrote:

> Hi all,
>
> On a separate thread, a question was raised about 3.x branching and use of
> feature branches going forward.
>
> We discussed this previously on the "Looking to a Hadoop 3 release" thread
> that has spanned the years, with Vinod making this proposal (building on
> ideas from others who also commented in the email thread):
>
>
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201604.mbox/browser
>
> Pasting here for ease:
>
> On an unrelated note, offline I was pitching to a bunch of
> contributors another idea to deal
> with rotting trunk post 3.x: *Make 3.x releases off of trunk directly*.
>
> What this gains us is that
>  - Trunk is always nearly stable or nearly ready for releases
>  - We no longer have some code lying around in some branch (today’s
> trunk) that is not releasable
> because it gets mixed with other undesirable and incompatible changes.
>  - This needs to be coupled with more discipline on individual
> features - medium to large
> features are always worked upon in branches and get merged into trunk
> (and a nearing release!)
> when they are ready
>  - All incompatible changes go into some sort of a trunk-incompat
> branch and stay there till
> we accumulate enough of those to warrant another major release.
>
> Regarding "trunk-incompat", since we're still in the alpha stage for 3.0.0,
> there's no need for this branch yet. This aspect of Vinod's proposal was
> still under a bit of discussion; Chris Douglas thought we should cut a
> branch-3 for the first 3.0.0 beta, which aligns with my original thinking.
> This point doesn't necessarily need to be resolved now though, since again
> we're still doing alphas.
>
> What we should get consensus on is the goal of keeping trunk stable, and
> achieving that by doing more development on feature branches and being
> judicious about merges. My sense from the Hadoop 3 email thread (and the
> more recent one on the async API) is that people are generally in favor of
> this.
>
> We're just about ready to do the first 3.0.0 alpha, so would greatly
> appreciate everyone's timely response in this matter.
>
> Thanks,
> Andrew
>


[DISCUSS] Increased use of feature branches

2016-06-10 Thread Andrew Wang
Hi all,

On a separate thread, a question was raised about 3.x branching and use of
feature branches going forward.

We discussed this previously on the "Looking to a Hadoop 3 release" thread
that has spanned the years, with Vinod making this proposal (building on
ideas from others who also commented in the email thread):

http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201604.mbox/browser

Pasting here for ease:

On an unrelated note, offline I was pitching to a bunch of
contributors another idea to deal
with rotting trunk post 3.x: *Make 3.x releases off of trunk directly*.

What this gains us is that
 - Trunk is always nearly stable or nearly ready for releases
 - We no longer have some code lying around in some branch (today’s
trunk) that is not releasable
because it gets mixed with other undesirable and incompatible changes.
 - This needs to be coupled with more discipline on individual
features - medium to large
features are always worked upon in branches and get merged into trunk
(and a nearing release!)
when they are ready
 - All incompatible changes go into some sort of a trunk-incompat
branch and stay there till
we accumulate enough of those to warrant another major release.

Regarding "trunk-incompat", since we're still in the alpha stage for 3.0.0,
there's no need for this branch yet. This aspect of Vinod's proposal was
still under a bit of discussion; Chris Douglas thought we should cut a
branch-3 for the first 3.0.0 beta, which aligns with my original thinking.
This point doesn't necessarily need to be resolved now though, since again
we're still doing alphas.

What we should get consensus on is the goal of keeping trunk stable, and
achieving that by doing more development on feature branches and being
judicious about merges. My sense from the Hadoop 3 email thread (and the
more recent one on the async API) is that people are generally in favor of
this.

We're just about ready to do the first 3.0.0 alpha, so would greatly
appreciate everyone's timely response in this matter.

Thanks,
Andrew