Cannot create release artifacts for branch-2.8

2016-06-06 Thread Wangda Tan
Hi Hadoop Devs,

As you know, we've been pushing the 2.8.0 release recently, and there are a couple of
issues that block creating release artifacts from source code.

I tried the following approaches:
1) Run build through Hadoop Jenkins Job:
https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/
2) Run dev-support/create-release.sh

From what I can see, there are two issues causing the problem:
1) https://issues.apache.org/jira/browse/HADOOP-12022 removed
releasenotes.html
2) https://issues.apache.org/jira/browse/HADOOP-11792 removed all
CHANGES.txt

I tried reverting HADOOP-12022/HADOOP-11792 locally in branch-2.8;
create-release.sh can then run through and generate docs/artifacts correctly
(at least the layout looks correct; I haven't verified the generated bits).

To make sure releases are not blocked, we have a couple of options:
a. Fix HADOOP-12892 and related issues, which requires backporting a couple
of commits that are marked incompatible.
b. Revert both commits and manually fix CHANGES.txt.

Any help or suggestions are welcome.

Thanks,
Wangda


Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1545

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[Arun Suresh] YARN-5185. StageAllocaterGreedyRLE: Fix NPE in corner case. (Carlo

[Arun Suresh] YARN-4525. Fix bug in 
RLESparseResourceAllocation.getRangeOverlapping().

--
[...truncated 34177 lines...]
Generating ... [javadoc target listings truncated]
Building index for all the packages and classes...
Building index for all classes...
52 warnings
[WARNING] Javadoc Warnings
[WARNING] 
:73:
 warning: no description for @throws
[WARNING] * @throws Exception
[WARNING] ^
[WARNING] 
:174:
 warning: no description for @throws
[WARNING] * @throws IOException
[WARNING] ^
[WARNING] 
:191:
 warning: no description for @throws
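These warnings come from `@throws` tags that name an exception but give no description; adding a short clause after the exception type silences them. A minimal sketch (the class and method here are invented for illustration, not from the Hadoop source):

```java
/** Minimal example of a fully described {@code @throws} tag. */
public class ThrowsDocExample {
    /**
     * Parses a non-negative integer from the given string.
     *
     * @param s the string to parse
     * @return the parsed non-negative value
     * @throws NumberFormatException if {@code s} is not a valid non-negative integer
     */
    public static int parseNonNegative(String s) {
        int v = Integer.parseInt(s);
        if (v < 0) {
            throw new NumberFormatException("negative value: " + s);
        }
        return v;
    }
}
```

A bare `@throws Exception` with no trailing text is exactly what triggers the `no description for @throws` warning above.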

Hadoop-Yarn-trunk-Java8 - Build # 1545 - Still Failing

2016-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1545/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 34374 lines...]
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop YARN . SUCCESS [  3.508 s]
[INFO] Apache Hadoop YARN API . SUCCESS [01:27 min]
[INFO] Apache Hadoop YARN Common .. SUCCESS [02:59 min]
[INFO] Apache Hadoop YARN Server .. SUCCESS [  0.072 s]
[INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 36.210 s]
[INFO] Apache Hadoop YARN NodeManager . SUCCESS [11:13 min]
[INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 19.210 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [03:11 min]
[INFO] Apache Hadoop YARN ResourceManager . SUCCESS [36:08 min]
[INFO] Apache Hadoop YARN Server Tests  SUCCESS [02:07 min]
[INFO] Apache Hadoop YARN Client .. FAILURE [07:11 min]
[INFO] Apache Hadoop YARN SharedCacheManager .. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage . SKIPPED
[INFO] Apache Hadoop YARN Applications  SUCCESS [  0.038 s]
[INFO] Apache Hadoop YARN DistributedShell  SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SKIPPED
[INFO] Apache Hadoop YARN Site  SUCCESS [  0.056 s]
[INFO] Apache Hadoop YARN Registry  SUCCESS [ 50.125 s]
[INFO] Apache Hadoop YARN Project . SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:06 h
[INFO] Finished at: 2016-06-07T05:43:44+00:00
[INFO] Final Memory: 148M/4769M
[INFO] 
[WARNING] The requested profile "docs" could not be activated because it does 
not exist.
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-client: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Yarn-trunk-Java8/source/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-client
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.yarn.client.cli.TestLogsCLI.testFetchApplictionLogs

Error Message:
expected:<[Hello]> but was:<[=]>

Stack Trace:
org.junit.ComparisonFailure: expected:<[Hello]> but was:<[=]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
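The [ERROR] block above points at the surefire-reports directory for per-class results; a quick way to locate the failing test classes is to scan those XML files for failure elements. A sketch (the `TEST-<class>.xml` naming follows surefire's convention; the scanning code itself is illustrative):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

/** Scans a surefire-reports directory for test classes with failures. */
public class FailingReportScan {
    /** Returns the report files that contain a failure or error element. */
    public static List<Path> failingReports(Path reportsDir) throws IOException {
        List<Path> failing = new ArrayList<>();
        // Surefire writes one TEST-<class>.xml file per test class.
        try (DirectoryStream<Path> xmls = Files.newDirectoryStream(reportsDir, "TEST-*.xml")) {
            for (Path report : xmls) {
                String xml = Files.readString(report);
                if (xml.contains("<failure") || xml.contains("<error")) {
                    failing.add(report);
                }
            }
        }
        return failing;
    }
}
```

This avoids re-reading the 34,000-line console log just to find which classes failed.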

[jira] [Created] (YARN-5205) yarn logs for live applications does not provide log files which may have already been aggregated

2016-06-06 Thread Siddharth Seth (JIRA)
Siddharth Seth created YARN-5205:


 Summary: yarn logs for live applications does not provide log 
files which may have already been aggregated
 Key: YARN-5205
 URL: https://issues.apache.org/jira/browse/YARN-5205
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.9.0
Reporter: Siddharth Seth


With periodic aggregation enabled, logs which have been partially aggregated 
are not always displayed by the yarn logs command.

If a file exists in the log dir for a container, all previously aggregated 
files with the same name, along with the current file, will be part of the yarn 
logs output. Files which have been previously aggregated, but for which a file 
with the same name no longer exists in the container log dir, do not show up in 
the output.

After the app completes, all logs are available.

cc [~xgong]
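The selection behavior described above can be sketched as follows (a hypothetical simplification, not the actual YARN code): only names still present in the container's local log dir are considered, so files that were aggregated and then removed locally never appear for a live app.

```java
import java.util.Set;
import java.util.TreeSet;

/** Hypothetical simplification of the reported log-selection behavior. */
public class LiveLogSelectionSketch {
    /**
     * Returns the log names shown for a live application. Only names present
     * in the local container log dir are examined; for each, any previously
     * aggregated copy with the same name is included too. Aggregated-only
     * names (removed locally after partial aggregation) are silently dropped.
     */
    public static Set<String> visibleLogs(Set<String> localDir, Set<String> aggregated) {
        Set<String> shown = new TreeSet<>();
        for (String name : localDir) {
            shown.add(name);
            if (aggregated.contains(name)) {
                shown.add(name + ".aggregated");
            }
        }
        return shown; // aggregated names absent from localDir never appear
    }
}
```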



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Hadoop-Yarn-trunk-Java8 - Build # 1544 - Still Failing

2016-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1544/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 29553 lines...]
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop YARN . SUCCESS [  3.595 s]
[INFO] Apache Hadoop YARN API . SUCCESS [01:32 min]
[INFO] Apache Hadoop YARN Common .. SUCCESS [02:56 min]
[INFO] Apache Hadoop YARN Server .. SUCCESS [  0.071 s]
[INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 34.441 s]
[INFO] Apache Hadoop YARN NodeManager . SUCCESS [11:03 min]
[INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 18.948 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [02:42 min]
[INFO] Apache Hadoop YARN ResourceManager . FAILURE [31:23 min]
[INFO] Apache Hadoop YARN Server Tests  SKIPPED
[INFO] Apache Hadoop YARN Client .. SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage . SKIPPED
[INFO] Apache Hadoop YARN Applications  SUCCESS [  0.041 s]
[INFO] Apache Hadoop YARN DistributedShell  SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SKIPPED
[INFO] Apache Hadoop YARN Site  SUCCESS [  0.047 s]
[INFO] Apache Hadoop YARN Registry  SUCCESS [ 41.624 s]
[INFO] Apache Hadoop YARN Project . SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 51:17 min
[INFO] Finished at: 2016-06-07T00:29:03+00:00
[INFO] Final Memory: 114M/4683M
[INFO] 
[WARNING] The requested profile "docs" could not be activated because it does 
not exist.
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-server-resourcemanager: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Yarn-trunk-Java8/source/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-server-resourcemanager
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testServiceAclsRefreshWithLocalConfigurationProvider

Error Message:
Using localConfigurationProvider. Should not get any exception.

Stack Trace:
java.lang.AssertionError: Using localConfigurationProvider. Should not get any 
exception.
at org.junit.Assert.fail(Assert.java:88)

Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1544

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[zhz] HDFS-10458. getFileEncryptionInfo should return quickly for

--
[...truncated 29356 lines...]
Generating ... [javadoc target listings truncated]
Building index for all the packages and classes...
Building index for all classes...
52 warnings
[WARNING] Javadoc Warnings (same @throws warnings as listed for build #1545 above)

[jira] [Created] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-06 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-5204:


 Summary: Properly report status of killed/stopped queued containers
 Key: YARN-5204
 URL: https://issues.apache.org/jira/browse/YARN-5204
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Konstantinos Karanasos


When a queued container gets killed or stopped, we need to report its status in 
the {{getContainerStatusInternal}} method of the 
{{QueuingContainerManagerImpl}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API

2016-06-06 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5203:


 Summary: Return ResourceRequest JAXB object in ResourceManager 
Cluster Applications REST API
 Key: YARN-5203
 URL: https://issues.apache.org/jira/browse/YARN-5203
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Subru Krishnan


The ResourceManager Cluster Applications REST API returns {{ResourceRequest}} 
as a String rather than a JAXB object. This prevents downstream tools like the 
Federation Router (YARN-3659), which depend on the REST API, from unmarshalling 
the {{AppInfo}}. This JIRA proposes updating {{AppInfo}} to return a JAXB 
version of the {{ResourceRequest}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1543

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-12807 S3AFileSystem should read AWS credentials from environment

--
[...truncated 29359 lines...]
Generating ... [javadoc target listings truncated]
Building index for all the packages and classes...
Building index for all classes...
52 warnings
[WARNING] Javadoc Warnings (same @throws warnings as listed for build #1545 above)

Hadoop-Yarn-trunk-Java8 - Build # 1543 - Still Failing

2016-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1543/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 29556 lines...]
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop YARN . SUCCESS [  2.839 s]
[INFO] Apache Hadoop YARN API . SUCCESS [01:25 min]
[INFO] Apache Hadoop YARN Common .. SUCCESS [02:53 min]
[INFO] Apache Hadoop YARN Server .. SUCCESS [  0.045 s]
[INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 34.170 s]
[INFO] Apache Hadoop YARN NodeManager . SUCCESS [11:10 min]
[INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 18.777 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [03:17 min]
[INFO] Apache Hadoop YARN ResourceManager . FAILURE [35:12 min]
[INFO] Apache Hadoop YARN Server Tests  SKIPPED
[INFO] Apache Hadoop YARN Client .. SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage . SKIPPED
[INFO] Apache Hadoop YARN Applications  SUCCESS [  0.037 s]
[INFO] Apache Hadoop YARN DistributedShell  SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SKIPPED
[INFO] Apache Hadoop YARN Site  SUCCESS [  0.057 s]
[INFO] Apache Hadoop YARN Registry  SUCCESS [ 50.053 s]
[INFO] Apache Hadoop YARN Project . SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 55:47 min
[INFO] Finished at: 2016-06-06T23:33:09+00:00
[INFO] Final Memory: 117M/4374M
[INFO] 
[WARNING] The requested profile "docs" could not be activated because it does 
not exist.
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-server-resourcemanager: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Yarn-trunk-Java8/source/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-server-resourcemanager
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRefreshNodesResourceWithResourceReturnInRegistration

Error Message:
expected:<> but was:<>

Stack Trace:
org.junit.ComparisonFailure: expected:<> but 
was:<>
at 

Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1542

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[mingma] MAPREDUCE-5044. Have AM trigger jstack on task attempts that timeout

--
[...truncated 29361 lines...]
Generating ... [javadoc target listings truncated]
Building index for all the packages and classes...
Building index for all classes...
52 warnings
[WARNING] Javadoc Warnings (same @throws warnings as listed for build #1545 above)

Hadoop-Yarn-trunk-Java8 - Build # 1542 - Still Failing

2016-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1542/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 29558 lines...]
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop YARN . SUCCESS [  2.771 s]
[INFO] Apache Hadoop YARN API . SUCCESS [01:29 min]
[INFO] Apache Hadoop YARN Common .. SUCCESS [02:54 min]
[INFO] Apache Hadoop YARN Server .. SUCCESS [  0.052 s]
[INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 33.757 s]
[INFO] Apache Hadoop YARN NodeManager . SUCCESS [11:08 min]
[INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 18.538 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [03:12 min]
[INFO] Apache Hadoop YARN ResourceManager . FAILURE [35:19 min]
[INFO] Apache Hadoop YARN Server Tests  SKIPPED
[INFO] Apache Hadoop YARN Client .. SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage . SKIPPED
[INFO] Apache Hadoop YARN Applications  SUCCESS [  0.040 s]
[INFO] Apache Hadoop YARN DistributedShell  SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SKIPPED
[INFO] Apache Hadoop YARN Site  SUCCESS [  0.043 s]
[INFO] Apache Hadoop YARN Registry  SUCCESS [ 48.628 s]
[INFO] Apache Hadoop YARN Project . SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 55:50 min
[INFO] Finished at: 2016-06-06T22:33:05+00:00
[INFO] Final Memory: 123M/4635M
[INFO] 
[WARNING] The requested profile "docs" could not be activated because it does 
not exist.
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-server-resourcemanager: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Yarn-trunk-Java8/source/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-server-resourcemanager
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testQueueMetricsOnRMRestart

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
  

[jira] [Created] (YARN-5202) Dynamic Overcommit of Node Resources - POC

2016-06-06 Thread Nathan Roberts (JIRA)
Nathan Roberts created YARN-5202:


 Summary: Dynamic Overcommit of Node Resources - POC
 Key: YARN-5202
 URL: https://issues.apache.org/jira/browse/YARN-5202
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager, resourcemanager
Affects Versions: 3.0.0-alpha1
Reporter: Nathan Roberts
Assignee: Nathan Roberts


This Jira is to present a proof-of-concept implementation (collaboration 
between [~jlowe] and myself) of a dynamic over-commit implementation in YARN.  
The type of over-commit implemented in this jira is similar to but not as 
full-featured as what's being implemented via YARN-1011. YARN-1011 is where we 
see ourselves heading but we needed something quick and completely transparent 
so that we could test it at scale with our varying workloads (mainly MapReduce, 
Spark, and Tez). Doing so has shed some light on how much additional capacity 
we can achieve with over-commit approaches, and has fleshed out some of the 
problems these approaches will face.

Primary design goals:
- Avoid changing protocols, application frameworks, or core scheduler logic - 
simply adjust individual nodes' available resources based on current node 
utilization and then let the scheduler do what it normally does.
- Over-commit slowly, pull back aggressively - if things are looking good and 
there is demand, slowly add resources; if memory starts to look over-utilized, 
aggressively reduce the amount of over-commit.
- Make sure the nodes protect themselves - i.e. if memory utilization on a node 
gets too high, preempt something, preferably from a preemptable queue.

A patch against trunk will be attached shortly. Some notes on the patch:
- This feature was originally developed against something akin to 2.7. Since 
the patch is mainly to explain the approach, we didn't do any testing against 
trunk beyond a basic build and basic unit tests.
- The key pieces of functionality are in {{SchedulerNode}}, 
{{AbstractYarnScheduler}}, and {{NodeResourceMonitorImpl}}. The remainder of 
the patch is mainly UI, config, metrics, tests, and some minor code duplication 
(e.g. to optimize node resource changes, we treat an over-commit resource 
change differently than an updateNodeResource change - remove_node/add_node is 
just too expensive for the frequency of over-commit changes).
- We only over-commit memory at this point.
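The first two design goals can be sketched as a simple per-node adjustment policy (the constants and names here are illustrative, not from the actual patch):

```java
/** Illustrative sketch of "over-commit slowly, pull back aggressively". */
public class OvercommitPolicySketch {
    static final long STEP_UP_MB = 512;        // slow growth per monitoring interval
    static final double HIGH_WATERMARK = 0.95; // utilization that triggers pull-back
    static final double MAX_OVERCOMMIT = 1.5;  // cap on advertised/actual capacity

    /**
     * Next memory to advertise to the scheduler for one node, given actual
     * capacity, currently advertised (possibly over-committed) memory, and
     * measured utilization of the advertised amount.
     */
    public static long nextAdvertisedMb(long capacityMb, long advertisedMb, double utilization) {
        if (utilization > HIGH_WATERMARK) {
            return capacityMb; // pull back aggressively to the real capacity
        }
        long cap = (long) (capacityMb * MAX_OVERCOMMIT);
        return Math.min(advertisedMb + STEP_UP_MB, cap); // over-commit slowly
    }
}
```

The scheduler then allocates against the advertised size as it normally would, which is how the approach avoids touching core scheduler logic.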




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread larry mccay
inline


On Mon, Jun 6, 2016 at 4:36 PM, Vinod Kumar Vavilapalli 
wrote:

> Folks,
>
> It is truly disappointing how we are escalating situations that can be
> resolved through basic communication.
>
> Things that shouldn’t have happened
> - After a few objections were raised, commits should have simply stopped
> before restarting again but only after consensus
> - Reverts (or revert and move to a feature-branch) shouldn’t have been
> unequivocally done without dropping a note / informing everyone / building
> consensus. And no, not even a release-manager gets this free pass. Not on
> branch-2, not on trunk, not anywhere.
> - Freaking out on -1’s and reverts - we as a community need to be less
> stigmatic about -1s / reverts.
>
>
Agreed.


> Trunk releases:
> This is the other important bit about huge difference of
> expectations between the two sides w.r.t trunk and branching. Till now,
> we’ve never made releases out of trunk. So in-progress features that people
> deemed to not need a feature branch could go into trunk without much
> trouble. Given that we are now making releases off trunk, I can see (a) the
> RM saying "no, don’t put in-progress stuff" and (b) the contributors saying
> “no we don’t want the overhead of a branch”. I’ve raised related topics
> (but only focusing on incompatible changes) before -
> http://markmail.org/message/m6x73t6srlchywsn <
> http://markmail.org/message/m6x73t6srlchywsn> - but we never decided
> anything.
>
> We need to at the least force a reset of expectations w.r.t how trunk and
> small / medium / incompatible changes there are treated. We should hold off
> making a release off trunk before this gets fully discussed in the
> community and we all reach a consensus.
>

+1

In essence, moving commits to a feature branch so that we can release
from trunk creates a "trunk-branch". :)


> > * Without a user API, there's no way for people to use it, so not much
> > advantage to having it in a release
> >
> > Since the code is separate and probably won't break any existing code, I
> > won't -1 if you want to include this in a release without a user API, but
> > again, I question the utility of including code that can't be used.
>
> Clearly, there are two sides to this argument. One side claims the absence
> of user-facing public / stable APIs, and that for all purposes this is
> dead-code for everyone other than the few early adopters who want to
> experiment with it. The other argument is to not put this code before a
> user API. Again, I’d discuss with fellow community members before making
> what the other side perceives as unacceptable moves.
>
> From 2.8.0 perspective, it shouldn’t have landed there in the first place
> - I have been pushing for a release for a while with help only from a few
> members of the community. But if you say that it has no material impact on
> the user story, having a by-default switched-off feature that *doesn’t*
> destabilize the core release, I’d be willing to let it pass.
>
> +Vinod


Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Vinod Kumar Vavilapalli
Folks,

It is truly disappointing how we are escalating situations that can be resolved 
through basic communication.

Things that shouldn’t have happened
- After a few objections were raised, commits should have simply stopped before 
restarting again but only after consensus
- Reverts (or revert and move to a feature-branch) shouldn’t have been 
unequivocally done without dropping a note / informing everyone / building 
consensus. And no, not even a release-manager gets this free pass. Not on 
branch-2, not on trunk, not anywhere.
- Freaking out on -1’s and reverts - we as a community need to be less 
stigmatic about -1s / reverts.

Trunk releases:
This is the other important bit about huge difference of expectations 
between the two sides w.r.t trunk and branching. Till now, we’ve never made 
releases out of trunk. So in-progress features that people deemed to not need a 
feature branch could go into trunk without much trouble. Given that we are now 
making releases off trunk, I can see (a) the RM saying "no, don’t put 
in-progress stuff" and (b) the contributors saying “no we don’t want the 
overhead of a branch”. I’ve raised related topics (but only focusing on 
incompatible changes) before - http://markmail.org/message/m6x73t6srlchywsn 
 - but we never decided anything.

We need to at the least force a reset of expectations w.r.t how trunk and small 
/ medium / incompatible changes there are treated. We should hold off making a 
release off trunk before this gets fully discussed in the community and we all 
reach a consensus.

> * Without a user API, there's no way for people to use it, so not much
> advantage to having it in a release
> 
> Since the code is separate and probably won't break any existing code, I
> won't -1 if you want to include this in a release without a user API, but
> again, I question the utility of including code that can't be used.

Clearly, there are two sides to this argument. One side points to the absence of 
user-facing public / stable APIs, claiming that for all purposes this is dead 
code for everyone other than the few early adopters who want to experiment with it. 
The other argument is to not put this code in before a user API exists. Again, I’d 
discuss with fellow community members before making what the other side 
perceives as unacceptable moves.

From a 2.8.0 perspective, it shouldn’t have landed there in the first place - I 
have been pushing for a release for a while with help from only a few members 
of the community. But if you say that it has no material impact on the user 
story, having a by-default switched-off feature that *doesn’t* destabilize the 
core release, I’d be willing to let it pass.

+Vinod

Re: Why there are so many revert operations on trunk?

2016-06-06 Thread larry mccay
This seems like something that is probably going to happen again if we
continue to cut releases from trunk.
I know that this has been discussed at length in a separate thread, but I
think it would be good to recognize that it is the core of the issue here.

Either we:

* need to define what will happen on trunk in such circumstances and
clearly communicate any action on the dev@ list before taking it, or
* we need to not introduce this sort of thrashing on trunk by releasing
from it directly

My humble 2 cents...

--larry


On Mon, Jun 6, 2016 at 1:56 PM, Andrew Wang 
wrote:

> To clarify what happened here, I moved the commits to a feature branch, not
> just reverting the commits. The intent was to make it easy to merge back in
> later, and also to unblock the 2.8 and 3.0 releases we've been trying very
> hard to wrap up for weeks. This doesn't slow down development since you can
> keep committing to a branch, and I did the git work to make it easy to
> merge back in later. I'm also happy to review the merge if the concern is
> getting three +1s.
>
> In the comments on HDFS-9924, you can see comments from a month ago raising
> concerns about the API and also that this significant expansion of the HDFS
> API is being done on release branches. There is an explicit -1 on continued
> commits to trunk, and a request to move the commits to a feature branch.
> Similar concerns have been raised by multiple contributors on that JIRA.
> Yet, the commits remained in release branches, and new patches continued to
> be committed to release branches.
>
> There's no need to attribute malicious intent to slow down feature
> development; for some reason I keep seeing this accusation thrown around
> when there are many people chiming in on HDFS-9924 with concerns about the
> feature. Considering how it's expanding the HDFS API, this is also the kind
> of work that should go through a merge vote anyway to get more eyes on it.
>
> We've been converging on the API requirements, but until the user-facing
> API is ready, I don't see the advantage of having this code in a release
> branch. As noted by the contributors on this JIRA, it's new separate code,
> so there's little to no overhead to keeping a feature branch in sync.
>
> So, to sum it up, I moved these commits to a branch because:
>
> * The discussion about the user API is still ongoing, and there is
> currently no user-facing API
> * We are very late in the 2.8 and 3.0 release cycles, trying to do blocker
> burndown
> * This code is separate and thus easy to keep in sync on a branch and merge
> in later
> * Without a user API, there's no way for people to use it, so not much
> advantage to having it in a release
>
> Since the code is separate and probably won't break any existing code, I
> won't -1 if you want to include this in a release without a user API, but
> again, I question the utility of including code that can't be used.
>
> Thanks,
> Andrew
>


[jira] [Created] (YARN-5201) Apache Ranger Yarn policies are not used

2016-06-06 Thread Rajendranath Rengan (JIRA)
Rajendranath Rengan created YARN-5201:
-

 Summary: Apache Ranger Yarn policies are not used
 Key: YARN-5201
 URL: https://issues.apache.org/jira/browse/YARN-5201
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rajendranath Rengan


Hi,

I have set up Apache Ranger in a Hadoop cluster and defined YARN policies to allow 
certain users access only to certain queues. 
The idea is to have user 'x' submit Spark jobs only to queue 'x' and not to queue 
'y'. When submitting a Spark job, the queue is passed as one of the arguments.
But user 'x' is able to submit Spark jobs to queue 'y'.

The Ranger audit logs show that the policy used is the HDFS policy; 
the YARN policy is not used at all.

I have enabled the Ranger plugin for YARN and defined a YARN policy.

YARN ACLs are also enabled (set to true).

capacity scheduler setting as below:
yarn.scheduler.capacity.queue-mappings=u:user1:user1,u:user2:userr2
yarn.scheduler.capacity.root.acl_submit_applications=yarn,spark,hdfs
yarn.scheduler.capacity.root.customer1.acl_administer_jobs=user1
yarn.scheduler.capacity.root.customer1.acl_submit_applications=user1
yarn.scheduler.capacity.root.customer1.capacity=50
yarn.scheduler.capacity.root.customer1.maximum-capacity=100
yarn.scheduler.capacity.root.customer1.state=RUNNING
yarn.scheduler.capacity.root.customer1.user-limit-factor=1
yarn.scheduler.capacity.root.customer2.acl_administer_jobs=user2
yarn.scheduler.capacity.root.customer2.acl_submit_applications=user2
yarn.scheduler.capacity.root.customer2.capacity=50
yarn.scheduler.capacity.root.customer2.maximum-capacity=100
yarn.scheduler.capacity.root.customer2.state=RUNNING
yarn.scheduler.capacity.root.customer2.user-limit-factor=1
yarn.scheduler.capacity.root.queues=user1,user2
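One common pitfall worth checking against the settings above (an assumption about this setup, not a confirmed diagnosis of the report): CapacityScheduler queue ACLs are evaluated as the union of a queue's ACL and all of its ancestors' ACLs, so a permissive root ACL effectively grants submit access to every child queue. A sketch of a more restrictive root setting:

```
# Hypothetical fix sketch: a single space value means "nobody", so only
# the per-queue ACLs below grant access. Without this, any user allowed
# at root can submit to every child queue regardless of the child's ACL.
yarn.scheduler.capacity.root.acl_submit_applications= 
yarn.scheduler.capacity.root.customer1.acl_submit_applications=user1
yarn.scheduler.capacity.root.customer2.acl_submit_applications=user2
```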

Thanks 
Rengan




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Andrew Wang
To clarify what happened here, I moved the commits to a feature branch, not
just reverting the commits. The intent was to make it easy to merge back in
later, and also to unblock the 2.8 and 3.0 releases we've been trying very
hard to wrap up for weeks. This doesn't slow down development since you can
keep committing to a branch, and I did the git work to make it easy to
merge back in later. I'm also happy to review the merge if the concern is
getting three +1s.

In the comments on HDFS-9924, you can see comments from a month ago raising
concerns about the API and also that this significant expansion of the HDFS
API is being done on release branches. There is an explicit -1 on continued
commits to trunk, and a request to move the commits to a feature branch.
Similar concerns have been raised by multiple contributors on that JIRA.
Yet, the commits remained in release branches, and new patches continued to
be committed to release branches.

There's no need to attribute malicious intent to slow down feature
development; for some reason I keep seeing this accusation thrown around
when there are many people chiming in on HDFS-9924 with concerns about the
feature. Considering how it's expanding the HDFS API, this is also the kind
of work that should go through a merge vote anyway to get more eyes on it.

We've been converging on the API requirements, but until the user-facing
API is ready, I don't see the advantage of having this code in a release
branch. As noted by the contributors on this JIRA, it's new separate code,
so there's little to no overhead to keeping a feature branch in sync.

So, to sum it up, I moved these commits to a branch because:

* The discussion about the user API is still ongoing, and there is
currently no user-facing API
* We are very late in the 2.8 and 3.0 release cycles, trying to do blocker
burndown
* This code is separate and thus easy to keep in sync on a branch and merge
in later
* Without a user API, there's no way for people to use it, so not much
advantage to having it in a release

Since the code is separate and probably won't break any existing code, I
won't -1 if you want to include this in a release without a user API, but
again, I question the utility of including code that can't be used.

Thanks,
Andrew
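The "move to a branch, revert from trunk" git work described above can be sketched as follows, run in a throwaway repository with hypothetical branch and commit names (the real refs and commit ids differ):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git checkout -q -b trunk
git config user.email demo@example.com
git config user.name demo

# Simulate a base commit plus two in-progress feature commits on trunk
echo base > file.txt && git add file.txt && git commit -qm "base"
echo f1 >> file.txt && git commit -qam "HDFS-10224 (hypothetical feature commit 1)"
echo f2 >> file.txt && git commit -qam "HDFS-10346 (hypothetical feature commit 2)"

# 1) Preserve the feature commits on a branch so they can be merged back later
git branch HDFS-9924

# 2) Revert them from trunk (git revert walks the range newest-first),
#    leaving trunk releasable while development continues on the branch
git revert --no-edit HEAD~2..HEAD

# Trunk content is back to base; the feature branch still has everything
grep -qx base file.txt
git show HDFS-9924:file.txt | grep -q f2
echo "feature preserved on branch; trunk reverted"
```

Because the branch keeps the original commits, a later `git merge HDFS-9924` (after also reverting the reverts, or rebasing) restores the work with history intact.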


Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Jitendra Pandey
Colin raised the -1 demanding a design document. The document was added the 
very next day. There were constructive discussions on the design. There was a 
demand for listenable futures, or futures with callbacks, which it was agreed to 
accommodate. The rest of the work having been completed, there was no need to 
revert. Andrew’s objection was primarily against releasing in 2.8 without the 
aforementioned change in the API, which is reasonable and, IMO, it should be 
possible to make that improvement within the 2.8 timeline. 

On Jun 6, 2016, at 10:13 AM, Chris Douglas  wrote:

> Reading through HDFS-9924, a request for a design doc- and a -1 on
> committing to trunk- was raised in mid-May, but commits to trunk
> continued. Why is that? Shouldn't this have paused while the details
> were discussed? Branching is neutral to the pace of feature
> development, but consensus on the result is required. Working through
> possibilities in a branch- or in multiple branches- seems like a
> reasonable way to determine which approach has support and code to
> back it.
> 
> Reverting code is not "illegal"; the feature will be in/out of any
> release by appealing to bylaws. Our rules exist to facilitate
> consensus, not declare it a fait accompli.
> 
> An RM only exists by creating an RC. Someone can declare themselves
> Grand Marshal of trunk and stomp around in a fancy hat, but it
> doesn't affect anything. -C
> 
> 
> On Mon, Jun 6, 2016 at 9:36 AM, Junping Du  wrote:
>> Thanks Aaron for pointing it out. I didn't see any consensus on HDFS-9924, so 
>> I think we should bring it here to a broader audience for more discussion.
>> 
>> I saw several very bad practices here:
>> 
>> 1. A committer (no need to say who) reverted all commits from trunk without 
>> reaching consensus with all related contributors/committers.
>> 
>> 2. Someone's comments on feature branches are very misleading... If I remember 
>> correctly, feature development doesn't have to go through a feature branch, 
>> which is an optional process. As for the feature-branch and branch-committer 
>> process - I believe the intention is to accelerate feature development, not to 
>> slow it down.
>> 
>> 3. Someone (again, no need to say who) seems to claim to be the RM for trunk. 
>> I don't think we need any RM for trunk. Even for the RM of 3.0.0-alpha, I think 
>> we need someone who demonstrates that he/she is responsible, works hard and 
>> carefully, and communicates openly with the whole community. Only through this 
>> is the success of Hadoop in the 3.0 era guaranteed.
>> 
>> 
>> Thanks,
>> 
>> 
>> Junping
>> 
>> 
>> 
>> From: Aaron T. Myers 
>> Sent: Monday, June 06, 2016 4:46 PM
>> To: Junping Du
>> Cc: Andrew Wang; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
>> mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
>> Subject: Re: Why there are so many revert operations on trunk?
>> 
>> Junping,
>> 
>> All of this is being discussed on HDFS-9924. Suggest you follow the 
>> conversation there.
>> 
>> --
>> Aaron T. Myers
>> Software Engineer, Cloudera
>> 
>> On Mon, Jun 6, 2016 at 7:20 AM, Junping Du 
>> > wrote:
>> Hi Andrew,
>> 
>> I just noticed you reverted 8 commits on trunk last Friday:
>> 
>> HADOOP-13226
>> 
>> HDFS-10430
>> 
>> HDFS-10431
>> 
>> HDFS-10390
>> 
>> HADOOP-13168
>> 
>> HDFS-10390
>> 
>> HADOOP-13168
>> 
>> HDFS-10346
>> 
>> HADOOP-12957
>> 
>> HDFS-10224
>> 
>>   And I didn't see any comments from you on JIRA or in email discussion before 
>> you did this. I don't think we are legally allowed to do this even as 
>> committers/PMC members. Can you explain your intention in doing this?
>> 
>>   BTW, thanks to Nicolas for reverting all these "illegal" revert operations.
>> 
>> 
>> 
>> Thanks,
>> 
>> 
>> Junping
>> 
> 
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 
> 


-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Chris Douglas
Reading through HDFS-9924, a request for a design doc- and a -1 on
committing to trunk- was raised in mid-May, but commits to trunk
continued. Why is that? Shouldn't this have paused while the details
were discussed? Branching is neutral to the pace of feature
development, but consensus on the result is required. Working through
possibilities in a branch- or in multiple branches- seems like a
reasonable way to determine which approach has support and code to
back it.

Reverting code is not "illegal"; the feature will be in/out of any
release by appealing to bylaws. Our rules exist to facilitate
consensus, not declare it a fait accompli.

An RM only exists by creating an RC. Someone can declare themselves
Grand Marshal of trunk and stomp around in a fancy hat, but it
doesn't affect anything. -C


On Mon, Jun 6, 2016 at 9:36 AM, Junping Du  wrote:
> Thanks Aaron for pointing it out. I didn't see any consensus on HDFS-9924, so 
> I think we should bring it here to a broader audience for more discussion.
>
> I saw several very bad practices here:
>
> 1. A committer (no need to say who) reverted all commits from trunk without 
> reaching consensus with all related contributors/committers.
>
> 2. Someone's comments on feature branches are very misleading... If I remember 
> correctly, feature development doesn't have to go through a feature branch, 
> which is an optional process. As for the feature-branch and branch-committer 
> process - I believe the intention is to accelerate feature development, not to 
> slow it down.
>
> 3. Someone (again, no need to say who) seems to claim to be the RM for trunk. 
> I don't think we need any RM for trunk. Even for the RM of 3.0.0-alpha, I think 
> we need someone who demonstrates that he/she is responsible, works hard and 
> carefully, and communicates openly with the whole community. Only through this 
> is the success of Hadoop in the 3.0 era guaranteed.
>
>
> Thanks,
>
>
> Junping
>
>
> 
> From: Aaron T. Myers 
> Sent: Monday, June 06, 2016 4:46 PM
> To: Junping Du
> Cc: Andrew Wang; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
> mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
> Subject: Re: Why there are so many revert operations on trunk?
>
> Junping,
>
> All of this is being discussed on HDFS-9924. Suggest you follow the 
> conversation there.
>
> --
> Aaron T. Myers
> Software Engineer, Cloudera
>
> On Mon, Jun 6, 2016 at 7:20 AM, Junping Du 
> > wrote:
> Hi Andrew,
>
>  I just noticed you reverted 8 commits on trunk last Friday:
>
> HADOOP-13226
>
> HDFS-10430
>
> HDFS-10431
>
> HDFS-10390
>
> HADOOP-13168
>
> HDFS-10390
>
> HADOOP-13168
>
> HDFS-10346
>
> HADOOP-12957
>
> HDFS-10224
>
>And I didn't see any comments from you on JIRA or in email discussion before 
> you did this. I don't think we are legally allowed to do this even as 
> committers/PMC members. Can you explain your intention in doing this?
>
>BTW, thanks to Nicolas for reverting all these "illegal" revert operations.
>
>
>
> Thanks,
>
>
> Junping
>

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Junping Du
Thanks Aaron for pointing it out. I didn't see any consensus on HDFS-9924, so I 
think we should bring it here to a broader audience for more discussion.

I saw several very bad practices here:

1. A committer (no need to say who) reverted all commits from trunk without 
reaching consensus with all related contributors/committers.

2. Someone's comments on feature branches are very misleading... If I remember 
correctly, feature development doesn't have to go through a feature branch, 
which is an optional process. As for the feature-branch and branch-committer 
process - I believe the intention is to accelerate feature development, not to 
slow it down.

3. Someone (again, no need to say who) seems to claim to be the RM for trunk. 
I don't think we need any RM for trunk. Even for the RM of 3.0.0-alpha, I think 
we need someone who demonstrates that he/she is responsible, works hard and 
carefully, and communicates openly with the whole community. Only through this 
is the success of Hadoop in the 3.0 era guaranteed.


Thanks,


Junping



From: Aaron T. Myers 
Sent: Monday, June 06, 2016 4:46 PM
To: Junping Du
Cc: Andrew Wang; common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-dev@hadoop.apache.org
Subject: Re: Why there are so many revert operations on trunk?

Junping,

All of this is being discussed on HDFS-9924. Suggest you follow the 
conversation there.

--
Aaron T. Myers
Software Engineer, Cloudera

On Mon, Jun 6, 2016 at 7:20 AM, Junping Du 
> wrote:
Hi Andrew,

 I just noticed you reverted 8 commits on trunk last Friday:

HADOOP-13226

HDFS-10430

HDFS-10431

HDFS-10390

HADOOP-13168

HDFS-10390

HADOOP-13168

HDFS-10346

HADOOP-12957

HDFS-10224

   And I didn't see any comments from you on JIRA or in email discussion before 
you did this. I don't think we are legally allowed to do this even as 
committers/PMC members. Can you explain your intention in doing this?

   BTW, thanks to Nicolas for reverting all these "illegal" revert operations.



Thanks,


Junping



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-06-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/

No changes




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   org.apache.hadoop.minikdc.MiniKdc.stop() calls Thread.sleep() with a 
lock held At MiniKdc.java:lock held At MiniKdc.java:[line 345] 
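The first FindBugs item above flags the "sleep with lock held" (SWL) pattern. A minimal illustrative sketch of the anti-pattern and the usual fix, using a hypothetical class rather than MiniKdc's actual code:

```java
public class SleepWithLockDemo {
    private final Object lock = new Object();

    // Anti-pattern FindBugs reports as SWL_SLEEP_WITH_LOCK_HELD: sleeping
    // while holding the monitor blocks every other thread that needs it.
    public void stopWithSleep() {
        synchronized (lock) {
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Common fix: Object.wait(timeout) releases the monitor while waiting,
    // so other threads can make progress and can wake this one early via
    // notify()/notifyAll().
    public void stopWithWait() {
        synchronized (lock) {
            try {
                lock.wait(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        SleepWithLockDemo demo = new SleepWithLockDemo();
        demo.stopWithSleep();
        demo.stopWithWait();
        System.out.println("done");
    }
}
```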

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   Redundant nullcheck of execTypeRequest, which is known to be non-null in 
org.apache.hadoop.yarn.api.records.ResourceRequest.equals(Object) Redundant 
null check at ResourceRequest.java:is known to be non-null in 
org.apache.hadoop.yarn.api.records.ResourceRequest.equals(Object) Redundant 
null check at ResourceRequest.java:[line 361] 

Failed junit tests :

   hadoop.net.TestDNS 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.yarn.server.resourcemanager.TestClientRMTokens 
   hadoop.yarn.server.resourcemanager.TestAMAuthorization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.client.api.impl.TestDistributedScheduling 
   hadoop.yarn.client.TestGetGroups 
   hadoop.mapred.TestMiniMRChildTask 
   hadoop.mapred.TestMRCJCFileOutputCommitter 

Timed out junit tests :

   org.apache.hadoop.http.TestHttpServerLifecycle 
   org.apache.hadoop.yarn.client.cli.TestYarnCLI 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClient 
   org.apache.hadoop.yarn.client.api.impl.TestNMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/branch-findbugs-hadoop-common-project_hadoop-minikdc-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-javadoc-javadoc-root.txt
  [2.3M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [116K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [908K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [92K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: 

Why there are so many revert operations on trunk?

2016-06-06 Thread Junping Du
Hi Andrew,

 I just noticed you reverted 8 commits on trunk last Friday:

HADOOP-13226

HDFS-10430

HDFS-10431

HDFS-10390

HADOOP-13168

HDFS-10390

HADOOP-13168

HDFS-10346

HADOOP-12957

HDFS-10224

   And I didn't see any comments from you on JIRA or in email discussion before 
you did this. I don't think we are legally allowed to do this even as 
committers/PMC members. Can you explain your intention in doing this?

   BTW, thanks to Nicolas for reverting all these "illegal" revert operations.



Thanks,


Junping


Hadoop-Yarn-trunk-Java8 - Build # 1541 - Still Failing

2016-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1541/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 34397 lines...]
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop YARN . SUCCESS [  3.435 s]
[INFO] Apache Hadoop YARN API . SUCCESS [01:34 min]
[INFO] Apache Hadoop YARN Common .. SUCCESS [03:01 min]
[INFO] Apache Hadoop YARN Server .. SUCCESS [  0.061 s]
[INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 37.550 s]
[INFO] Apache Hadoop YARN NodeManager . SUCCESS [11:15 min]
[INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 18.807 s]
[INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [03:15 min]
[INFO] Apache Hadoop YARN ResourceManager . SUCCESS [36:42 min]
[INFO] Apache Hadoop YARN Server Tests  SUCCESS [02:08 min]
[INFO] Apache Hadoop YARN Client .. FAILURE [07:14 min]
[INFO] Apache Hadoop YARN SharedCacheManager .. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage . SKIPPED
[INFO] Apache Hadoop YARN Applications  SUCCESS [  0.035 s]
[INFO] Apache Hadoop YARN DistributedShell  SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SKIPPED
[INFO] Apache Hadoop YARN Site  SUCCESS [  0.038 s]
[INFO] Apache Hadoop YARN Registry  SUCCESS [ 47.295 s]
[INFO] Apache Hadoop YARN Project . SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:07 h
[INFO] Finished at: 2016-06-06T10:44:33+00:00
[INFO] Final Memory: 145M/4438M
[INFO] 
[WARNING] The requested profile "docs" could not be activated because it does 
not exist.
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-client: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Yarn-trunk-Java8/source/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-client
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Sending e-mails to: yarn-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  
org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByInvalidTimeInterval

Error Message:
Exhausted attempts in checking if node capacity was added to the plan

Stack Trace:
java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
added to the plan
at org.junit.Assert.fail(Assert.java:88)
at 

Build failed in Jenkins: Hadoop-Yarn-trunk-Java8 #1541

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[szetszwo] Revert "Revert "HDFS-10224. Implement asynchronous rename for

[szetszwo] Revert "Revert "HADOOP-12957. Limit the number of outstanding async

[szetszwo] Revert "Revert "HDFS-10346. Implement asynchronous

[szetszwo] Revert "Revert "HADOOP-13168. Support Future.get with timeout in ipc

[szetszwo] Revert "Revert "HDFS-10390. Implement asynchronous 
setAcl/getAclStatus

[szetszwo] Revert "Revert "HDFS-10431 Refactor and speedup TestAsyncDFSRename. 

[szetszwo] Revert "Revert "HDFS-10430. Reuse FileSystem#access in TestAsyncDFS.

[szetszwo] Revert "Revert "HADOOP-13226 Support async call retry and failover.""

--
[...truncated 34200 lines...]
Generating javadoc pages (output paths elided in the archive)...
Building index for all the packages and classes...
Building index for all classes...
52 warnings
[WARNING] Javadoc Warnings
[WARNING] 
:73:
 warning: no description for @throws
[WARNING] * @throws Exception
[WARNING] ^
[WARNING]