Build failed in Jenkins: Hadoop-Common-trunk #1331

2014-12-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1331/

--
[...truncated 73033 lines...]
Setting project property: hadoop.assemblies.version - 3.0.0-SNAPSHOT
Setting project property: test.exclude - _
Setting project property: distMgmtSnapshotsId - apache.snapshots.https
Setting project property: project.build.sourceEncoding - UTF-8
Setting project property: tomcat.version - 6.0.41
Setting project property: java.security.egd - file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl - 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl - 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version - 1.7.4
Setting project property: test.build.data - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/test-dir
Setting project property: commons-daemon.version - 1.0.13
Setting project property: hadoop.common.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/../../hadoop-common-project/hadoop-common/target
Setting project property: testsThreadCount - 4
Setting project property: maven.test.redirectTestOutputToFile - true
Setting project property: jdiff.version - 1.0.9
Setting project property: project.reporting.outputEncoding - UTF-8
Setting project property: distMgmtStagingName - Apache Release Distribution 
Repository
Setting project property: build.platform - Linux-amd64-64
Setting project property: protobuf.version - 2.5.0
Setting project property: failIfNoTests - false
Setting project property: jackson.version - 1.9.13
Setting project property: protoc.path - ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version - 1.9
Setting project property: distMgmtStagingId - apache.staging.https
Setting project property: distMgmtSnapshotsName - Apache Development Snapshot 
Repository
Setting project property: jackson2.version - 2.2.3
Setting project property: ant.file - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/pom.xml
[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId - org.apache.hadoop
Setting project property: project.artifactId - hadoop-common-project
Setting project property: project.name - Apache Hadoop Common Project
Setting project property: project.description - Apache Hadoop Common Project
Setting project property: project.version - 3.0.0-SNAPSHOT
Setting project property: project.packaging - pom
Setting project property: project.build.directory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target
Setting project property: project.build.outputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/classes
Setting project property: project.build.testOutputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/test-classes
Setting project property: project.build.sourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/src/main/java
Setting project property: project.build.testSourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/src/test/java
Setting project property: localRepository -id: local
  url: file:///home/jenkins/.m2/repository/
   layout: default
snapshots: [enabled = true, update = always]
 releases: [enabled = true, update = always]
Setting project property: settings.localRepository - 
/home/jenkins/.m2/repository
Setting project property: maven.project.dependencies.versions - 
[INFO] Executing tasks
Build sequence for target(s) `main' is [main]
Complete build sequence is [main, ]

main:
[mkdir] Created dir: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/test-dir
[mkdir] Skipping 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/test-dir
 because it already exists.
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-source-plugin:2.1.2:jar-no-fork from plugin 
realm ClassRealm[plugin>org.apache.maven.plugins:maven-source-plugin:2.1.2, 
parent: sun.misc.Launcher$AppClassLoader@1a2b2cf8]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-source-plugin:2.1.2:jar-no-fork' with basic 
configurator --
[DEBUG]   (f) attach = true
[DEBUG]   (f) defaultManifestFile = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/classes/META-INF/MANIFEST.MF
[DEBUG]   (f) excludeResources = false
[DEBUG]   (f) finalName = hadoop-common-project-3.0.0-SNAPSHOT
[DEBUG]   (f) forceCreation = false
[DEBUG]   (f) includePom = false
[DEBUG]   (f) outputDirectory = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target
[DEBUG]   (f) project = MavenProject: 

Re: Switching to Java 7

2014-12-08 Thread Steve Loughran
yes, bumped them up to

export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
export ANT_OPTS="$MAVEN_OPTS"

also extended test run times.



On 8 December 2014 at 00:58, Ted Yu yuzhih...@gmail.com wrote:

 Looking at the test failures of
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/ which uses jdk 1.7:

 e.g.

 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameFileAndDeleteSnapshot/

 java.lang.OutOfMemoryError: Java heap space
 at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:120)
 at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:68)
 at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
 at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
 at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
 at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
 at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)


 Should more heap be given to the tests?


 Cheers


 On Sun, Dec 7, 2014 at 2:09 PM, Steve Loughran ste...@hortonworks.com
 wrote:

  The latest migration status:
 
    if the Jenkins builds are happy then the patch will go in - I'll do that
   Monday morning, 10:00 UTC
 
  https://builds.apache.org/view/H-L/view/Hadoop/
 
  Getting Jenkins to work has been surprisingly difficult... it turns out
  that those builds which we thought were java7 or java8 weren't, as
 setting
export JAVA_HOME=${TOOLS_HOME}/java/latest
 
  meant that they picked up a java 6 machine
 
  Now the trunk precommit/postcommit and scheduled branches should have
  export JAVA_HOME=${TOOLS_HOME}/java/jdk1.7.0_55
 
  the Java 8 builds have more changes
 
  export JAVA_HOME=${TOOLS_HOME}/java/jdk1.8.0
  export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
  and  -Dmaven.javadoc.skip=true  on the mvn builds
 
  without these, javadoc fails and test runs go OOM.
 
  We need to have something resembling the nightly build env setup again: a
  git/svn-stored file with something for Java 8 alongside the normal env
 vars.
 


Build failed in Jenkins: Hadoop-Common-trunk #1332

2014-12-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1332/changes

Changes:

[harsh] MAPREDUCE-6177. Minor typo in the EncryptedShuffle document about 
ssl-client.xml. Contributed by Yangping Wu. (harsh)

--
[...truncated 73366 lines...]
Setting project property: hadoop.assemblies.version - 3.0.0-SNAPSHOT
Setting project property: test.exclude - _
Setting project property: distMgmtSnapshotsId - apache.snapshots.https
Setting project property: project.build.sourceEncoding - UTF-8
Setting project property: tomcat.version - 6.0.41
Setting project property: java.security.egd - file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl - 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl - 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version - 1.7.4
Setting project property: test.build.data - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/test-dir
Setting project property: commons-daemon.version - 1.0.13
Setting project property: hadoop.common.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/../../hadoop-common-project/hadoop-common/target
Setting project property: testsThreadCount - 4
Setting project property: maven.test.redirectTestOutputToFile - true
Setting project property: jdiff.version - 1.0.9
Setting project property: project.reporting.outputEncoding - UTF-8
Setting project property: distMgmtStagingName - Apache Release Distribution 
Repository
Setting project property: build.platform - Linux-amd64-64
Setting project property: protobuf.version - 2.5.0
Setting project property: failIfNoTests - false
Setting project property: jackson.version - 1.9.13
Setting project property: protoc.path - ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version - 1.9
Setting project property: distMgmtStagingId - apache.staging.https
Setting project property: distMgmtSnapshotsName - Apache Development Snapshot 
Repository
Setting project property: jackson2.version - 2.2.3
Setting project property: ant.file - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/pom.xml
[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId - org.apache.hadoop
Setting project property: project.artifactId - hadoop-common-project
Setting project property: project.name - Apache Hadoop Common Project
Setting project property: project.description - Apache Hadoop Common Project
Setting project property: project.version - 3.0.0-SNAPSHOT
Setting project property: project.packaging - pom
Setting project property: project.build.directory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target
Setting project property: project.build.outputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/classes
Setting project property: project.build.testOutputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/test-classes
Setting project property: project.build.sourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/src/main/java
Setting project property: project.build.testSourceDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/src/test/java
Setting project property: localRepository -id: local
  url: file:///home/jenkins/.m2/repository/
   layout: default
snapshots: [enabled = true, update = always]
 releases: [enabled = true, update = always]
Setting project property: settings.localRepository - 
/home/jenkins/.m2/repository
Setting project property: maven.project.dependencies.versions - 
[INFO] Executing tasks
Build sequence for target(s) `main' is [main]
Complete build sequence is [main, ]

main:
[mkdir] Created dir: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/test-dir
[mkdir] Skipping 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/test-dir
 because it already exists.
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-source-plugin:2.1.2:jar-no-fork from plugin 
realm ClassRealm[plugin>org.apache.maven.plugins:maven-source-plugin:2.1.2, 
parent: sun.misc.Launcher$AppClassLoader@231a6631]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-source-plugin:2.1.2:jar-no-fork' with basic 
configurator --
[DEBUG]   (f) attach = true
[DEBUG]   (f) defaultManifestFile = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/target/classes/META-INF/MANIFEST.MF
[DEBUG]   (f) excludeResources = false
[DEBUG]   (f) finalName = hadoop-common-project-3.0.0-SNAPSHOT
[DEBUG]   (f) forceCreation = false
[DEBUG]   (f) includePom = false
[DEBUG]   (f) 

Solaris Port

2014-12-08 Thread malcolm

I have ported the Hadoop native libraries to Solaris 11 (both SPARC and Intel).
Oracle have agreed to release my changes to the community so that 
Solaris platforms can benefit.
Reading the HowToContribute and GitandHadoop documents, I am not 100% 
clear on how to get my changes into the main tree. I am also a Git(hub) 
newbie, and was using svn previously.


Please let me know if I am going down the correct path:

1. I forked Hadoop on Github and downloaded a clone to my development 
machine.


2. The changes I made were to 2.2.0. Can I still add changes to this 
branch and hopefully get them accepted, or must I migrate my changes to 
2.6? (On the main Hadoop download page, 2.2 is still listed as the GA 
version.)


3. I understand that I should create a new branch for my changes, and 
then generate pull requests after uploading them to Github.


4. I also registered at JIRA, on the understanding that I need to 
generate a JIRA number for my changes and to name my branch accordingly?


Does all this make sense?

Thanks,
Malcolm




[jira] [Created] (HADOOP-11360) GraphiteSink reports data with wrong timestamp

2014-12-08 Thread Kamil Gorlo (JIRA)
Kamil Gorlo created HADOOP-11360:


 Summary: GraphiteSink reports data with wrong timestamp
 Key: HADOOP-11360
 URL: https://issues.apache.org/jira/browse/HADOOP-11360
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Kamil Gorlo


I've tried to use GraphiteSink with the metrics2 system, but it looks like the 
timestamp sent to Graphite is refreshed only rarely (approximately every 2 
minutes) no matter how small the period is set.

Here is my configuration:

*.sink.graphite.server_host=graphite-relay.host
*.sink.graphite.server_port=2013
*.sink.graphite.metrics_prefix=graphite.warehouse-data-1
*.period=10
nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink

And here is the dumped network traffic to graphite-relay.host (only selected 
lines; each line appears every 10 seconds, as the period suggests):

graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 4 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 1 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 1 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041728
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041728


As you can see, the AllocatedContainers value is refreshed every 10 seconds, but 
the timestamp is not.
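
For context, the timestamp a sink emits comes from the MetricsRecord handed to 
putMetrics(), not from anything the sink computes, so a stale value here points 
at the layer that builds the records. A minimal sketch of that flow against the 
metrics2 sink API (LoggingSink is a made-up name for illustration, not the 
shipped GraphiteSink):

    import org.apache.commons.configuration.SubsetConfiguration;
    import org.apache.hadoop.metrics2.AbstractMetric;
    import org.apache.hadoop.metrics2.MetricsRecord;
    import org.apache.hadoop.metrics2.MetricsSink;

    public class LoggingSink implements MetricsSink {
      @Override
      public void init(SubsetConfiguration conf) {
      }

      @Override
      public void putMetrics(MetricsRecord record) {
        // Graphite's plaintext protocol takes seconds; the record supplies millis.
        // The emitted timestamp can only be as fresh as record.timestamp():
        // if that value is stale, every sink sees the same stale value.
        long seconds = record.timestamp() / 1000;
        for (AbstractMetric metric : record.metrics()) {
          System.out.println(record.name() + "." + metric.name()
              + " " + metric.value() + " " + seconds);
        }
      }

      @Override
      public void flush() {
      }
    }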



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #1333

2014-12-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1333/

--
[...truncated 8266 lines...]
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-kms 
---
[INFO] Compiling 14 source files to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/classes
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-web-xmls) @ hadoop-kms ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/test-classes/kms-webapp
 [copy] Copying 1 file to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/test-classes/kms-webapp
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-kms ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-kms ---
[INFO] Compiling 6 source files to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hadoop-kms ---
[WARNING] The parameter forkMode is deprecated since version 2.14. Use 
forkCount and reuseForks instead.
[INFO] Surefire report directory: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/surefire-reports

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.crypto.key.kms.server.TestKMS
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.712 sec - 
in org.apache.hadoop.crypto.key.kms.server.TestKMS
Running org.apache.hadoop.crypto.key.kms.server.TestKMSAudit
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.559 sec - in 
org.apache.hadoop.crypto.key.kms.server.TestKMSAudit
Running org.apache.hadoop.crypto.key.kms.server.TestKeyAuthorizationKeyProvider
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.265 sec - in 
org.apache.hadoop.crypto.key.kms.server.TestKeyAuthorizationKeyProvider
Running org.apache.hadoop.crypto.key.kms.server.TestKMSWithZK
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.755 sec - in 
org.apache.hadoop.crypto.key.kms.server.TestKMSWithZK
Running org.apache.hadoop.crypto.key.kms.server.TestKMSACLs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.705 sec - in 
org.apache.hadoop.crypto.key.kms.server.TestKMSACLs

Results :

Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-war-plugin:2.1:war (default-war) @ hadoop-kms ---
[INFO] Packaging webapp
[INFO] Assembling webapp [hadoop-kms] in 
[https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/kms]
[INFO] Processing war project
[INFO] Copying webapp resources 
[https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/src/main/webapp]
[INFO] Building jar: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/kms/WEB-INF/lib/hadoop-kms-3.0.0-SNAPSHOT.jar
[INFO] Webapp assembled in [129 msecs]
[INFO] Building war: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/kms.war
[INFO] WEB-INF/web.xml already added, skipping
[INFO] 
[INFO] --- maven-site-plugin:3.3:site (docs) @ hadoop-kms ---
[INFO] configuring report plugin 
org.apache.maven.plugins:maven-dependency-plugin:2.4
[INFO] 
[INFO] >>> maven-dependency-plugin:2.4:analyze-report (report:analyze-report) @ 
hadoop-kms >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-kms ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-kms ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-kms 
---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-web-xmls) @ hadoop-kms ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-kms ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-kms ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] <<< maven-dependency-plugin:2.4:analyze-report (report:analyze-report) @ 
hadoop-kms <<<
[WARNING] No project URL defined - decoration links will not be relativized!
[INFO] Rendering 

[jira] [Created] (HADOOP-11362) Test org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf timing out on java 8

2014-12-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11362:
---

 Summary: Test 
org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf 
timing out on java 8
 Key: HADOOP-11362
 URL: https://issues.apache.org/jira/browse/HADOOP-11362
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: ASF Jenkins, Java 8
Reporter: Steve Loughran


The test 
{{org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf}}
 is timing out on jenkins + Java 8.

This is probably the exec() operation. It may be transient, or it may be a Java 
8 + shell problem. 

Do we actually need this test in its present form? If a test for file handle 
leakage is really needed, attempting to create 64K instances of the 
OsSecureRandom object should do it without having to resort to printing and 
manual debugging of logs.
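
A sketch of that suggested replacement (the test name is hypothetical, and it 
assumes OsSecureRandom's Configurable/Closeable surface):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.crypto.random.OsSecureRandom;
    import org.junit.Test;

    public class TestOsSecureRandomLeak {
      // If close() leaked the random-device file descriptor, the process fd
      // limit would be exhausted long before 64K iterations, so the test
      // would fail fast with no log inspection needed.
      @Test(timeout = 120000)
      public void testNoFileDescriptorLeak() throws Exception {
        Configuration conf = new Configuration();
        for (int i = 0; i < 65536; i++) {
          OsSecureRandom random = new OsSecureRandom();
          random.setConf(conf);
          random.nextBytes(new byte[16]); // forces the device file to be opened
          random.close();
        }
      }
    }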



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Switching to Java 7

2014-12-08 Thread Ted Yu
Looks like there was still an OutOfMemoryError:

https://builds.apache.org/job/Hadoop-Hdfs-trunk/1964/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameDirAcrossSnapshottableDirs/

FYI

On Mon, Dec 8, 2014 at 2:42 AM, Steve Loughran ste...@hortonworks.com
wrote:

 yes, bumped them up to

 export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
 export ANT_OPTS="$MAVEN_OPTS"

 also extended test run times.



 On 8 December 2014 at 00:58, Ted Yu yuzhih...@gmail.com wrote:

  Looking at the test failures of
  https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/ which uses jdk
 1.7:
 
  e.g.
 
 
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameFileAndDeleteSnapshot/
 
  java.lang.OutOfMemoryError: Java heap space
  at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:120)
  at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:68)
  at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
  at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
  at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
  at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
  at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)
 
 
  Should more heap be given to the tests?
 
 
  Cheers
 
 
  On Sun, Dec 7, 2014 at 2:09 PM, Steve Loughran ste...@hortonworks.com
  wrote:
 
   The latest migration status:
  
  if the Jenkins builds are happy then the patch will go in - I'll do that
    Monday morning, 10:00 UTC
  
   https://builds.apache.org/view/H-L/view/Hadoop/
  
   Getting Jenkins to work has been surprisingly difficult... it turns
  out
   that those builds which we thought were java7 or java8 weren't, as
  setting
 export JAVA_HOME=${TOOLS_HOME}/java/latest
  
   meant that they picked up a java 6 machine
  
   Now the trunk precommit/postcommit and scheduled branches should have
   export JAVA_HOME=${TOOLS_HOME}/java/jdk1.7.0_55
  
   the Java 8 builds have more changes
  
   export JAVA_HOME=${TOOLS_HOME}/java/jdk1.8.0
    export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
   and  -Dmaven.javadoc.skip=true  on the mvn builds
  
    without these, javadoc fails and test runs go OOM.
  
    We need to have something resembling the nightly build env setup again: a
    git/svn-stored file with something for Java 8 alongside the normal env
   vars.
  



[jira] [Resolved] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10530.
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Incompatible change

 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.6.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, 
 HADOOP-10530-003.patch, HADOOP-10530-004.patch, HADOOP-10530-005.patch, 
 HADOOP-10530-debug.000.patch, Screen Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, Hadoop 3 is envisaged to be Java 7+ *only* 
 - this JIRA covers switching the build to this:
 # maven enforcer plugin to set the Java version to {{[1.7,)}}
 # compiler to set the language level to Java 1.7
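
 For illustration, that enforcer rule looks roughly like the following in a 
 POM (a sketch of the approach, not the committed patch):

     <plugin>
       <groupId>org.apache.maven.plugins</groupId>
       <artifactId>maven-enforcer-plugin</artifactId>
       <configuration>
         <rules>
           <requireJavaVersion>
             <!-- any Java from 1.7 upwards; the build fails fast on Java 6 -->
             <version>[1.7,)</version>
           </requireJavaVersion>
         </rules>
       </configuration>
     </plugin>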



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Switching to Java 7

2014-12-08 Thread Steve Loughran
On 8 December 2014 at 14:58, Ted Yu yuzhih...@gmail.com wrote:

 Looks like there was still OutOfMemoryError :


 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1964/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameDirAcrossSnapshottableDirs/


Well, I'm going to ignore that for now as it's a Java 8 problem, surfacing
this weekend once the builds were actually switched to Java 8. Memory size
tuning can continue.

I have now committed the Java 7+ only patch to branch-2 and up: new code
does not have to worry about Java 6 compatibility unless it is planned for
backporting to Hadoop 2.6 or earlier. Having written some Java 7 code, the <>
constructor for typed classes is a convenience; the multi-catch clauses are
more useful, as they eliminate duplicate code in exception handling.
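
For anyone who hasn't written against it yet, a two-feature fragment (the
names here are hypothetical):

    // <> : the compiler infers the type arguments from the declaration
    Map<String, List<String>> counters = new HashMap<>();

    try {
      runJob(); // hypothetical method that can fail in two ways
    } catch (IOException | InterruptedException e) {
      // one multi-catch handler instead of two duplicated blocks
      throw new RuntimeException(e);
    }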

Getting this patch in has revealed that the Jenkins builds of Hadoop are
(a) a bit of a mess and (b) prone to race conditions related to the m2
repository if >1 project builds simultaneously. The way the nightly builds
are staggered means this doesn't usually surface, but it may show up during
precommit/postcommit builds.

The switch to Java 7 as the underlying JDK appears to be triggering
failures; these are things that the projects themselves are going to have
to look at.


This, then, is where we are with builds right now. This is not a consequence
of the changes to the POM; this list predates that patch. This is Jenkins
running Hadoop builds and tests with Java 7u55


*Working: *

https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-branch2/
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Yarn-trunk/

*failing tests*

https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Common-2-Build/
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk/
https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-YARN-Build/
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Mapreduce-trunk/

*failing tests on Java 8 (may include OOM)*

https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-common-trunk-Java8/
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Mapreduce-trunk-Java8/
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Yarn-trunk-Java8/


*failing with maven internal dependency problems*

https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-trunk-Commit/


*failing even though it appears to work in the logs*

https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Common-trunk/



Re: Switching to Java 7

2014-12-08 Thread Sheena O'Connell
Re g cloud, as far as I can tell what they want is a writeup for existing
services. So we get healthstat running and then write up what we can do in
terms of data flow management and visualisation; then that should cut it.

It would be nice to put the calculator on there as software as a service,
but it needs much more polish.
On 8 Dec 2014 02:59, Ted Yu yuzhih...@gmail.com wrote:

 Looking at the test failures of
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/ which uses jdk 1.7:

 e.g.

 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameFileAndDeleteSnapshot/

 java.lang.OutOfMemoryError: Java heap space
 at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:120)
 at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:68)
 at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
 at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
 at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
 at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
 at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)


 Should more heap be given to the tests?


 Cheers


 On Sun, Dec 7, 2014 at 2:09 PM, Steve Loughran ste...@hortonworks.com
 wrote:

  The latest migration status:
 
    if the Jenkins builds are happy then the patch will go in - I'll do that
   Monday morning, 10:00 UTC
 
  https://builds.apache.org/view/H-L/view/Hadoop/
 
  Getting Jenkins to work has been surprisingly difficult... it turns out
  that those builds which we thought were java7 or java8 weren't, as
 setting
export JAVA_HOME=${TOOLS_HOME}/java/latest
 
  meant that they picked up a java 6 machine
 
  Now the trunk precommit/postcommit and scheduled branches should have
  export JAVA_HOME=${TOOLS_HOME}/java/jdk1.7.0_55
 
  the Java 8 builds have more changes
 
  export JAVA_HOME=${TOOLS_HOME}/java/jdk1.8.0
  export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
  and  -Dmaven.javadoc.skip=true  on the mvn builds
 
  without these, javadoc fails and test runs go OOM.
 
  We need to have something resembling the nightly build env setup again: a
  git/svn-stored file with something for Java 8 alongside the normal env
 vars.
 



Re: Switching to Java 7

2014-12-08 Thread Sheena O'Connell
PS. I made app.py and stuff...
On 8 Dec 2014 02:59, Ted Yu yuzhih...@gmail.com wrote:

 Looking at the test failures of
 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/ which uses jdk 1.7:

 e.g.

 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameFileAndDeleteSnapshot/

 java.lang.OutOfMemoryError: Java heap space
 at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:120)
 at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:68)
 at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
 at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
 at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
 at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
 at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)


 Should more heap be given to the tests?


 Cheers


 On Sun, Dec 7, 2014 at 2:09 PM, Steve Loughran ste...@hortonworks.com
 wrote:

  The latest migration status:
 
    if the Jenkins builds are happy then the patch will go in - I'll do that
   Monday morning, 10:00 UTC
 
  https://builds.apache.org/view/H-L/view/Hadoop/
 
  Getting Jenkins to work has been surprisingly difficult... it turns out
  that those builds which we thought were java7 or java8 weren't, as
 setting
export JAVA_HOME=${TOOLS_HOME}/java/latest
 
  meant that they picked up a java 6 machine
 
  Now the trunk precommit/postcommit and scheduled branches should have
  export JAVA_HOME=${TOOLS_HOME}/java/jdk1.7.0_55
 
  the Java 8 builds have more changes
 
  export JAVA_HOME=${TOOLS_HOME}/java/jdk1.8.0
  export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
  and  -Dmaven.javadoc.skip=true  on the mvn builds
 
  without these, javadoc fails and test runs go OOM.
 
  We need to have something resembling the nightly build env setup again: a
  git/svn-stored file with something for Java 8 alongside the normal env
 vars.
 



[jira] [Created] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11363:
---

 Summary: Hadoop maven surefire-plugin uses must set heap size
 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran


Some of the hadoop tests (especially HBase) are running out of memory on Java 
8, due to there not being enough heap for them.

The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}; it needs to 
be explicitly set as an argument to the test run.

I propose

# {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
for surefire builds as properties
# modules which run tests use these values for their memory & timeout settings.
# these modules should also set the surefire version they want to use
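
For a sense of shape, a sketch with illustrative property names (not the 
committed ones):

    <properties>
      <surefire.heap.size>2048m</surefire.heap.size>
      <surefire.fork.timeout>900</surefire.fork.timeout>
    </properties>
    ...
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- argLine is passed to the forked test JVM; MAVEN_OPTS is not -->
        <argLine>-Xmx${surefire.heap.size}</argLine>
        <forkedProcessTimeoutInSeconds>${surefire.fork.timeout}</forkedProcessTimeoutInSeconds>
      </configuration>
    </plugin>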



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Switching to Java 7

2014-12-08 Thread Steve Loughran
On 8 December 2014 at 14:58, Ted Yu yuzhih...@gmail.com wrote:

 Looks like there was still an OutOfMemoryError:


 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1964/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameDirAcrossSnapshottableDirs/

 FYI


we need to set the surefire memory usage explicitly; it doesn't pick up the
maven JVM values itself.

I've created a JIRA for this.
https://issues.apache.org/jira/browse/HADOOP-11363

I'm not going to do this myself as I've had enough of Hadoop POMs and
Jenkins build configs for the rest of 2014. I'll review patches, though.



Build failed in Jenkins: Hadoop-Common-trunk #1334

2014-12-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1334/changes

Changes:

[stevel] HADOOP-10530 Make hadoop build on Java7+ only (stevel)

--
[...truncated 8210 lines...]
[INFO] Compiling 14 source files to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/classes
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-web-xmls) @ hadoop-kms ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/test-classes/kms-webapp
 [copy] Copying 1 file to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/test-classes/kms-webapp
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-kms ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-kms ---
[INFO] Compiling 6 source files to 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hadoop-kms ---
[WARNING] The parameter forkMode is deprecated since version 2.14. Use 
forkCount and reuseForks instead.
[INFO] Surefire report directory: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/surefire-reports

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.crypto.key.kms.server.TestKMS
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.812 sec - 
in org.apache.hadoop.crypto.key.kms.server.TestKMS
Running org.apache.hadoop.crypto.key.kms.server.TestKMSAudit
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.555 sec - in 
org.apache.hadoop.crypto.key.kms.server.TestKMSAudit
Running org.apache.hadoop.crypto.key.kms.server.TestKeyAuthorizationKeyProvider
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.5 sec - in 
org.apache.hadoop.crypto.key.kms.server.TestKeyAuthorizationKeyProvider
Running org.apache.hadoop.crypto.key.kms.server.TestKMSWithZK
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.894 sec - in 
org.apache.hadoop.crypto.key.kms.server.TestKMSWithZK
Running org.apache.hadoop.crypto.key.kms.server.TestKMSACLs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.682 sec - in 
org.apache.hadoop.crypto.key.kms.server.TestKMSACLs

Results :

Tests run: 29, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-war-plugin:2.1:war (default-war) @ hadoop-kms ---
[INFO] Packaging webapp
[INFO] Assembling webapp [hadoop-kms] in 
[https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/kms]
[INFO] Processing war project
[INFO] Copying webapp resources 
[https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/src/main/webapp]
[INFO] Building jar: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/kms/WEB-INF/lib/hadoop-kms-3.0.0-SNAPSHOT.jar
[INFO] Webapp assembled in [149 msecs]
[INFO] Building war: 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-kms/target/kms.war
[INFO] WEB-INF/web.xml already added, skipping
[INFO] 
[INFO] --- maven-site-plugin:3.3:site (docs) @ hadoop-kms ---
[INFO] configuring report plugin 
org.apache.maven.plugins:maven-dependency-plugin:2.4
[INFO] 
[INFO] >>> maven-dependency-plugin:2.4:analyze-report (report:analyze-report) @ 
hadoop-kms >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-kms ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-kms ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-kms 
---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-web-xmls) @ hadoop-kms ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-kms ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-kms ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] <<< maven-dependency-plugin:2.4:analyze-report (report:analyze-report) @ 
hadoop-kms <<<
[WARNING] No project URL defined - decoration links will not be relativized!
[INFO] Rendering site with org.apache.maven.skins:maven-stylus-skin:jar:1.2 
skin.
[INFO] 

[jira] [Resolved] (HADOOP-11360) GraphiteSink reports data with wrong timestamp

2014-12-08 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11360.
---
Resolution: Duplicate

Thanks for reporting this JIRA, Kamil!
This looks like a duplicate of 
https://issues.apache.org/jira/browse/HADOOP-11182, which was fixed in Hadoop 
2.6.0. If not, please re-open this JIRA.

 GraphiteSink reports data with wrong timestamp
 --

 Key: HADOOP-11360
 URL: https://issues.apache.org/jira/browse/HADOOP-11360
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Kamil Gorlo

 I've tried to use GraphiteSink with the metrics2 system, but it looks like the 
 timestamp sent to Graphite is refreshed only rarely (approximately every 2 
 minutes) no matter how small the period is set.
 Here is my configuration:
 *.sink.graphite.server_host=graphite-relay.host
 *.sink.graphite.server_port=2013
 *.sink.graphite.metrics_prefix=graphite.warehouse-data-1
 *.period=10
 nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink
 And here is the dumped network traffic to graphite-relay.host (only selected 
 lines; each line appears every 10 seconds, as the period suggests):
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  3 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  4 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  3 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  1 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  1 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041728
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041728
 As you can see, the AllocatedContainers value is refreshed every 10 seconds, 
 but the timestamp is not.
 It looks like the problem is a level above (in the classes providing the 
 MetricsRecord, because the timestamp value is taken from the MetricsRecord 
 object passed as an argument to the putMetrics method in the sink 
 implementation), which implies that every sink will have the same problem. 
 Maybe I misconfigured something?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Solaris Port

2014-12-08 Thread Colin McCabe
Hi Malcolm,

It's great that you are going to contribute!  Please make your patches
against trunk.

2.2 is fairly old at this point.  It hasn't been the focus of
development in more than a year.

We don't use github or pull requests.

Check the section on http://wiki.apache.org/hadoop/HowToContribute
that talks about "Contributing your work". Excerpt:
"Finally, patches should be attached to an issue report in Jira via
the Attach File link on the issue's Jira. Please add a comment that
asks for a code review following our code review checklist. Please
note that the attachment should be granted license to ASF for
inclusion in ASF works (as per the Apache License §5)."

As this says, you attach the patch file to a JIRA that you have
created, and then hit "Submit Patch".

I don't think a branch is required for this work since it is just
build fixes, right?

best,
Colin


On Mon, Dec 8, 2014 at 3:30 AM, malcolm malcolm.kaval...@oracle.com wrote:
 I have ported the Hadoop native libraries to Solaris 11 (both SPARC and Intel).
 Oracle have agreed to release my changes to the community so that Solaris
 platforms can benefit.
 Reading the HowToContribute and GitandHadoop documents, I am not 100% clear
 on how to get my changes into the main tree. I am also a Git(hub) newbie,
 and was using svn previously.

 Please let me know if I am going down the correct path:

 1. I forked Hadoop on Github and downloaded a clone to my development
 machine.

 2. The changes I made were to 2.2.0. Can I still add changes to this branch
 and hopefully get them accepted, or must I migrate my changes to 2.6? (On
 the main Hadoop download page, 2.2 is still listed as the GA version.)

 3. I understand that I should create a new branch for my changes, and then
 generate pull requests after uploading them to Github.

 4. I also registered at JIRA, on the understanding that I need to generate a
 JIRA number for my changes and to name my branch accordingly?

 Does all this make sense?

 Thanks,
 Malcolm




Re: Switching to Java 7

2014-12-08 Thread Colin McCabe
On Mon, Dec 8, 2014 at 7:46 AM, Steve Loughran ste...@hortonworks.com wrote:
 On 8 December 2014 at 14:58, Ted Yu yuzhih...@gmail.com wrote:

 Looks like there was still an OutOfMemoryError:


 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1964/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameDirAcrossSnapshottableDirs/


 Well, I'm going to ignore that for now as it's a Java 8 problem, surfacing
 this weekend once the builds were actually switched to Java 8. Memory size
 tuning can continue.

 I have now committed the Java 7+ only patch to branch-2 and up: new code
 does not have to worry about Java 6 compatibility unless it is planned for
 backporting to Hadoop 2.6 or earlier. Having written some Java 7 code, the <>
 constructor for typed classes is a convenience; the multi-catch clauses are
 more useful, as they eliminate duplicate code in exception handling.

 Getting this patch in has revealed that the Jenkins builds of Hadoop are
 (a) a bit of a mess and (b) prone to race conditions related to the m2
 repository if >1 project builds simultaneously. The way the nightly builds
 are staggered means this doesn't usually surface, but it may show up during
 precommit/postcommit builds.

It would be nice if we could have a separate .m2 directory per test executor.

It seems like that would eliminate these race conditions once and for
all, at the cost of storing a few extra jars (proportional to the # of
simultaneous executors).
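
Maven already supports pointing a build at a private repository via the
standard maven.repo.local property, so each executor could get its own, e.g.:

    mvn clean install -Dmaven.repo.local=$WORKSPACE/.m2/repository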

best,
Colin



 The switch to Java 7 as the underlying JDK appears to be triggering
 failures; these are things that the projects themselves are going to have
 to look at.


 This, then, is where we are with builds right now. This is not a consequence
 of the changes to the POM; this list predates that patch. This is Jenkins
 running Hadoop builds and tests with Java 7u55


 *Working: *

 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-branch2/
 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Yarn-trunk/

 *failing tests*

 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Common-2-Build/
 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk/
 https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-YARN-Build/
 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Mapreduce-trunk/

 *failing tests on Java 8 (may include OOM)*

 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-common-trunk-Java8/
 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/
 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Mapreduce-trunk-Java8/
 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Yarn-trunk-Java8/


 *failing with maven internal dependency problems*

 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-trunk-Commit/


 *failing even though it appears to work in the logs*

 https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Common-trunk/



Re: Switching to Java 7

2014-12-08 Thread Steve Loughran
On 8 December 2014 at 19:58, Colin McCabe cmcc...@alumni.cmu.edu wrote:

 It would be nice if we could have a separate .m2 directory per test
 executor.

 It seems like that would eliminate these race conditions once and for
 all, at the cost of storing a few extra jars (proportional to the # of
 simultaneous executors)

 best,
 Colin


all we should really need to do is force a unique build ID on every
build so that it's not 3.0.0-SNAPSHOT that is needed. We'd have the
problem of repository-cruft bloat instead.

Returning to the build, hadoop-hdfs is going OOM on test runs even on Java
7; there is a patch which should fix this:

https://issues.apache.org/jira/browse/HADOOP-11363

Jenkins is down right now & I can't verify; could someone have a look at this
and apply it if they are happy.



Re: Thinking ahead to hadoop-2.7

2014-12-08 Thread Steve Loughran
On 8 December 2014 at 19:48, Colin McCabe cmcc...@alumni.cmu.edu wrote:

 Are there a lot of open JDK7
 issues that would require a release to straighten out?


I don't think so; the 2.6 release was tested pretty aggressively on JDK 7,
and solely on it for Windows. Pushing out a 2.7 release would be more
symbolic than of use to people.

There's one exception: the fix for Java 8 security. I also think there
may be Java 8 test failures; these need to be identified and fixed...
which would point more to a 2.8 release for that.



[jira] [Created] (HADOOP-11364) [Java 8] Over usage of virtual memory

2014-12-08 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-11364:
--

 Summary: [Java 8] Over usage of virtual memory
 Key: HADOOP-11364
 URL: https://issues.apache.org/jira/browse/HADOOP-11364
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


In our Hadoop 2 + Java 8 effort, we found that a few jobs are being killed by 
Hadoop due to excessive virtual memory allocation, even though their physical 
memory usage is low.

The most common error message is: "Container [pid=??,containerID=container_??] 
is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB 
physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing container."

We see this problem for MR jobs as well as for Spark drivers/executors.
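For anyone hitting this before a fix lands, the usual mitigations are the two 
NodeManager settings sketched below. This is a minimal sketch, assuming 
standard YARN configuration; in practice these keys belong in yarn-site.xml, 
and they are workarounds rather than the fix this issue is after.

{code}
import org.apache.hadoop.conf.Configuration;

public class VmemWorkaround {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Raise the virtual-to-physical ratio; the "2.1 GB" limit above is the
    // default ratio of 2.1 applied to the 1 GB physical memory allocation.
    conf.setFloat("yarn.nodemanager.vmem-pmem-ratio", 4.0f);
    // Alternatively, disable the virtual memory check entirely.
    conf.setBoolean("yarn.nodemanager.vmem-check-enabled", false);
    System.out.println("vmem-pmem-ratio = "
        + conf.getFloat("yarn.nodemanager.vmem-pmem-ratio", 2.1f));
  }
}
{code}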







[jira] [Created] (HADOOP-11365) Use Java 7's HttpCookie class to handle Secure and HttpOnly flag

2014-12-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-11365:
---

 Summary: Use Java 7's HttpCookie class to handle Secure and 
HttpOnly flag
 Key: HADOOP-11365
 URL: https://issues.apache.org/jira/browse/HADOOP-11365
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu


HADOOP-10379 and HADOOP-10710 introduced support for the Secure and HttpOnly 
flags for the hadoop auth cookie. The current implementation includes custom 
code so that it can remain compatible with Java 6. Since Hadoop has moved to 
Java 7, this code can be replaced by Java's HttpCookie class.
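For illustration, a minimal sketch of what the replacement could look like 
using java.net.HttpCookie; the class name and header value below are 
illustrative, not taken from the actual patch.

{code}
import java.net.HttpCookie;
import java.util.List;

public class HttpCookieSketch {
  public static void main(String[] args) {
    // Since Java 7, HttpCookie understands the Secure and HttpOnly
    // attributes natively, so no hand-rolled parsing is needed.
    List<HttpCookie> cookies = HttpCookie.parse(
        "Set-Cookie: hadoop.auth=\"u=alice\"; Secure; HttpOnly");
    for (HttpCookie c : cookies) {
      System.out.println(c.getName()
          + " secure=" + c.getSecure()
          + " httpOnly=" + c.isHttpOnly());
    }
  }
}
{code}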





[jira] [Created] (HADOOP-11366) Fix findbug warnings after move to Java 7

2014-12-08 Thread Li Lu (JIRA)
Li Lu created HADOOP-11366:
--

 Summary: Fix findbug warnings after move to Java 7
 Key: HADOOP-11366
 URL: https://issues.apache.org/jira/browse/HADOOP-11366
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


After the move to Java 7, there are 65 findbugs warnings in the Hadoop Common 
codebase. We may want to fix these. 





Re: Solaris Port

2014-12-08 Thread malcolm

Hi Colin,

A short summary of my changes are as follows:

- Native C source files: added 5,  modified 6, requiring also changes to 
CMakeLists.txt. Of course, all changes are ifdeffed for Solaris 
appropriately and new files, are prefixed with solaris_ as well.


For example, Solaris no longer has errno or errlist, which are used quite a 
lot in Hadoop native code. I could have replaced all calls with strerror 
instead, which would be compatible with Linux; however, in the interests of 
making minimal changes, I recreated these tables from a running Solaris 
machine and added them as new files instead.


Another issue is that Solaris doesn't have the timeout option for 
sockets, so I had to write my own solaris_read routine with a timeout and 
add it to DomainSocket.c. A few issues with lz4 on SPARC needed 
modification, along with some other OS-specific issues: getgrouplist, 
container-executor (from YARN).


- Some very minor changes were made to some Java source files (mainly 
tests, to get them to pass on Solaris).


The above changes were made against 2.2; I will recheck everything against 
the latest trunk, since some fixes may no longer be needed.


I have generated a single patch file with all the changes. Perhaps it would 
be better to file multiple JIRAs, perhaps grouped, one per issue? Or should 
I file a JIRA for each modified source file?


Thank you,
Malcolm

On 12/08/2014 09:53 PM, Colin McCabe wrote:

Hi Malcolm,

It's great that you are going to contribute!  Please make your patches
against trunk.

2.2 is fairly old at this point.  It hasn't been the focus of
development in more than a year.

We don't use github or pull requests.

Check the section on http://wiki.apache.org/hadoop/HowToContribute
that talks about "Contributing your work".  Excerpt:
"Finally, patches should be attached to an issue report in Jira via
the Attach File link on the issue's Jira. Please add a comment that
asks for a code review following our code review checklist. Please
note that the attachment should be granted license to ASF for
inclusion in ASF works (as per the Apache License §5)."

As this says, you attach the patch file to a JIRA that you have
created, and then hit "Submit Patch".

I don't think a branch is required for this work since it is just
build fixes, right?

best,
Colin


On Mon, Dec 8, 2014 at 3:30 AM, malcolm malcolm.kaval...@oracle.com wrote:

I have ported the Hadoop native libraries to Solaris 11 (both SPARC and Intel).
Oracle has agreed to release my changes to the community so that Solaris
platforms can benefit.
Reading the HowToContribute and GitandHadoop documents, I am not 100% clear
on how to get my changes into the main tree. I am also a Git(hub) newbie,
and was using svn previously.

Please let me know if I am on the correct path:

1. I forked Hadoop on Github and downloaded a clone to my development
machine.

2. The changes I made were against 2.2.0. Can I still add changes to this
branch and hopefully get them accepted, or must I migrate my changes to 2.6?
(On the main Hadoop download page, 2.2 is still listed as the GA version.)

3. I understand that I should create a new branch for my changes, and then
generate pull requests after uploading them to Github.

4. I also registered at JIRA, on the understanding that I need to create a
JIRA issue for my changes and name my branch accordingly?

Does all this make sense?

Thanks,
Malcolm






Upgrading findbugs

2014-12-08 Thread Haohui Mai
Hi,

The recent changes for moving to Java 7 trigger a bug in findbugs (
http://sourceforge.net/p/findbugs/bugs/918), which causes all pre-commit
runs (e.g., HADOOP-11287) to fail.

The current version of findbugs (1.3.9) used by Hadoop was released in 2009.
Given that:

(1) The bug that we hit is fixed by a later version of findbugs.
(2) A newer findbugs (3.0.0) is required to analyze Hadoop when it is compiled
against Java 8.
(3) Newer findbugs releases are capable of catching more bugs. :-)

Is it a good time to consider upgrading findbugs, which would give us better
tools for ensuring the quality of the code base?

I ran findbugs 3.0.0 against trunk today. It reported 111 warnings for
hadoop-common, 44 for HDFS and 40+ for YARN. Many of them are possible
NPEs, resource leaks, and ignored exceptions, which are indeed bugs and
worth addressing.
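To make the kind of warning concrete, here is an illustrative example (made
up for this mail, not taken from the actual report) of a resource leak that
findbugs flags, together with the Java 7 try-with-resources fix:

{code}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class LeakDemo {
  // findbugs flags this (e.g. OBL_UNSATISFIED_OBLIGATION): the reader is
  // never closed if readLine() throws.
  static String firstLineLeaky(String path) throws IOException {
    BufferedReader reader = new BufferedReader(new FileReader(path));
    String line = reader.readLine();
    reader.close();
    return line;
  }

  // The Java 7 try-with-resources form closes the reader on all paths.
  static String firstLineFixed(String path) throws IOException {
    try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
      return reader.readLine();
    }
  }
}
{code}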

However, one issue that needs to be considered is how to deal with the
additional warnings reported by the newer findbugs without breaking the
Jenkins pre-commit runs.

Personally I can see three possible routes if we decide to upgrade findbugs:

(1) Fix all warnings before upgrading to newer findbugs.
(2) Add all new warnings to the exclude list and fix them slowly.
(3) Update test-patch.sh to make sure that new code won't introduce any new
findbugs warnings.

I proposed upgrading to findbugs 2.0.2 and fixing the new warnings in
HADOOP-10476, which dates back to April 2014. I volunteer to
accelerate the effort if it is required.

Thoughts?

Regards,
Haohui



Re: Upgrading findbugs

2014-12-08 Thread Karthik Kambatla
Thanks for initiating this, Haohui. +1 to upgrading findbugs version.

Inline.

On Mon, Dec 8, 2014 at 9:57 PM, Haohui Mai h...@hortonworks.com wrote:

 Hi,

 The recent changes for moving to Java 7 trigger a bug in findbugs (
 http://sourceforge.net/p/findbugs/bugs/918), which causes all pre-commit
 runs (e.g., HADOOP-11287) to fail.

 The current version of findbugs (1.3.9) used by Hadoop was released in 2009.
 Given that:

 (1) The bug that we hit is fixed by a later version of findbugs.
 (2) A newer findbugs (3.0.0) is required to analyze Hadoop when it is compiled
 against Java 8.
 (3) Newer findbugs releases are capable of catching more bugs. :-)

 Is it a good time to consider upgrading findbugs, which would give us better
 tools for ensuring the quality of the code base?

 I ran findbugs 3.0.0 against trunk today. It reported 111 warnings for
 hadoop-common, 44 for HDFS and 40+ for YARN. Many of them are possible
 NPEs, resource leaks, and ignored exceptions, which are indeed bugs and
 worth addressing.

 However, one issue that needs to be considered is how to deal with the
 additional warnings reported by the newer findbugs without breaking the
 Jenkins pre-commit runs.

 Personally I can see three possible routes if we decide to upgrade
 findbugs:

 (1) Fix all warnings before upgrading to newer findbugs.


This might take a while. We might want to use the newer findbugs sooner?


 (2) Add all new warnings to the exclude list and fix them slowly.


I have my doubts about how soon we would fix these warnings unless we make
the associated JIRAs (assuming we have one per exclude) blockers for the next
release. A findbugs "Fix It" day would be ideal to get this done.


 (3) Update test-patch.sh to make sure that new code won't introduce any new
 findbugs warnings.


This seems the best, especially if test-patch.sh shows the warnings but
doesn't -1 unless there are new findbugs warnings. That way, the contributor
can at least choose to fix related warnings.



 I proposed upgrading to findbugs 2.0.2 and fixing the new warnings in
 HADOOP-10476, which dates back to April 2014. I volunteer to
 accelerate the effort if it is required.


 Thoughts?

 Regards,
 Haohui





-- 
Karthik Kambatla
Software Engineer, Cloudera Inc.

http://five.sentenc.es


[jira] [Created] (HADOOP-11367) Fix warnings from findbugs 3.0 in hadoop-streaming

2014-12-08 Thread Li Lu (JIRA)
Li Lu created HADOOP-11367:
--

 Summary: Fix warnings from findbugs 3.0 in hadoop-streaming
 Key: HADOOP-11367
 URL: https://issues.apache.org/jira/browse/HADOOP-11367
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


When findbugs 3.0 is run locally, new warnings are generated. This JIRA aims 
to address the new warnings in hadoop-streaming. 





[jira] [Created] (HADOOP-11368) Fix OutOfMemory caused due to leaked trustStore reloader thread in KMSClientProvider

2014-12-08 Thread Arun Suresh (JIRA)
Arun Suresh created HADOOP-11368:


 Summary: Fix OutOfMemory caused due to leaked trustStore reloader 
thread in KMSClientProvider
 Key: HADOOP-11368
 URL: https://issues.apache.org/jira/browse/HADOOP-11368
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.5.0
Reporter: Arun Suresh
Assignee: Arun Suresh


When a {{KMSClientProvider}} is initialized in _ssl_ mode, it initializes an 
{{SSLFactory}} object. This in turn creates an instance of 
{{ReloadingX509TrustManager}} which, on initialization, starts a trust store 
reloader thread.

It has been noticed that over time, as a number of short-lived 
{{KMSClientProvider}} instances are created and destroyed, the trust store 
reloader threads are not interrupted/killed and remain in TIMED_WAITING state. 
A thread dump shows multiple:
{noformat}
Truststore reloader thread daemon prio=10 tid=0x7fb1cf942800 nid=0x4e99 
waiting on condition [0x7fb0485f5000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:189)
at java.lang.Thread.run(Thread.java:662)

   Locked ownable synchronizers:
- None
{noformat}
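A minimal sketch of the direction a fix could take, assuming the owner of the 
{{SSLFactory}} gains a close path; the wrapper class below is hypothetical 
and is not the actual patch.

{code}
import java.io.Closeable;
import java.io.IOException;

import org.apache.hadoop.security.ssl.SSLFactory;

// Hypothetical wrapper: tie the SSLFactory lifecycle to the client so the
// reloader thread is stopped when the client goes away.
public class ClosingSslClient implements Closeable {
  private final SSLFactory sslFactory;

  public ClosingSslClient(SSLFactory sslFactory) {
    this.sslFactory = sslFactory;
  }

  @Override
  public void close() throws IOException {
    if (sslFactory != null) {
      // SSLFactory.destroy() tears down the ReloadingX509TrustManager,
      // stopping its reloader thread, so short-lived instances no longer
      // accumulate TIMED_WAITING threads.
      sslFactory.destroy();
    }
  }
}
{code}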
 



