[jira] [Created] (HADOOP-8603) Test failures with Container .. is running beyond virtual memory limits
Ilya Katsov created HADOOP-8603:
-----------------------------------

             Summary: Test failures with Container .. is running beyond virtual memory limits
                 Key: HADOOP-8603
                 URL: https://issues.apache.org/jira/browse/HADOOP-8603
             Project: Hadoop Common
          Issue Type: Bug
          Components: test
    Affects Versions: 0.23.3
         Environment: CentOS 6.2
            Reporter: Ilya Katsov

Tests org.apache.hadoop.tools.TestHadoopArchives.{testRelativePath,testPathWithSpaces} fail with the following message:

{code}
Container [pid=7785,containerID=container_1342495768864_0001_01_01] is running beyond virtual memory limits. Current usage: 143.6mb of 1.5gb physical memory used; 3.4gb of 3.1gb virtual memory used. Killing container.
Dump of the process-tree for container_1342495768864_0001_01_01 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 7797 7785 7785 7785 (java) 573 38 3517018112 36421 /usr/java/jdk1.6.0_33/jre/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.mapreduce.container.log.dir=/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01 -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster
|- 7785 7101 7785 7785 (bash) 1 1 108605440 332 /bin/bash -c /usr/java/jdk1.6.0_33/jre/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.mapreduce.container.log.dir=/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01 -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01/stdout 2>/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01/stderr
{code}

Is it related to https://issues.apache.org/jira/browse/MAPREDUCE-3933?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
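For readers hitting the same failure: if the problem is the NodeManager's virtual-memory check itself rather than an actual leak, a common mitigation is to relax the check in yarn-site.xml. The property names below are the standard YARN ones, but whether they are honored depends on the YARN version in use, and the values are purely illustrative:

{code}
<!-- yarn-site.xml: illustrative values, not recommendations -->
<property>
  <!-- Allow more virtual memory per unit of physical memory
       (the stock default ratio is 2.1) -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>5</value>
</property>
<property>
  <!-- Or disable the virtual-memory check entirely, where supported -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
{code}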
Shifting to Java 7. Is it a good choice?
Hi,

I have to tweak a few classes, and for this I need a few packages that are only present in Java 7, such as java.nio.file. So I was wondering: can I shift my Hadoop development environment to Java 7? Would this break anything?

Thanks
--
With Regards,
Pavan Kulkarni
Re: Shifting to Java 7. Is it a good choice?
> I have to tweak a few classes, and for this I need a few packages that are only present in Java 7, such as java.nio.file. So I was wondering: can I shift my Hadoop development environment to Java 7? Would this break anything?

openjdk 7 works, but NIO async file access is slower than traditional I/O.
Re: Shifting to Java 7. Is it a good choice?
Oracle is dropping Java 6 support by the end of the year, so there is likely to be a big shift to Java 7 before then. Currently Hadoop officially supports Java 6, so unless there is an official change of position you cannot use Java 7-specific APIs if you want to check your code into Hadoop. Hadoop itself should currently work on 7, like Radim said, and if you are building something on top of Hadoop that is fine, but dropping support for Java 6 will require some discussion on the mailing lists.

--Bobby Evans

On 7/17/12 2:35 PM, Radim Kolar h...@filez.com wrote:

> I have to tweak a few classes, and for this I need a few packages that are only present in Java 7, such as java.nio.file. So I was wondering: can I shift my Hadoop development environment to Java 7? Would this break anything?
>
> openjdk 7 works, but NIO async file access is slower than traditional I/O.
Re: Shifting to Java 7. Is it a good choice?
That was really helpful.

@Robert: No, I am just working on a research project; I am not checking the code into Hadoop.

Thanks, Radim and Robert.

On Tue, Jul 17, 2012 at 3:49 PM, Robert Evans ev...@yahoo-inc.com wrote:

> Oracle is dropping Java 6 support by the end of the year, so there is likely to be a big shift to Java 7 before then. Currently Hadoop officially supports Java 6, so unless there is an official change of position you cannot use Java 7-specific APIs if you want to check your code into Hadoop. Hadoop itself should currently work on 7, like Radim said, and if you are building something on top of Hadoop that is fine, but dropping support for Java 6 will require some discussion on the mailing lists.
>
> --Bobby Evans
>
> On 7/17/12 2:35 PM, Radim Kolar h...@filez.com wrote:
>
>> I have to tweak a few classes, and for this I need a few packages that are only present in Java 7, such as java.nio.file. So I was wondering: can I shift my Hadoop development environment to Java 7? Would this break anything?
>>
>> openjdk 7 works, but NIO async file access is slower than traditional I/O.

--
With Regards,
Pavan Kulkarni
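For reference, the kind of java.nio.file usage the thread is about only compiles from Java 7 onwards. A minimal self-contained sketch (the class and method names here are mine, not from the thread):

```java
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

// Hypothetical demo: round-trips lines through a temp file using the
// java.nio.file.Files API, which does not exist before Java 7.
public class NioDemo {
    public static List<String> roundTrip(List<String> lines) throws IOException {
        Path tmp = Files.createTempFile("nio-demo", ".txt");
        Files.write(tmp, lines, Charset.forName("UTF-8"));
        List<String> back = Files.readAllLines(tmp, Charset.forName("UTF-8"));
        Files.delete(tmp);
        return back;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip(Arrays.asList("a", "b"))); // prints [a, b]
    }
}
```

Code like this is fine on top of Hadoop, but (per Robert's point) could not be checked into Hadoop itself while the project officially targets Java 6.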
[jira] [Resolved] (HADOOP-8557) Core Test failed in Jenkins for patch pre-commit
[ https://issues.apache.org/jira/browse/HADOOP-8557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HADOOP-8557.
---------------------------------
    Resolution: Duplicate

Resolving as a duplicate of HADOOP-8537.

> Core Test failed in Jenkins for patch pre-commit
> ------------------------------------------------
>                 Key: HADOOP-8557
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8557
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 2.0.0-alpha
>            Reporter: Junping Du
>            Priority: Blocker
>
> In the Jenkins PreCommit build history (https://builds.apache.org/job/PreCommit-HADOOP-Build/), the following tests failed for all recent patches (builds 1164, 1166, 1168, 1170):
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover
> org.apache.hadoop.ha.TestZKFailoverController.testOneOfEverything
> org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testOneBlock
> org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testOneBlockPlusOneEntry
> org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testThreeBlocks
> org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays.testOneBlock
> org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays.testOneBlockPlusOneEntry
> org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays.testThreeBlocks
[jira] [Created] (HADOOP-8604) conf/* files overwritten at Hadoop compilation
Robert Grandl created HADOOP-8604:
-------------------------------------

             Summary: conf/* files overwritten at Hadoop compilation
                 Key: HADOOP-8604
                 URL: https://issues.apache.org/jira/browse/HADOOP-8604
             Project: Hadoop Common
          Issue Type: Bug
          Components: conf
    Affects Versions: 1.0.3
            Reporter: Robert Grandl
            Priority: Minor

Whenever I compile Hadoop from the terminal as "ant compile jar run", all the conf/* files are overwritten. I am not sure whether some of them are supposed to be regenerated, but at least hadoop-env.sh, mapred-site.xml, core-site.xml, hdfs-site.xml, masters, and slaves should remain. Otherwise I am forced to back up the content and restore it after every compilation.
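Until the build stops touching conf/*, the backup-and-restore the reporter describes can be scripted. The sketch below simulates the pattern in a scratch directory; in a real run the overwrite step would be the actual `ant compile jar` in the Hadoop source tree, and the file list would cover all the site-specific files named above:

```shell
# Simulate backup/restore around a build that clobbers conf/*.
workdir=$(mktemp -d)
mkdir "$workdir/conf"
echo "my settings" > "$workdir/conf/core-site.xml"

# 1. Back up the hand-edited conf files before building.
cp -a "$workdir/conf" "$workdir/conf.bak"

# 2. The build overwrites conf/* (simulated by a stand-in write here).
echo "defaults" > "$workdir/conf/core-site.xml"

# 3. Restore the site-specific files afterwards.
for f in core-site.xml; do
  cp "$workdir/conf.bak/$f" "$workdir/conf/$f"
done

cat "$workdir/conf/core-site.xml"   # prints "my settings"
```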
[jira] [Created] (HADOOP-8605) TestReflectionUtils.testCacheDoesntLeak() does not demonstrate that ReflectionUtils does not leak memory
Yang Jiandan created HADOOP-8605:
------------------------------------

             Summary: TestReflectionUtils.testCacheDoesntLeak() does not demonstrate that ReflectionUtils does not leak memory
                 Key: HADOOP-8605
                 URL: https://issues.apache.org/jira/browse/HADOOP-8605
             Project: Hadoop Common
          Issue Type: Bug
          Components: test
    Affects Versions: 1.0.3
            Reporter: Yang Jiandan

TestReflectionUtils.testCacheDoesntLeak() uses a different URLClassLoader to load TestReflectionUtils$LoadedInChild on each iteration of a for loop:

{code}
int iterations = ...; // very fast, but a bit less reliable - bigger numbers force GC
for (int i = 0; i < iterations; i++) {
  URLClassLoader loader = new URLClassLoader(new URL[0], getClass().getClassLoader());
  Class cl = Class.forName("org.apache.hadoop.util.TestReflectionUtils$LoadedInChild", false, loader);
  Object o = ReflectionUtils.newInstance(cl, null);
  assertEquals(cl, o.getClass());
}
{code}

but every time it loads the same class.
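The reporter's observation can be checked directly: a URLClassLoader constructed with an empty URL array has nothing to define classes from itself, so every lookup delegates to the shared parent and yields the identical Class object. A standalone sketch of that delegation behavior (the class name is mine, for illustration):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Shows why the test above cannot exercise a per-iteration class leak:
// child loaders with no URLs of their own delegate Class.forName to the
// same parent loader, so they all return the very same Class object.
public class DelegationDemo {
    public static boolean loadsSameClass() throws ClassNotFoundException {
        ClassLoader parent = DelegationDemo.class.getClassLoader();
        URLClassLoader a = new URLClassLoader(new URL[0], parent);
        URLClassLoader b = new URLClassLoader(new URL[0], parent);
        Class<?> ca = Class.forName("DelegationDemo", false, a);
        Class<?> cb = Class.forName("DelegationDemo", false, b);
        return ca == cb; // identical object: both lookups were delegated
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadsSameClass()); // prints true
    }
}
```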