[jira] [Created] (HADOOP-9740) FsShell's Text command does not read avro data files stored on HDFS
Allan Yan created HADOOP-9740: - Summary: FsShell's Text command does not read avro data files stored on HDFS Key: HADOOP-9740 URL: https://issues.apache.org/jira/browse/HADOOP-9740 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.5-alpha Reporter: Allan Yan HADOOP-8597 added support for reading avro data files from the FsShell Text command. However, it does not work with files stored on HDFS. Here is the error message:
{code}
$ hadoop fs -text hdfs://localhost:8020/test.avro
-text: URI scheme is not "file"
Usage: hadoop fs [generic options] -text [-ignoreCrc] ...
{code}
The problem is that the File constructor used during AvroFileInputStream initialization does not accept the hdfs:// scheme. There is a unit test, TestTextCommand.java, under the hadoop-common project, but it only tests files in the local file system. I created a similar one under the hadoop-hdfs project using MiniDFSCluster. Please see the attached maven unit test error message with the full stack trace for more details. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
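The failure is reproducible with the JDK alone: java.io.File's URI constructor accepts only the file scheme, so handing it an hdfs:// URI (as the AvroFileInputStream initialization effectively does) throws exactly the message FsShell surfaces. A minimal sketch, independent of Hadoop:

```java
import java.io.File;
import java.net.URI;

public class SchemeDemo {
    public static void main(String[] args) {
        URI hdfsUri = URI.create("hdfs://localhost:8020/test.avro");
        try {
            new File(hdfsUri); // java.io.File only accepts absolute file: URIs
            System.out.println("constructed: " + hdfsUri);
        } catch (IllegalArgumentException e) {
            // Same message the user sees from FsShell:
            System.out.println(e.getMessage()); // → URI scheme is not "file"
        }
    }
}
```

The direction the report implies is to open the stream through the Hadoop FileSystem API, which resolves the URI scheme, rather than through java.io.File.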
[jira] [Resolved] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1
[ https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas resolved HADOOP-8873. - Resolution: Fixed Fix Version/s: 1.3.0 Hadoop Flags: Reviewed I committed the patch to branch-1. Thank you [~ajisakaa]. > Port HADOOP-8175 (Add mkdir -p flag) to branch-1 > > > Key: HADOOP-8873 > URL: https://issues.apache.org/jira/browse/HADOOP-8873 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 1.2.0 >Reporter: Eli Collins >Assignee: Akira AJISAKA > Labels: newbie > Fix For: 1.3.0 > > Attachments: HADOOP-8873-1.patch, HADOOP-8873-2.patch, > HADOOP-8873-3.patch, HADOOP-8873-branch-1-4.patch > > > Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release > to help users transition to the new shell behavior. In Hadoop 2.x mkdir > currently requires the -p option to create parent directories but a program > that specifies it won't work on 1.x since it doesn't support this option.
[jira] [Created] (HADOOP-9739) Branch-1-Win TestNNThroughputBenchmark failed
Xi Fang created HADOOP-9739: --- Summary: Branch-1-Win TestNNThroughputBenchmark failed Key: HADOOP-9739 URL: https://issues.apache.org/jira/browse/HADOOP-9739 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Xi Fang Assignee: Xi Fang Priority: Minor Fix For: 1-win This test failed on both Windows and Linux. Here is the error information:
{noformat}
Testcase: testNNThroughput took 36.221 sec
Caused an ERROR
NNThroughputBenchmark: cannot mkdir D:\condor\condor\build\test\dfs\hosts\exclude
java.io.IOException: NNThroughputBenchmark: cannot mkdir D:\condor\condor\build\test\dfs\hosts\exclude
	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.(NNThroughputBenchmark.java:111)
	at org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1168)
	at org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark.testNNThroughput(TestNNThroughputBenchmark.java:38)
{noformat}
The test may pass on the first run, but fails on the second. The root cause is in the constructor of NNThroughputBenchmark:
{code}
NNThroughputBenchmark(Configuration conf) throws IOException, LoginException {
  ...
  config.set("dfs.hosts.exclude", "${hadoop.tmp.dir}/dfs/hosts/exclude");
  File excludeFile = new File(config.get("dfs.hosts.exclude", "exclude"));
  if (!excludeFile.exists()) {
    if (!excludeFile.getParentFile().mkdirs())
      throw new IOException("NNThroughputBenchmark: cannot mkdir " + excludeFile);
  }
  new FileOutputStream(excludeFile).close();
{code}
If excludeFile.getParentFile() already exists, excludeFile.getParentFile().mkdirs() returns false, and the constructor wrongly treats that as a failure.
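The File.mkdirs() contract is the crux: it returns false both on genuine failure and when the directory already exists, so the benchmark's check only works against a clean build tree. A self-contained sketch of the behavior and the usual fix, treating "already a directory" as success (the ensureDir helper is illustrative, not the actual patch):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MkdirsDemo {
    // Illustrative fix: a false return from mkdirs() is only an error
    // if the path does not end up being a directory.
    static boolean ensureDir(File dir) {
        return dir.mkdirs() || dir.isDirectory();
    }

    public static void main(String[] args) throws IOException {
        File base = Files.createTempDirectory("mkdirs-demo").toFile();
        File dir = new File(base, "dfs/hosts");
        System.out.println("first  mkdirs: " + dir.mkdirs()); // → true (created)
        System.out.println("second mkdirs: " + dir.mkdirs()); // → false (already exists)
        System.out.println("ensureDir:     " + ensureDir(dir)); // → true either way
    }
}
```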
[jira] [Created] (HADOOP-9738) TestDistCh fails
Kihwal Lee created HADOOP-9738: -- Summary: TestDistCh fails Key: HADOOP-9738 URL: https://issues.apache.org/jira/browse/HADOOP-9738 Project: Hadoop Common Issue Type: Bug Components: tools Affects Versions: 2.1.0-beta Reporter: Kihwal Lee
{noformat}
junit.framework.AssertionFailedError: expected: but was:
	at junit.framework.Assert.fail(Assert.java:50)
	at junit.framework.Assert.failNotEquals(Assert.java:287)
	at junit.framework.Assert.assertEquals(Assert.java:67)
	at junit.framework.Assert.assertEquals(Assert.java:74)
	at org.apache.hadoop.tools.TestDistCh.checkFileStatus(TestDistCh.java:197)
	at org.apache.hadoop.tools.TestDistCh.testDistCh(TestDistCh.java:180)
{noformat}
It has been broken since Jun 14.
[jira] [Resolved] (HADOOP-9735) Deprecated configuration property can overwrite non-deprecated property
[ https://issues.apache.org/jira/browse/HADOOP-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao resolved HADOOP-9735. --- Resolution: Not A Problem
> Deprecated configuration property can overwrite non-deprecated property
> Key: HADOOP-9735
> URL: https://issues.apache.org/jira/browse/HADOOP-9735
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0, 2.1.0-beta
> Reporter: Jing Zhao
> Assignee: Jing Zhao
> Priority: Minor
> Attachments: deprecated-conf.test.patch
>
> For the current Configuration implementation, if a conf file contains definitions for both a non-deprecated property and its corresponding deprecated property (e.g., fs.defaultFS and fs.default.name), the latter will overwrite the former. In the fs.defaultFS example, this may prevent client failover from working. It may be better to keep the non-deprecated property's value unchanged.
> Meanwhile, Configuration#getPropertySources may return wrong source information for a deprecated property. E.g., after setting fs.defaultFS, Configuration#getPropertySources("fs.default.name") will return "because fs.defaultFS is deprecated".
[jira] [Created] (HADOOP-9737) JarFinder#getJar should delete the jar file upon destruction of the JVM
Esteban Gutierrez created HADOOP-9737: - Summary: JarFinder#getJar should delete the jar file upon destruction of the JVM Key: HADOOP-9737 URL: https://issues.apache.org/jira/browse/HADOOP-9737 Project: Hadoop Common Issue Type: Bug Components: util Affects Versions: 2.0.0-alpha Reporter: Esteban Gutierrez Once {{JarFinder.getJar()}} is invoked by a client app, it would be really useful to delete the generated JAR when the JVM exits, by calling {{tempJar.deleteOnExit()}}. In order to preserve backwards compatibility, a configuration setting could be implemented, e.g. {{test.build.dir.purge.on.exit}}.
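A sketch of the proposed behavior, assuming the property name from the report. The createTempJar helper below is illustrative, not JarFinder's actual code: register the temp jar with File.deleteOnExit() unless the opt-out setting is present.

```java
import java.io.File;
import java.io.IOException;

public class JarCleanupSketch {
    // Illustrative stand-in for JarFinder#getJar's temp-file handling;
    // "test.build.dir.purge.on.exit" is the setting proposed in the issue.
    static File createTempJar(String prefix) throws IOException {
        File jar = File.createTempFile(prefix, ".jar");
        if (Boolean.parseBoolean(
                System.getProperty("test.build.dir.purge.on.exit", "true"))) {
            // The JVM removes the file on normal shutdown, so client apps
            // no longer leak generated jars across runs.
            jar.deleteOnExit();
        }
        return jar;
    }

    public static void main(String[] args) throws IOException {
        File jar = createTempJar("jarfinder-demo");
        System.out.println("exists during run: " + jar.exists()); // → true
    }
}
```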
[jira] [Created] (HADOOP-9736) incorrect dfs.datanode.address in cluster-setup docs
Raymond Liu created HADOOP-9736: --- Summary: incorrect dfs.datanode.address in cluster-setup docs Key: HADOOP-9736 URL: https://issues.apache.org/jira/browse/HADOOP-9736 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 2.0.4-alpha Reporter: Raymond Liu In the cluster setup doc, http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Configuration_in_Secure_Mode the ports for dfs.datanode.address and dfs.datanode.https.address in secure mode are given as :2003 and :2005. But with ports larger than 1024 the datanode won't start up, since a secure datanode must bind privileged ports (below 1024). The documentation should be changed to other values, say :1003 and :1005.
[jira] [Created] (HADOOP-9735) Deprecated configuration property can overwrite non-deprecated property
Jing Zhao created HADOOP-9735: - Summary: Deprecated configuration property can overwrite non-deprecated property Key: HADOOP-9735 URL: https://issues.apache.org/jira/browse/HADOOP-9735 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.1.0-beta Reporter: Jing Zhao Assignee: Jing Zhao Priority: Minor Attachments: deprecated-conf.test.patch For the current Configuration implementation, if a conf file contains definitions for both a non-deprecated property and its corresponding deprecated property (e.g., fs.defaultFS and fs.default.name), the latter will overwrite the former. In the fs.defaultFS example, this may prevent client failover from working. It may be better to keep the non-deprecated property's value unchanged. Meanwhile, Configuration#getPropertySources may return wrong source information for a deprecated property. E.g., after setting fs.defaultFS, Configuration#getPropertySources("fs.default.name") will return "because fs.defaultFS is deprecated".
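The overwrite can be modeled without Hadoop at all: when deprecated keys are rewritten onto their canonical names at load time, whichever definition the conf file declares last wins, including a deprecated key clobbering fs.defaultFS. A toy model of that behavior (not Configuration's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecationOverwriteDemo {
    // Toy deprecation table, modeled on the fs.default.name -> fs.defaultFS
    // mapping described in the issue; not Hadoop's Configuration code.
    static final Map<String, String> DEPRECATIONS =
            Map.of("fs.default.name", "fs.defaultFS");

    static void load(Map<String, String> conf, String key, String value) {
        // A deprecated key is rewritten to its canonical name before storing,
        // so it silently replaces an earlier non-deprecated definition.
        conf.put(DEPRECATIONS.getOrDefault(key, key), value);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        load(conf, "fs.defaultFS", "hdfs://ns1");          // canonical, loaded first
        load(conf, "fs.default.name", "hdfs://old:8020");  // deprecated, loaded later
        System.out.println(conf.get("fs.defaultFS")); // → hdfs://old:8020
    }
}
```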