[jira] [Commented] (HDFS-3301) Add public waitOnSafeMode API with HdfsUtils
[ https://issues.apache.org/jira/browse/HDFS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257418#comment-13257418 ]

Harsh J commented on HDFS-3301:
-------------------------------

Isn't this a dupe of HDFS-2413, which was done for HBase as well?

Add public waitOnSafeMode API with HdfsUtils
--------------------------------------------
Key: HDFS-3301
URL: https://issues.apache.org/jira/browse/HDFS-3301
Project: Hadoop HDFS
Issue Type: Sub-task
Components: hdfs client
Affects Versions: 3.0.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G

Add a public waitOnSafeMode API to HdfsUtils. I have seen this utility API in HBase, and it uses FSConstants. That class is now deprecated: the SafeModeAction values moved to HdfsConstants, which is also marked private audience. So it will help to add such an API in HdfsUtils itself, avoiding the need to access HdfsConstants from HBase's FSUtils class.

{code}
/**
 * If DFS, check safe mode and if so, wait until we clear it.
 * @param conf configuration
 * @param wait Sleep between retries
 * @throws IOException e
 */
public static void waitOnSafeMode(final Configuration conf, final long wait)
    throws IOException {
  FileSystem fs = FileSystem.get(conf);
  if (!(fs instanceof DistributedFileSystem)) return;
  DistributedFileSystem dfs = (DistributedFileSystem) fs;
  // Make sure dfs is not in safe mode
  while (dfs.setSafeMode(
      org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction.SAFEMODE_GET)) {
    LOG.info("Waiting for dfs to exit safe mode...");
    try {
      Thread.sleep(wait);
    } catch (InterruptedException e) {
      // continue
    }
  }
}
{code}

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
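[Editorial note: the HBase utility above reduces to a generic poll-until-clear loop. A minimal, hedged sketch of that structure follows; the `isInSafeMode` supplier stands in for the `DistributedFileSystem#isInSafeMode()` call that HDFS-2413 added (which avoids the deprecated FSConstants entirely), and the simulated condition in `main` is purely illustrative.]

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class SafeModePoll {

  /**
   * Generic form of the wait loop above: poll until the condition clears,
   * sleeping between retries. In real code the supplier would wrap the
   * HDFS-2413 DistributedFileSystem#isInSafeMode() call.
   */
  public static int waitUntilClear(BooleanSupplier isInSafeMode, long waitMillis)
      throws InterruptedException {
    int polls = 0;
    while (isInSafeMode.getAsBoolean()) {
      polls++;
      Thread.sleep(waitMillis);
    }
    return polls;
  }

  public static void main(String[] args) throws InterruptedException {
    // Simulated cluster that reports safe mode for the first 3 checks.
    AtomicInteger remaining = new AtomicInteger(3);
    int polls = waitUntilClear(() -> remaining.getAndDecrement() > 0, 1L);
    System.out.println("polls=" + polls);
  }
}
```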
[jira] [Updated] (HDFS-2740) Enable the trash feature by default
[ https://issues.apache.org/jira/browse/HDFS-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J updated HDFS-2740:
--------------------------
Resolution: Won't Fix
Status: Resolved (was: Patch Available)

The decision here is to not change this default behavior. However, we can still improve the docs; a JIRA for that is now available as HDFS-3302.

Enable the trash feature by default
-----------------------------------
Key: HDFS-2740
URL: https://issues.apache.org/jira/browse/HDFS-2740
Project: Hadoop HDFS
Issue Type: Wish
Components: hdfs client, name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Labels: newbie
Attachments: hdfs-2740.patch, hdfs-2740.patch

Currently trash is disabled out of the box. I do not think it'd be of high surprise to anyone (but surely a relief when *hit happens) to have trash enabled by default, with the usually recommended period of one day. Thoughts?
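[Editorial note: since the default stays off, individual sites can still opt in. A minimal core-site.xml fragment for the one-day period discussed above; `fs.trash.interval` is measured in minutes, so 1440 is one day, and 0 (the default) disables trash.]

```xml
<!-- core-site.xml: enable the trash feature with a one-day retention. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
```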
[jira] [Commented] (HDFS-3301) Add public waitOnSafeMode API with HdfsUtils
[ https://issues.apache.org/jira/browse/HDFS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257420#comment-13257420 ]

Uma Maheswara Rao G commented on HDFS-3301:
-------------------------------------------

Yep, we can make use of it. I forgot that issue. Thanks a lot, Harsh, for noticing it. I will resolve it as a duplicate.
[jira] [Resolved] (HDFS-3301) Add public waitOnSafeMode API with HdfsUtils
[ https://issues.apache.org/jira/browse/HDFS-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uma Maheswara Rao G resolved HDFS-3301.
---------------------------------------
Resolution: Duplicate
[jira] [Commented] (HDFS-3184) Add public HDFS client API
[ https://issues.apache.org/jira/browse/HDFS-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257452#comment-13257452 ]

Uma Maheswara Rao G commented on HDFS-3184:
-------------------------------------------

bq. 5) accessing the syncFS method from SequenceFile writer

SequenceFile#Writer already has a public syncFs API. I think users can make use of it. Also filed HBASE-5830.

Add public HDFS client API
--------------------------
Key: HDFS-3184
URL: https://issues.apache.org/jira/browse/HDFS-3184
Project: Hadoop HDFS
Issue Type: New Feature
Components: hdfs client
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE

There are some useful operations in HDFS but not in the FileSystem API; see the list in [Uma's comment|https://issues.apache.org/jira/browse/HDFS-1599?focusedCommentId=13243105&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13243105]. These operations should be made available to the public.
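[Editorial note: a hedged sketch of the public syncFs API mentioned above. The file path, key/value types, and default Configuration are illustrative, and running it requires a reachable Hadoop filesystem; this is not the HBase code, just one plausible usage.]

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SyncFsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, new Path("/tmp/example.seq"),
        LongWritable.class, Text.class);
    try {
      writer.append(new LongWritable(1), new Text("record"));
      // Public API referenced in the comment above: flush buffered data
      // down to the underlying filesystem so readers can see it.
      writer.syncFs();
    } finally {
      writer.close();
    }
  }
}
```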
[jira] [Commented] (HDFS-3282) Expose getFileLength API.
[ https://issues.apache.org/jira/browse/HDFS-3282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257456#comment-13257456 ]

Hudson commented on HDFS-3282:
------------------------------

Integrated in Hadoop-Hdfs-trunk #1019 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1019/])

HDFS-3282. Expose getFileLength API. Contributed by Uma Maheswara Rao G. (Revision 1327790)
HDFS-3282. Expose getFileLength API. Contributed by Uma Maheswara Rao G. (Revision 1327788)

Result = FAILURE

umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327790
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java

umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327788
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

Expose getFileLength API.
-------------------------
Key: HDFS-3282
URL: https://issues.apache.org/jira/browse/HDFS-3282
Project: Hadoop HDFS
Issue Type: Sub-task
Components: hdfs client
Affects Versions: 3.0.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Fix For: 3.0.0
Attachments: HDFS-3282.patch, HDFS-3282.patch

This JIRA is to expose the getFileLength API through a new public DistributedFileSystemInfo class. I would appreciate it if someone suggests a good name for this public class. Nicholas, did you plan any special design for this public client class?
[jira] [Commented] (HDFS-3206) Miscellaneous xml cleanups for OEV
[ https://issues.apache.org/jira/browse/HDFS-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257457#comment-13257457 ]

Hudson commented on HDFS-3206:
------------------------------

Integrated in Hadoop-Hdfs-trunk #1019 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1019/])

HDFS-3206. Miscellaneous xml cleanups for OEV. Contributed by Colin Patrick McCabe (Revision 1327768)

Result = FAILURE

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327768
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml

Miscellaneous xml cleanups for OEV
----------------------------------
Key: HDFS-3206
URL: https://issues.apache.org/jira/browse/HDFS-3206
Project: Hadoop HDFS
Issue Type: Improvement
Components: tools
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
Attachments: HDFS-3206.001.patch, HDFS-3206.002.patch, HDFS-3206.003.patch, HDFS-3206.004.patch

* SetOwner operations can change both the user and group which a file or directory belongs to, or just one of those. Currently, in the XML serialization/deserialization code, we don't handle the case where just the group is set, not the user. We should handle this case.
* Consistently serialize the generation stamp as GENSTAMP.
[jira] [Commented] (HDFS-3292) Remove the deprecated DistributedFileSystem.DiskStatus and the related methods
[ https://issues.apache.org/jira/browse/HDFS-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257460#comment-13257460 ]

Hudson commented on HDFS-3292:
------------------------------

Integrated in Hadoop-Hdfs-trunk #1019 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1019/])

HDFS-3292. Remove the deprecated DiskStatus, getDiskStatus(), getRawCapacity() and getRawUsed() from DistributedFileSystem. Contributed by Arpit Gupta (Revision 1327664)

Result = FAILURE

szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327664
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

Remove the deprecated DistributedFileSystem.DiskStatus and the related methods
------------------------------------------------------------------------------
Key: HDFS-3292
URL: https://issues.apache.org/jira/browse/HDFS-3292
Project: Hadoop HDFS
Issue Type: Sub-task
Components: hdfs client
Affects Versions: 3.0.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Arpit Gupta
Fix For: 3.0.0
Attachments: HDFS-3292.patch, HDFS-3292.patch

DistributedFileSystem.DiskStatus and the related methods were deprecated in 0.21 by HADOOP-4368. They can be removed in 0.23, i.e. 2.0.0.
[jira] [Commented] (HDFS-891) DataNode no longer needs to check for dfs.network.script
[ https://issues.apache.org/jira/browse/HDFS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257466#comment-13257466 ]

Hudson commented on HDFS-891:
-----------------------------

Integrated in Hadoop-Hdfs-trunk #1019 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1019/])

HDFS-891. DataNode no longer needs to check for dfs.network.script. Contributed by Harsh J (Revision 1327762)

Result = FAILURE

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327762
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java

DataNode no longer needs to check for dfs.network.script
--------------------------------------------------------
Key: HDFS-891
URL: https://issues.apache.org/jira/browse/HDFS-891
Project: Hadoop HDFS
Issue Type: Bug
Components: data-node
Affects Versions: 0.23.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
Fix For: 2.0.0
Attachments: HDFS-891.patch, hdfs-891.txt

Looking at the code for {{DataNode.instantiateDataNode()}}, I see that it calls {{System.exit(-1)}} if it is not happy with the configuration:

{code}
if (conf.get("dfs.network.script") != null) {
  LOG.error("This configuration for rack identification is not supported " +
      "anymore. RackID resolution is handled by the NameNode.");
  System.exit(-1);
}
{code}

This is excessive. It should throw an exception and let whoever called the method decide how to handle it. The {{DataNode.main()}} method will log the exception and exit with a -1 value, but other callers (such as anything using {{MiniDFSCluster}}) will now see a meaningful message rather than some "Junit tests exited without completing" warning. Easy to write a test for the correct behaviour: start a {{MiniDFSCluster}} with this configuration set, see what happens.
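[Editorial note: the throw-instead-of-exit refactor suggested in HDFS-891 can be sketched in isolation. The helper below is hypothetical (plain Map instead of Hadoop's Configuration, invented method name), not the actual patch; it only illustrates why an exception gives callers like MiniDFSCluster a usable error.]

```java
import java.io.IOException;
import java.util.Map;

public class RackScriptCheck {

  // Hypothetical stand-in for the DataNode check above: throw an
  // IOException instead of calling System.exit(-1), so the caller decides
  // how to handle the deprecated configuration.
  static void checkNoDeprecatedRackScript(Map<String, String> conf)
      throws IOException {
    if (conf.get("dfs.network.script") != null) {
      throw new IOException("This configuration for rack identification is "
          + "not supported anymore. RackID resolution is handled by the "
          + "NameNode.");
    }
  }

  public static void main(String[] args) throws IOException {
    checkNoDeprecatedRackScript(Map.of("dfs.replication", "3")); // passes
    try {
      checkNoDeprecatedRackScript(
          Map.of("dfs.network.script", "/etc/hadoop/rack.sh"));
    } catch (IOException e) {
      // A test harness sees this message instead of an abrupt JVM exit.
      System.out.println("caught: " + e.getMessage());
    }
  }
}
```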
[jira] [Commented] (HDFS-3169) TestFsck should test multiple -move operations in a row
[ https://issues.apache.org/jira/browse/HDFS-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257464#comment-13257464 ]

Hudson commented on HDFS-3169:
------------------------------

Integrated in Hadoop-Hdfs-trunk #1019 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1019/])

HDFS-3169. TestFsck should test multiple -move operations in a row. Contributed by Colin Patrick McCabe (Revision 1327776)

Result = FAILURE

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327776
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java

TestFsck should test multiple -move operations in a row
-------------------------------------------------------
Key: HDFS-3169
URL: https://issues.apache.org/jira/browse/HDFS-3169
Project: Hadoop HDFS
Issue Type: Improvement
Components: test
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
Fix For: 2.0.0
Attachments: HDFS-3169.001.patch

TestFsck should test multiple -move operations in a row. Overall, it would be nice to have more coverage on this.
[jira] [Commented] (HDFS-3263) HttpFS should read HDFS config from Hadoop site.xml files
[ https://issues.apache.org/jira/browse/HDFS-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257461#comment-13257461 ]

Hudson commented on HDFS-3263:
------------------------------

Integrated in Hadoop-Hdfs-trunk #1019 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1019/])

HDFS-3263. HttpFS should read HDFS config from Hadoop site.xml files (tucu) (Revision 1327627)

Result = FAILURE

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327627
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/FileSystemAccess.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/FileSystemAccessException.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/httpfs-default.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/hadoop/TestFileSystemAccessService.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/HadoopUsersConfTestHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

HttpFS should read HDFS config from Hadoop site.xml files
---------------------------------------------------------
Key: HDFS-3263
URL: https://issues.apache.org/jira/browse/HDFS-3263
Project: Hadoop HDFS
Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Fix For: 2.0.0
Attachments: HDFS-3263.patch, HDFS-3263.patch, HDFS-3263.patch, HDFS-3263.patch, HDFS-3263.patch, HDFS-3263.patch

Currently HttpFS reads the HDFS client configuration from httpfs-site.xml, via any property of the form 'httpfs.hadoop.conf:HADOOP_PROPERTY'. This is a bit inconvenient. Instead we should support a single property, 'httpfs.hadoop.configuration.dir', that can be pointed at a Hadoop conf/ dir; the core-site.xml and hdfs-site.xml files would then be read from there.
[jira] [Commented] (HDFS-3282) Expose getFileLength API.
[ https://issues.apache.org/jira/browse/HDFS-3282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257487#comment-13257487 ]

Hudson commented on HDFS-3282:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1054 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1054/])

HDFS-3282. Expose getFileLength API. Contributed by Uma Maheswara Rao G. (Revision 1327790)
HDFS-3282. Expose getFileLength API. Contributed by Uma Maheswara Rao G. (Revision 1327788)

Result = SUCCESS

umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327790
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java

umamahesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327788
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
[jira] [Commented] (HDFS-3206) Miscellaneous xml cleanups for OEV
[ https://issues.apache.org/jira/browse/HDFS-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257489#comment-13257489 ]

Hudson commented on HDFS-3206:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1054 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1054/])

HDFS-3206. Miscellaneous xml cleanups for OEV. Contributed by Colin Patrick McCabe (Revision 1327768)

Result = SUCCESS

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327768
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
[jira] [Commented] (HDFS-3292) Remove the deprecated DistributedFileSystem.DiskStatus and the related methods
[ https://issues.apache.org/jira/browse/HDFS-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257491#comment-13257491 ]

Hudson commented on HDFS-3292:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1054 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1054/])

HDFS-3292. Remove the deprecated DiskStatus, getDiskStatus(), getRawCapacity() and getRawUsed() from DistributedFileSystem. Contributed by Arpit Gupta (Revision 1327664)

Result = SUCCESS

szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327664
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
[jira] [Commented] (HDFS-3169) TestFsck should test multiple -move operations in a row
[ https://issues.apache.org/jira/browse/HDFS-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257493#comment-13257493 ]

Hudson commented on HDFS-3169:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1054 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1054/])

HDFS-3169. TestFsck should test multiple -move operations in a row. Contributed by Colin Patrick McCabe (Revision 1327776)

Result = SUCCESS

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327776
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
[jira] [Commented] (HDFS-3263) HttpFS should read HDFS config from Hadoop site.xml files
[ https://issues.apache.org/jira/browse/HDFS-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257494#comment-13257494 ]

Hudson commented on HDFS-3263:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1054 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1054/])

HDFS-3263. HttpFS should read HDFS config from Hadoop site.xml files (tucu) (Revision 1327627)

Result = SUCCESS

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327627
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/FileSystemAccess.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/FileSystemAccessException.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/httpfs-default.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/apt/ServerSetup.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/hadoop/TestFileSystemAccessService.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/HadoopUsersConfTestHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-891) DataNode no longer needs to check for dfs.network.script
[ https://issues.apache.org/jira/browse/HDFS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257496#comment-13257496 ] Hudson commented on HDFS-891: - Integrated in Hadoop-Mapreduce-trunk #1054 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1054/]) HDFS-891. DataNode no longer needs to check for dfs.network.script. Contributed by Harsh J (Revision 1327762) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1327762 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java DataNode no longer needs to check for dfs.network.script Key: HDFS-891 URL: https://issues.apache.org/jira/browse/HDFS-891 Project: Hadoop HDFS Issue Type: Bug Components: data-node Affects Versions: 0.23.0 Reporter: Steve Loughran Assignee: Harsh J Priority: Minor Fix For: 2.0.0 Attachments: HDFS-891.patch, hdfs-891.txt Looking at the code for {{DataNode.instantiateDataNode())} , I see that it calls {{system.exit(-1)}} if it is not happy with the configuration {code} if (conf.get(dfs.network.script) != null) { LOG.error(This configuration for rack identification is not supported + anymore. RackID resolution is handled by the NameNode.); System.exit(-1); } {code} This is excessive. It should throw an exception and let whoever called the method decide how to handle it. The {{DataNode.main()}} method will log the exception and exit with a -1 value, but other callers (such as anything using {{MiniDFSCluster}} will now see a meaningful message rather than some Junit tests exited without completing warning. Easy to write a test for the correct behaviour: start a {{MiniDFSCluster}} with this configuration set, see what happens. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
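The change the report asks for can be sketched as follows. This is an illustrative stand-in, not the actual DataNode code: a plain Map replaces Hadoop's Configuration, and the method name is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class RackScriptCheck {
    // Throws instead of calling System.exit(-1), so callers such as
    // MiniDFSCluster-based tests see a meaningful exception.
    static void checkDeprecatedRackScript(Map<String, String> conf) {
        if (conf.get("dfs.network.script") != null) {
            throw new IllegalArgumentException(
                "This configuration for rack identification is not supported"
                + " anymore. RackID resolution is handled by the NameNode.");
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("dfs.network.script", "/path/to/rack-script.sh");
        try {
            checkDeprecatedRackScript(conf);
        } catch (IllegalArgumentException e) {
            // DataNode.main() would log this and exit with -1;
            // other callers can decide for themselves.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```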
[jira] [Commented] (HDFS-2994) If lease is recovered successfully inline with create, create can fail
[ https://issues.apache.org/jira/browse/HDFS-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257533#comment-13257533 ] Brahma Reddy Battula commented on HDFS-2994: In both scenarios: after append, fail the write (by renaming the file or restarting the DN), then close the client as follows, and then append again.
{code}
DistributedFileSystem dfs = initHDFS();
try {
  writeFile(dfs, hdfsFile, out, 1, true);
  out = appendFile(dfs, hdfsFile);
} catch (Exception e) {
  e.printStackTrace();
} finally {
  if (dfs != null) {
    dfs.close();
  }
}
{code}
If lease is recovered successfully inline with create, create can fail -- Key: HDFS-2994 URL: https://issues.apache.org/jira/browse/HDFS-2994 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 0.24.0 Reporter: Todd Lipcon I saw the following logs on my test cluster: {code} 2012-02-22 14:35:22,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: startFile: recover lease [Lease. Holder: DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1, pendingcreates: 1], src=/benchmarks/TestDFSIO/io_data/test_io_6 from client DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1 2012-02-22 14:35:22,887 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering lease=[Lease. Holder: DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1, pendingcreates: 1], src=/benchmarks/TestDFSIO/io_data/test_io_6 2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All existing blocks are COMPLETE, lease removed, file closed. 
2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.replaceNode: failed to remove /benchmarks/TestDFSIO/io_data/test_io_6 2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: FSDirectory.replaceNode: failed to remove /benchmarks/TestDFSIO/io_data/test_io_6 {code} It seems like, if {{recoverLeaseInternal}} succeeds in {{startFileInternal}}, then the INode will be replaced with a new one, meaning the later {{replaceNode}} call can fail. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-3303) RemoteEditLogManifest doesn't need to implements Writable
RemoteEditLogManifest doesn't need to implements Writable - Key: HDFS-3303 URL: https://issues.apache.org/jira/browse/HDFS-3303 Project: Hadoop HDFS Issue Type: Bug Reporter: Brandon Li Assignee: Brandon Li Priority: Minor Since we are using protocol buffers, RemoteEditLogManifest doesn't need to implement Writable. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-3305) GetImageServlet should considered SBN a valid requestor in a secure HA setup
GetImageServlet should considered SBN a valid requestor in a secure HA setup Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-3304) fix fuse_dfs build
fix fuse_dfs build -- Key: HDFS-3304 URL: https://issues.apache.org/jira/browse/HDFS-3304 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 0.23.0 Reporter: Colin Patrick McCabe Priority: Minor The fuse_dfs build is broken in several ways. If you run: {code} mvn compile -DskipTests -Pnative mvn compile -DskipTests -Pfuse {code} You get the following error message: {code} [exec] /usr/lib64/gcc/x86_64-suse-linux/4.6/../../../../x86_64-suse-linux/bin/ld: cannot find -lhdfs [exec] collect2: ld returned 1 exit status [exec] make[1]: *** [fuse_dfs] Error 1 [exec] make: *** [all-recursive] Error 1 {code} libhdfs.so was created, but the -Pfuse build doesn't know where it is and can't link against it. Also, should ''mvn install -Pfuse'' be copying fuse_dfs somewhere? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-3305) GetImageServlet should considered SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HDFS-3305: - Attachment: HDFS-3305.patch Here's a patch which addresses the issue. In GetImageServlet#isValidRequestor, we now check whether HA is configured for this nameservice and, if so, add the principal of the other NN as a valid requestor. In addition to the test provided, I also tested this manually with a pair of secure, HA-enabled NNs. GetImageServlet should considered SBN a valid requestor in a secure HA setup Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
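The check described in the patch summary can be distilled as below. The names here are hypothetical stand-ins, not the real GetImageServlet#isValidRequestor code: the NN and 2NN principals are always valid, and when HA is configured the other NN's principal is accepted too.

```java
import java.util.HashSet;
import java.util.Set;

public class ValidRequestorSketch {
    // Build the set of Kerberos principals allowed to fetch the image.
    static Set<String> validRequestors(String nnPrincipal, String secondaryNnPrincipal,
                                       boolean haConfigured, String otherNnPrincipal) {
        Set<String> valid = new HashSet<>();
        valid.add(nnPrincipal);
        valid.add(secondaryNnPrincipal);
        // With HA and distinct principal names, the other NN must be allowed in.
        if (haConfigured && otherNnPrincipal != null) {
            valid.add(otherNnPrincipal);
        }
        return valid;
    }

    static boolean isValidRequestor(Set<String> valid, String remotePrincipal) {
        return valid.contains(remotePrincipal);
    }
}
```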
[jira] [Updated] (HDFS-3305) GetImageServlet should considered SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HDFS-3305: - Status: Patch Available (was: Open) GetImageServlet should considered SBN a valid requestor in a secure HA setup Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-3305) GetImageServlet should considered SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257708#comment-13257708 ] Todd Lipcon commented on HDFS-3305: --- +1 pending jenkins GetImageServlet should considered SBN a valid requestor in a secure HA setup Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-3305) GetImageServlet should considered SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257785#comment-13257785 ] Hadoop QA commented on HDFS-3305: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12523385/HDFS-3305.patch against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 1 new or modified test files. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestDatanodeBlockScanner +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2304//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2304//console This message is automatically generated. GetImageServlet should considered SBN a valid requestor in a secure HA setup Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-3305) GetImageServlet should considered SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257789#comment-13257789 ] Aaron T. Myers commented on HDFS-3305: -- I'm confident that the test failure is unrelated. That test doesn't touch this code path, and I just ran TestDatanodeBlockScanner locally on my box and confirmed that it passed just fine. Thanks a lot for the review, Todd. I'll commit this momentarily. GetImageServlet should considered SBN a valid requestor in a secure HA setup Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-3305) GetImageServlet should consider SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HDFS-3305: - Summary: GetImageServlet should consider SBN a valid requestor in a secure HA setup (was: GetImageServlet should considered SBN a valid requestor in a secure HA setup) Fixing typo in issue summary. GetImageServlet should consider SBN a valid requestor in a secure HA setup -- Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-3300) stream the edit segments to NameNode when NameNode starts up
[ https://issues.apache.org/jira/browse/HDFS-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257801#comment-13257801 ] Brandon Li commented on HDFS-3300: -- {quote}What is the plan of action when there are errors in streaming? NN will shutdown with error?{quote} If there are errors in streamed data, the NN can reload the namespace from a different active journal service. If the NN gets errors from all active journal services, it can shut down. {quote}Will the streamed edits also be dumped to disk in parallel? So that next time around they are available locally instead of having to stream them again.{quote} Caching a copy of the edit segment can be helpful when the NN doesn't have a new checkpoint before its next reboot. Otherwise the cached edit segments will not be used for the next NN startup. stream the edit segments to NameNode when NameNode starts up Key: HDFS-3300 URL: https://issues.apache.org/jira/browse/HDFS-3300 Project: Hadoop HDFS Issue Type: Sub-task Components: ha, name-node Reporter: Brandon Li Edit logs are saved on the Journal daemon. When the NameNode starts, it loads the latest image file and then streams the edit logs from an active Journal daemon. Currently we are using http to transfer edit files between two journal daemons/nodes or between a journal daemon and a NameNode. To get an edit file from the Journal daemon, the NameNode has to download it first and then read it from disk. To avoid the slow start-up time, the NameNode should be enhanced to read the http data stream and update its in-memory namespace instead of saving the streamed data to disk first. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
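The difference the issue describes, applying ops as they arrive rather than spooling the whole segment to disk first, can be sketched like this. All names are hypothetical; the real NameNode would deserialize binary edit ops and apply them to its in-memory namespace, while here each text line stands in for one op.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class StreamingEditApplier {
    // Reads ops straight off the (HTTP) stream; nothing is written to disk.
    static List<String> applyWhileStreaming(InputStream editStream) throws IOException {
        List<String> applied = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(editStream, StandardCharsets.UTF_8))) {
            String op;
            while ((op = r.readLine()) != null) {
                applied.add(op);  // in the real NN: mutate the namespace here
            }
        }
        return applied;
    }

    public static void main(String[] args) throws IOException {
        InputStream fakeHttpStream = new ByteArrayInputStream(
            "OP_ADD /foo\nOP_CLOSE /foo\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(applyWhileStreaming(fakeHttpStream));
    }
}
```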
[jira] [Updated] (HDFS-3305) GetImageServlet should consider SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HDFS-3305: - Resolution: Fixed Fix Version/s: 2.0.0 Status: Resolved (was: Patch Available) I've just committed this to trunk and branch-2. GetImageServlet should consider SBN a valid requestor in a secure HA setup -- Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Fix For: 2.0.0 Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-3305) GetImageServlet should consider SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257809#comment-13257809 ] Hudson commented on HDFS-3305: -- Integrated in Hadoop-Common-trunk-Commit #2112 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2112/]) HDFS-3305. GetImageServlet should consider SBN a valid requestor in a secure HA setup. Contributed by Aaron T. Myers. (Revision 1328115) Result = SUCCESS atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1328115 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java GetImageServlet should consider SBN a valid requestor in a secure HA setup -- Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Fix For: 2.0.0 Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-3305) GetImageServlet should consider SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257811#comment-13257811 ] Hudson commented on HDFS-3305: -- Integrated in Hadoop-Hdfs-trunk-Commit #2186 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2186/]) HDFS-3305. GetImageServlet should consider SBN a valid requestor in a secure HA setup. Contributed by Aaron T. Myers. (Revision 1328115) Result = SUCCESS atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1328115 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java GetImageServlet should consider SBN a valid requestor in a secure HA setup -- Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Fix For: 2.0.0 Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-3305) GetImageServlet should consider SBN a valid requestor in a secure HA setup
[ https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257814#comment-13257814 ] Hudson commented on HDFS-3305: -- Integrated in Hadoop-Mapreduce-trunk-Commit #2128 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2128/]) HDFS-3305. GetImageServlet should consider SBN a valid requestor in a secure HA setup. Contributed by Aaron T. Myers. (Revision 1328115) Result = SUCCESS atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1328115 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java GetImageServlet should consider SBN a valid requestor in a secure HA setup -- Key: HDFS-3305 URL: https://issues.apache.org/jira/browse/HDFS-3305 Project: Hadoop HDFS Issue Type: Bug Components: ha, name-node Affects Versions: 2.0.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Fix For: 2.0.0 Attachments: HDFS-3305.patch Right now only the NN and 2NN are considered valid requestors. This won't work if the ANN and SBN use distinct principal names. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-2492) BlockManager cross-rack replication checks only work for ScriptBasedMapping
[ https://issues.apache.org/jira/browse/HDFS-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HDFS-2492: - Status: Open (was: Patch Available) BlockManager cross-rack replication checks only work for ScriptBasedMapping --- Key: HDFS-2492 URL: https://issues.apache.org/jira/browse/HDFS-2492 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 0.23.0, 0.24.0 Reporter: Steve Loughran Assignee: Steve Loughran Priority: Minor Fix For: 0.24.0, 0.23.3 Attachments: HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch The BlockManager cross-rack replication checks only works if script files are used for replication, not if alternate plugins provide the topology information. This is because the BlockManager sets its rack checking flag if there is a filename key {code} shouldCheckForEnoughRacks = conf.get(DFSConfigKeys.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY) != null; {code} yet this filename key is only used if the topology mapper defined by {code} DFSConfigKeys.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY {code} is an instance of {{ScriptBasedMapping}} If any other mapper is used, the system may be multi rack, but the Block Manager will not be aware of this fact unless the filename key is set to something non-null -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-2492) BlockManager cross-rack replication checks only work for ScriptBasedMapping
[ https://issues.apache.org/jira/browse/HDFS-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HDFS-2492: - Attachment: HDFS-2492.patch in sync w/ trunk BlockManager cross-rack replication checks only work for ScriptBasedMapping --- Key: HDFS-2492 URL: https://issues.apache.org/jira/browse/HDFS-2492 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 1.0.1, 1.0.2, 2.0.0, 3.0.0 Reporter: Steve Loughran Assignee: Steve Loughran Priority: Minor Fix For: 2.0.0, 3.0.0 Attachments: HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492.patch The BlockManager cross-rack replication checks only works if script files are used for replication, not if alternate plugins provide the topology information. This is because the BlockManager sets its rack checking flag if there is a filename key {code} shouldCheckForEnoughRacks = conf.get(DFSConfigKeys.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY) != null; {code} yet this filename key is only used if the topology mapper defined by {code} DFSConfigKeys.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY {code} is an instance of {{ScriptBasedMapping}} If any other mapper is used, the system may be multi rack, but the Block Manager will not be aware of this fact unless the filename key is set to something non-null -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-2492) BlockManager cross-rack replication checks only work for ScriptBasedMapping
[ https://issues.apache.org/jira/browse/HDFS-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HDFS-2492: - Fix Version/s: (was: 0.23.3) (was: 0.24.0) 3.0.0 2.0.0 Affects Version/s: (was: 0.24.0) (was: 0.23.0) 3.0.0 2.0.0 1.0.1 1.0.2 Status: Patch Available (was: Open) BlockManager cross-rack replication checks only work for ScriptBasedMapping --- Key: HDFS-2492 URL: https://issues.apache.org/jira/browse/HDFS-2492 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 1.0.2, 1.0.1, 2.0.0, 3.0.0 Reporter: Steve Loughran Assignee: Steve Loughran Priority: Minor Fix For: 2.0.0, 3.0.0 Attachments: HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492.patch The BlockManager cross-rack replication checks only works if script files are used for replication, not if alternate plugins provide the topology information. This is because the BlockManager sets its rack checking flag if there is a filename key {code} shouldCheckForEnoughRacks = conf.get(DFSConfigKeys.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY) != null; {code} yet this filename key is only used if the topology mapper defined by {code} DFSConfigKeys.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY {code} is an instance of {{ScriptBasedMapping}} If any other mapper is used, the system may be multi rack, but the Block Manager will not be aware of this fact unless the filename key is set to something non-null -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2492) BlockManager cross-rack replication checks only work for ScriptBasedMapping
[ https://issues.apache.org/jira/browse/HDFS-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257913#comment-13257913 ] Hadoop QA commented on HDFS-2492: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12523415/HDFS-2492.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed unit tests in . +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2305//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2305//console This message is automatically generated. 
BlockManager cross-rack replication checks only work for ScriptBasedMapping --- Key: HDFS-2492 URL: https://issues.apache.org/jira/browse/HDFS-2492 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 1.0.1, 1.0.2, 2.0.0, 3.0.0 Reporter: Steve Loughran Assignee: Steve Loughran Priority: Minor Fix For: 2.0.0, 3.0.0 Attachments: HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492-blockmanager.patch, HDFS-2492.patch The BlockManager cross-rack replication checks only works if script files are used for replication, not if alternate plugins provide the topology information. This is because the BlockManager sets its rack checking flag if there is a filename key {code} shouldCheckForEnoughRacks = conf.get(DFSConfigKeys.NET_TOPOLOGY_SCRIPT_FILE_NAME_KEY) != null; {code} yet this filename key is only used if the topology mapper defined by {code} DFSConfigKeys.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY {code} is an instance of {{ScriptBasedMapping}} If any other mapper is used, the system may be multi rack, but the Block Manager will not be aware of this fact unless the filename key is set to something non-null -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
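One way to express the fix the report implies, asking the topology resolver itself whether the cluster is multi-rack instead of keying off the script filename, is sketched below. The interface here is a simplified, hypothetical stand-in for Hadoop's DNSToSwitchMapping plugin, not the actual BlockManager change.

```java
public class RackCheckSketch {
    // Stand-in for the topology resolver plugin interface.
    interface SwitchMapping {
        boolean isSingleSwitch();
    }

    // Multi-rack awareness comes from the mapping itself, so any plugin
    // is handled, not only ScriptBasedMapping with a script-file key set.
    static boolean shouldCheckForEnoughRacks(SwitchMapping mapping) {
        return !mapping.isSingleSwitch();
    }

    public static void main(String[] args) {
        SwitchMapping multiRack = () -> false;   // resolver spans several racks
        SwitchMapping singleRack = () -> true;   // everything on one switch
        System.out.println(shouldCheckForEnoughRacks(multiRack));
        System.out.println(shouldCheckForEnoughRacks(singleRack));
    }
}
```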
[jira] [Updated] (HDFS-3270) run valgrind on fuse-dfs, fix any memory leaks
[ https://issues.apache.org/jira/browse/HDFS-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-3270: --- Attachment: HDFS-3270.001.patch * fix malloc check in hdfs.c (not actually a memory leak, but still bogus) run valgrind on fuse-dfs, fix any memory leaks -- Key: HDFS-3270 URL: https://issues.apache.org/jira/browse/HDFS-3270 Project: Hadoop HDFS Issue Type: Improvement Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3270.001.patch run valgrind on fuse-dfs, fix any memory leaks -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-3270) run valgrind on fuse-dfs, fix any memory leaks
[ https://issues.apache.org/jira/browse/HDFS-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-3270: --- Status: Patch Available (was: Open) run valgrind on fuse-dfs, fix any memory leaks -- Key: HDFS-3270 URL: https://issues.apache.org/jira/browse/HDFS-3270 Project: Hadoop HDFS Issue Type: Improvement Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3270.001.patch run valgrind on fuse-dfs, fix any memory leaks -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-3270) run valgrind on fuse-dfs, fix any memory leaks
[ https://issues.apache.org/jira/browse/HDFS-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-3270: --- Attachment: HDFS-3270.002.patch * fix another case of the same mistake run valgrind on fuse-dfs, fix any memory leaks -- Key: HDFS-3270 URL: https://issues.apache.org/jira/browse/HDFS-3270 Project: Hadoop HDFS Issue Type: Improvement Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3270.001.patch, HDFS-3270.002.patch run valgrind on fuse-dfs, fix any memory leaks
[jira] [Created] (HDFS-3306) fuse_dfs: don't lock release operations
fuse_dfs: don't lock release operations --- Key: HDFS-3306 URL: https://issues.apache.org/jira/browse/HDFS-3306 Project: Hadoop HDFS Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor There's no need to lock release operations in FUSE, because release can only be called once on a fuse_file_info structure.
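The rationale can be illustrated outside of C/FUSE. The following is a hypothetical Java analogy, not fuse_dfs code: when a teardown method is contractually invoked exactly once per handle (as FUSE guarantees for release on a fuse_file_info), per-handle cleanup needs no mutex, because no other thread can race on that handle's teardown:

```java
// Illustrative model only: a per-handle object whose release() is
// guaranteed by contract to run exactly once needs no lock to free state.
public class Handle {
    private byte[] buffer = new byte[4096]; // per-handle state
    private boolean released = false;

    /** Called exactly once per handle, like FUSE release on a
     *  fuse_file_info. No synchronization needed: the single-caller
     *  contract rules out concurrent teardown of the same handle. */
    public void release() {
        buffer = null;   // safe without a lock, by contract
        released = true;
    }

    public boolean isReleased() {
        return released;
    }
}
```

Locks would still be required for state shared across handles; the issue's claim is only about state owned by one fuse_file_info.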
[jira] [Updated] (HDFS-3306) fuse_dfs: don't lock release operations
[ https://issues.apache.org/jira/browse/HDFS-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-3306: --- Attachment: HDFS-3306.001.patch fuse_dfs: don't lock release operations --- Key: HDFS-3306 URL: https://issues.apache.org/jira/browse/HDFS-3306 Project: Hadoop HDFS Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3306.001.patch There's no need to lock release operations in FUSE, because release can only be called once on a fuse_file_info structure.
[jira] [Updated] (HDFS-3306) fuse_dfs: don't lock release operations
[ https://issues.apache.org/jira/browse/HDFS-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-3306: --- Status: Patch Available (was: Open) fuse_dfs: don't lock release operations --- Key: HDFS-3306 URL: https://issues.apache.org/jira/browse/HDFS-3306 Project: Hadoop HDFS Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3306.001.patch There's no need to lock release operations in FUSE, because release can only be called once on a fuse_file_info structure.
[jira] [Commented] (HDFS-3270) run valgrind on fuse-dfs, fix any memory leaks
[ https://issues.apache.org/jira/browse/HDFS-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257959#comment-13257959 ] Hadoop QA commented on HDFS-3270: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12523431/HDFS-3270.001.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed unit tests in . +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2306//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2306//console This message is automatically generated. run valgrind on fuse-dfs, fix any memory leaks -- Key: HDFS-3270 URL: https://issues.apache.org/jira/browse/HDFS-3270 Project: Hadoop HDFS Issue Type: Improvement Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3270.001.patch, HDFS-3270.002.patch run valgrind on fuse-dfs, fix any memory leaks
[jira] [Created] (HDFS-3307) when a file
when a file --- Key: HDFS-3307 URL: https://issues.apache.org/jira/browse/HDFS-3307 Project: Hadoop HDFS Issue Type: Bug Reporter: yixiaohua
[jira] [Commented] (HDFS-2885) Remove federation from the nameservice config options
[ https://issues.apache.org/jira/browse/HDFS-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257966#comment-13257966 ] Sanjay Radia commented on HDFS-2885: Eli, can you please post the before/after config files for an HA'ed NN, and the before/after config files for federation, under this proposal? Remove federation from the nameservice config options --- Key: HDFS-2885 URL: https://issues.apache.org/jira/browse/HDFS-2885 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 0.23.1 Reporter: Eli Collins HDFS-1623, and potentially other HDFS features, will use the nameservice abstraction even if federation is not enabled (e.g. in HA you need to configure {{dfs.federation.nameservices}} just to declare your nameservice, even if you're not using federation). This is confusing to users. We should consider deprecating the federation-prefixed {{dfs.federation.nameservices}} and {{dfs.federation.nameservice.id}} config options, as {{dfs.nameservices}} and {{dfs.nameservice.id}} are more intuitive.
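One hedged sketch of how such a rename could stay backward-compatible. This is a simplified stand-in, not Hadoop's actual Configuration deprecation machinery: reads of the new key fall back to the old federation-prefixed key, so existing config files keep working during the transition:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical deprecation shim for the rename proposed in HDFS-2885.
public class NameserviceConf {
    // new key -> deprecated federation-prefixed key it replaces
    static final Map<String, String> DEPRECATED = new HashMap<>();
    static {
        DEPRECATED.put("dfs.nameservices", "dfs.federation.nameservices");
        DEPRECATED.put("dfs.nameservice.id", "dfs.federation.nameservice.id");
    }

    /** Prefer the new key; fall back to the deprecated one if unset. */
    static String get(Map<String, String> conf, String key) {
        String value = conf.get(key);
        if (value == null && DEPRECATED.containsKey(key)) {
            value = conf.get(DEPRECATED.get(key));
        }
        return value;
    }
}
```

With this shape, an HA cluster that only sets the old {{dfs.federation.nameservices}} still resolves a value through the new name, and a warning could be logged at the fallback point.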
[jira] [Commented] (HDFS-3306) fuse_dfs: don't lock release operations
[ https://issues.apache.org/jira/browse/HDFS-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257975#comment-13257975 ] Hadoop QA commented on HDFS-3306: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12523439/HDFS-3306.001.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed unit tests in . +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2307//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2307//console This message is automatically generated. fuse_dfs: don't lock release operations --- Key: HDFS-3306 URL: https://issues.apache.org/jira/browse/HDFS-3306 Project: Hadoop HDFS Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3306.001.patch There's no need to lock release operations in FUSE, because release can only be called once on a fuse_file_info structure.
[jira] [Resolved] (HDFS-3307) when a file
[ https://issues.apache.org/jira/browse/HDFS-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon resolved HDFS-3307. --- Resolution: Invalid when a file --- Key: HDFS-3307 URL: https://issues.apache.org/jira/browse/HDFS-3307 Project: Hadoop HDFS Issue Type: Bug Reporter: yixiaohua
[jira] [Commented] (HDFS-3270) run valgrind on fuse-dfs, fix any memory leaks
[ https://issues.apache.org/jira/browse/HDFS-3270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257977#comment-13257977 ] Hadoop QA commented on HDFS-3270: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12523434/HDFS-3270.002.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed unit tests in . +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2308//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2308//console This message is automatically generated. run valgrind on fuse-dfs, fix any memory leaks -- Key: HDFS-3270 URL: https://issues.apache.org/jira/browse/HDFS-3270 Project: Hadoop HDFS Issue Type: Improvement Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3270.001.patch, HDFS-3270.002.patch run valgrind on fuse-dfs, fix any memory leaks
[jira] [Commented] (HDFS-3306) fuse_dfs: don't lock release operations
[ https://issues.apache.org/jira/browse/HDFS-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13258009#comment-13258009 ] Eli Collins commented on HDFS-3306: --- Agree with your logic, and the change looks good. Testing? fuse_dfs: don't lock release operations --- Key: HDFS-3306 URL: https://issues.apache.org/jira/browse/HDFS-3306 Project: Hadoop HDFS Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HDFS-3306.001.patch There's no need to lock release operations in FUSE, because release can only be called once on a fuse_file_info structure.
[jira] [Updated] (HDFS-3304) fix fuse_dfs build
[ https://issues.apache.org/jira/browse/HDFS-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-3304: -- Target Version/s: 2.0.0 (was: 1.0.0) Affects Version/s: (was: 0.23.0) 2.0.0 Per HDFS-2696, the following command ({{mvn clean install -DskipTests -Pnative -DskipTest -Pfuse}}) works for me. Does it work for you? If so, it looks like the compile target is broken; if not, I suspect it's a host-specific issue. We should update BUILDING.txt to this effect. Also, it might be worth going straight to HDFS-3251 rather than fixing this. fix fuse_dfs build -- Key: HDFS-3304 URL: https://issues.apache.org/jira/browse/HDFS-3304 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.0.0 Reporter: Colin Patrick McCabe Priority: Minor The fuse_dfs build is broken in several ways. If you run: {code} mvn compile -DskipTests -Pnative mvn compile -DskipTests -Pfuse {code} You get the following error message: {code} [exec] /usr/lib64/gcc/x86_64-suse-linux/4.6/../../../../x86_64-suse-linux/bin/ld: cannot find -lhdfs [exec] collect2: ld returned 1 exit status [exec] make[1]: *** [fuse_dfs] Error 1 [exec] make: *** [all-recursive] Error 1 {code} libhdfs.so was created, but the -Pfuse build doesn't know where it is and can't link against it. Also, should ''mvn install -Pfuse'' be copying fuse_dfs somewhere?