[jira] [Commented] (HDFS-2762) TestCheckpoint is timing out
[ https://issues.apache.org/jira/browse/HDFS-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181902#comment-13181902 ] Uma Maheswara Rao G commented on HDFS-2762: --- Thanks Aaron for filing the JIRA. Todd, I just looked at testMultipleSecondaryNameNodes. The problem is that the two federated NameNodes are formatted with different cluster IDs. They should share the same cluster ID, since they belong to the same cluster. I will fix this and upload a patch tonight. Thanks, Uma > TestCheckpoint is timing out > > > Key: HDFS-2762 > URL: https://issues.apache.org/jira/browse/HDFS-2762 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, name-node > Affects Versions: HA branch (HDFS-1623) > Reporter: Aaron T. Myers > Assignee: Uma Maheswara Rao G > > TestCheckpoint is timing out on the HA branch, and has been for a few days.
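The invariant described above can be sketched as follows. This is a hypothetical illustration only: formatNameNode() is a stand-in for the real NameNode format path, not an actual HDFS API.
{code}
// Hypothetical sketch: mint the cluster ID once and pass it to every
// NameNode format call, so both federated NameNodes land in one cluster.
public class SharedClusterIdSketch {
  // Illustrative stand-in for the real NameNode format path.
  static void formatNameNode(String nnId, String clusterId) {
    System.out.println("Formatting " + nnId + " with clusterID=" + clusterId);
  }

  public static void main(String[] args) {
    String clusterId = "CID-" + java.util.UUID.randomUUID(); // generated once
    formatNameNode("nn1", clusterId); // both NameNodes share the same ID,
    formatNameNode("nn2", clusterId); // instead of each minting its own
  }
}
{code}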
[jira] [Resolved] (HDFS-851) NPE in FSDir.getBlockInfo
[ https://issues.apache.org/jira/browse/HDFS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J resolved HDFS-851. -- Resolution: Duplicate This was fixed by Bharat's recent work on moving all calls to the FileUtil.listFiles wrapper, which throws an IOException appropriately instead. > NPE in FSDir.getBlockInfo > - > > Key: HDFS-851 > URL: https://issues.apache.org/jira/browse/HDFS-851 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node > Affects Versions: 0.20.1 > Reporter: Steve Loughran > Assignee: Steve Loughran > Priority: Minor > Attachments: hadoop-4128.patch > > > This could well be something I've introduced on my variant of the code, > although it's a recent arrival in my own tests: an NPE while bringing up a > datanode.
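The wrapper pattern in question looks roughly like this; a minimal sketch assuming only java.io, with the class name illustrative rather than the actual FileUtil code:
{code}
import java.io.File;
import java.io.IOException;

// File.listFiles() returns null on an I/O error or a non-directory path,
// and callers that index into the result hit an NPE. Wrapping the call and
// converting the null into an IOException forces callers to handle it.
public final class ListFilesSketch {
  public static File[] listFiles(File dir) throws IOException {
    File[] files = dir.listFiles();
    if (files == null) {
      throw new IOException("Invalid directory or I/O error for dir: " + dir);
    }
    return files;
  }

  public static void main(String[] args) throws IOException {
    for (File f : listFiles(new File("."))) {
      System.out.println(f.getName());
    }
  }
}
{code}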
[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace
[ https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181928#comment-13181928 ] Hudson commented on HDFS-2726: -- Integrated in Hadoop-Hdfs-0.23-Build #131 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/131/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : * /hadoop/common/branches/branch-0.23 * /hadoop/common/branches/branch-0.23/hadoop-common-project * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++ * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set
[ https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181925#comment-13181925 ] Hudson commented on HDFS-2729: -- Integrated in Hadoop-Hdfs-0.23-Build #131 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/131/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : (identical to the rev 1228562 file list in the HDFS-2726 notification above)
[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value
[ https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181930#comment-13181930 ] Hudson commented on HDFS-1314: -- Integrated in Hadoop-Hdfs-0.23-Build #131 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/131/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : (identical to the rev 1228562 file list in the HDFS-2726 notification above)
[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer
[ https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181929#comment-13181929 ] Hudson commented on HDFS-2349: -- Integrated in Hadoop-Hdfs-0.23-Build #131 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/131/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : (identical to the rev 1228562 file list in the HDFS-2726 notification above)
[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
[ https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181931#comment-13181931 ] Hudson commented on HDFS-554: - Integrated in Hadoop-Hdfs-0.23-Build #131 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/131/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : (identical to the rev 1228562 file list in the HDFS-2726 notification above)
[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set
[ https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181933#comment-13181933 ] Hudson commented on HDFS-2729: -- Integrated in Hadoop-Hdfs-trunk #918 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/918/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Update BlockManager's comments regarding the invalid block set > -- > > Key: HDFS-2729 > URL: https://issues.apache.org/jira/browse/HDFS-2729 > Project: Hadoop HDFS > Issue Type: Improvement > Components: name-node > Affects Versions: 0.23.0 > Reporter: Harsh J > Assignee: Harsh J > Priority: Minor > Fix For: 0.23.1 > > Attachments: HDFS-2729.patch > > > Looks like after HDFS-82 was addressed at some point, the comments and logs > still suggest the presence of two invalid-block sets when there is really just > one. This patch changes the logs and comments to be accurate about that.
[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace
[ https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181936#comment-13181936 ] Hudson commented on HDFS-2726: -- Integrated in Hadoop-Hdfs-trunk #918 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/918/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > "Exception in createBlockOutputStream" shouldn't delete exception stack trace > - > > Key: HDFS-2726 > URL: https://issues.apache.org/jira/browse/HDFS-2726 > Project: Hadoop HDFS > Issue Type: Improvement > Affects Versions: 0.23.0 > Reporter: Michael Bieniosek > Assignee: Harsh J > Fix For: 0.23.1 > > Attachments: HDFS-2726.patch > > > I'm occasionally (1/5000 times) getting this error after upgrading everything > to hadoop-0.18: > 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream > java.io.IOException: Could not read from stream > 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block > blk_624229997631234952_8205908 > DFSClient contains the logging code: > LOG.info("Exception in createBlockOutputStream " + ie); > This would be better written with ie as the second argument to LOG.info, so > that the stack trace could be preserved. As it is, I don't know how to start > debugging.
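The change the description asks for looks like this; a minimal runnable sketch, assuming the commons-logging Log API that DFSClient uses:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import java.io.IOException;

public class LogThrowableSketch {
  private static final Log LOG = LogFactory.getLog(LogThrowableSketch.class);

  public static void main(String[] args) {
    IOException ie = new IOException("Could not read from stream");
    // Before: string concatenation logs only ie.toString(); the trace is lost.
    LOG.info("Exception in createBlockOutputStream " + ie);
    // After: the Throwable overload makes the framework print the full trace.
    LOG.info("Exception in createBlockOutputStream", ie);
  }
}
{code}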
[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value
[ https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181938#comment-13181938 ] Hudson commented on HDFS-1314: -- Integrated in Hadoop-Hdfs-trunk #918 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/918/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > dfs.blocksize accepts only absolute value > - > > Key: HDFS-1314 > URL: https://issues.apache.org/jira/browse/HDFS-1314 > Project: Hadoop HDFS > Issue Type: Bug > Affects Versions: 0.23.0 > Reporter: Karim Saadah > Assignee: Sho Shimauchi > Priority: Minor > Labels: newbie > Fix For: 0.23.1 > > Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, > hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt > > > Using "dfs.block.size=8388608" works > but "dfs.block.size=8mb" does not. > Using "dfs.block.size=8mb" should throw some WARNING on NumberFormatException. > (http://pastebin.corp.yahoo.com/56129)
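For illustration, suffix-aware parsing of such values can be sketched as below. The committed fix went through Hadoop's own Configuration/StringUtils machinery, so parseSize() here is a hypothetical stand-in, not the actual API:
{code}
public class BlockSizeParseSketch {
  // Parse "8388608", "8m", or "8mb" into bytes using binary prefixes.
  static long parseSize(String value) {
    String v = value.trim().toLowerCase();
    if (v.length() > 1 && v.endsWith("b")) {
      v = v.substring(0, v.length() - 1);        // accept "8mb" as well as "8m"
    }
    long multiplier;
    switch (v.charAt(v.length() - 1)) {
      case 'k': multiplier = 1L << 10; break;
      case 'm': multiplier = 1L << 20; break;
      case 'g': multiplier = 1L << 30; break;
      default:  return Long.parseLong(v);        // plain absolute value
    }
    return Long.parseLong(v.substring(0, v.length() - 1).trim()) * multiplier;
  }

  public static void main(String[] args) {
    System.out.println(parseSize("8388608"));    // 8388608
    System.out.println(parseSize("8mb"));        // 8388608
  }
}
{code}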
[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer
[ https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181937#comment-13181937 ] Hudson commented on HDFS-2349: -- Integrated in Hadoop-Hdfs-trunk #918 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/918/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > DN should log a WARN, not an INFO when it detects a corruption during block > transfer > > > Key: HDFS-2349 > URL: https://issues.apache.org/jira/browse/HDFS-2349 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node > Affects Versions: 0.20.204.0 > Reporter: Harsh J > Assignee: Harsh J > Priority: Trivial > Fix For: 0.23.1 > > Attachments: HDFS-2349.diff > > > Currently, in DataNode.java, we have: > {code} > LOG.info("Can't replicate block " + block > + " because on-disk length " + onDiskLength > + " is shorter than NameNode recorded length " + > block.getNumBytes()); > {code} > This log is better off as a WARN as it indicates (and also reports) a > corruption.
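The requested change is a one-line severity bump; a runnable sketch with illustrative values (in DataNode.java the real values come from the replica and the NameNode-recorded block length):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class WarnOnCorruptionSketch {
  private static final Log LOG = LogFactory.getLog(WarnOnCorruptionSketch.class);

  public static void main(String[] args) {
    String block = "blk_1";        // illustrative values only
    long onDiskLength = 512;
    long recordedLength = 1024;
    // Same message as the quoted snippet, raised from INFO to WARN
    // because it reports a detected corruption:
    LOG.warn("Can't replicate block " + block
        + " because on-disk length " + onDiskLength
        + " is shorter than NameNode recorded length " + recordedLength);
  }
}
{code}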
[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
[ https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181939#comment-13181939 ] Hudson commented on HDFS-554: - Integrated in Hadoop-Hdfs-trunk #918 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/918/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > BlockInfo.ensureCapacity may get a speedup from System.arraycopy() > -- > > Key: HDFS-554 > URL: https://issues.apache.org/jira/browse/HDFS-554 > Project: Hadoop HDFS > Issue Type: Improvement > Components: name-node > Affects Versions: 0.21.0 > Reporter: Steve Loughran > Assignee: Harsh J > Priority: Minor > Fix For: 0.23.1 > > Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt > > > BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into > the expanded array. {{System.arraycopy()}} is generally much faster for > this, as it can do a bulk memory copy. There is also the typesafe Java6 > {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
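A sketch of the suggested change, with names illustrative rather than the actual BlockInfo internals:
{code}
public class EnsureCapacitySketch {
  // Grow an array and bulk-copy the old contents into the expanded one.
  static Object[] ensureCapacity(Object[] triplets, int additional) {
    Object[] expanded = new Object[triplets.length + additional];
    // Replaces the original element-by-element for() loop; System.arraycopy
    // can perform the copy as a single bulk memory move.
    System.arraycopy(triplets, 0, expanded, 0, triplets.length);
    return expanded;
  }

  public static void main(String[] args) {
    Object[] t = {"a", "b", "c"};
    System.out.println(ensureCapacity(t, 3).length);  // 6
  }
}
{code}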
[jira] [Commented] (HDFS-2762) TestCheckpoint is timing out
[ https://issues.apache.org/jira/browse/HDFS-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181941#comment-13181941 ] Uma Maheswara Rao G commented on HDFS-2762: --- Here is the fix for testMultipleSecondaryNameNodes. > TestCheckpoint is timing out > > > Key: HDFS-2762 > URL: https://issues.apache.org/jira/browse/HDFS-2762 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, name-node > Affects Versions: HA branch (HDFS-1623) > Reporter: Aaron T. Myers > Assignee: Uma Maheswara Rao G > Attachments: HDFS-2762.patch > > > TestCheckpoint is timing out on the HA branch, and has been for a few days.
[jira] [Updated] (HDFS-2762) TestCheckpoint is timing out
[ https://issues.apache.org/jira/browse/HDFS-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-2762: -- Attachment: HDFS-2762.patch > TestCheckpoint is timing out > > > Key: HDFS-2762 > URL: https://issues.apache.org/jira/browse/HDFS-2762 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, name-node > Affects Versions: HA branch (HDFS-1623) > Reporter: Aaron T. Myers > Assignee: Uma Maheswara Rao G > Attachments: HDFS-2762.patch > > > TestCheckpoint is timing out on the HA branch, and has been for a few days.
[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace
[ https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181947#comment-13181947 ] Hudson commented on HDFS-2726: -- Integrated in Hadoop-Mapreduce-0.23-Build #153 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/153/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : (identical to the rev 1228562 file list in the HDFS-2726 notification above)
[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set
[ https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181944#comment-13181944 ] Hudson commented on HDFS-2729: -- Integrated in Hadoop-Mapreduce-0.23-Build #153 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/153/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : (identical to the rev 1228562 file list in the HDFS-2726 notification above)
[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
[ https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181950#comment-13181950 ] Hudson commented on HDFS-554: - Integrated in Hadoop-Mapreduce-0.23-Build #153 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/153/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : (identical to the rev 1228562 file list in the HDFS-2726 notification above)
[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer
[ https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181948#comment-13181948 ] Hudson commented on HDFS-2349: -- Integrated in Hadoop-Mapreduce-0.23-Build #153 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/153/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : (identical to the rev 1228562 file list in the HDFS-2726 notification above)
[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value
[ https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181949#comment-13181949 ] Hudson commented on HDFS-1314: -- Integrated in Hadoop-Mapreduce-0.23-Build #153 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/153/]) merge HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228562 Files : * /hadoop/common/branches/branch-0.23 * /hadoop/common/branches/branch-0.23/hadoop-common-project * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++ * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/s
[jira] [Commented] (HDFS-2726) "Exception in createBlockOutputStream" shouldn't delete exception stack trace
[ https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181956#comment-13181956 ] Hudson commented on HDFS-2726: -- Integrated in Hadoop-Mapreduce-trunk #951 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/951/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > "Exception in createBlockOutputStream" shouldn't delete exception stack trace > - > > Key: HDFS-2726 > URL: https://issues.apache.org/jira/browse/HDFS-2726 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 0.23.0 >Reporter: Michael Bieniosek >Assignee: Harsh J > Fix For: 0.23.1 > > Attachments: HDFS-2726.patch > > > I'm occasionally (1/5000 times) getting this error after upgrading everything > to hadoop-0.18: > 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream > java.io.IOException: Could not read from stream > 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block > blk_624229997631234952_8205908 > DFSClient contains the logging code: > LOG.info("Exception in createBlockOutputStream " + ie); > This would be better written with ie as the second argument to LOG.info, so > that the stack trace could be preserved. As it is, I don't know how to start > debugging. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
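For illustration, a minimal sketch of the change the report asks for: commons-logging's Log.info(Object, Throwable) overload records the full stack trace, whereas string concatenation keeps only ie.toString():
{code}
// Before: only ie.toString() is appended; the stack trace is lost.
LOG.info("Exception in createBlockOutputStream " + ie);

// After: passing the Throwable as the second argument preserves the trace.
LOG.info("Exception in createBlockOutputStream", ie);
{code}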
[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
[ https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181959#comment-13181959 ] Hudson commented on HDFS-554: - Integrated in Hadoop-Mapreduce-trunk #951 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/951/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > BlockInfo.ensureCapacity may get a speedup from System.arraycopy() > -- > > Key: HDFS-554 > URL: https://issues.apache.org/jira/browse/HDFS-554 > Project: Hadoop HDFS > Issue Type: Improvement > Components: name-node >Affects Versions: 0.21.0 >Reporter: Steve Loughran >Assignee: Harsh J >Priority: Minor > Fix For: 0.23.1 > > Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt > > > BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into > the expanded array. {{System.arraycopy()}} is generally much faster for > this, as it can do a bulk memory copy. There is also the typesafe Java6 > {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
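As a rough sketch (invented helper, not the actual BlockInfo code), the difference is an element-wise loop versus a single bulk copy:
{code}
// Hypothetical grow-by-copy helper in the spirit of ensureCapacity().
public class GrowDemo {
  static Object[] grow(Object[] old, int extra) {
    Object[] wider = new Object[old.length + extra];
    // One bulk memory copy instead of a per-element for() loop.
    System.arraycopy(old, 0, wider, 0, old.length);
    return wider;
  }

  public static void main(String[] args) {
    System.out.println(grow(new Object[3], 2).length); // prints 5
  }
}
{code}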
[jira] [Commented] (HDFS-2729) Update BlockManager's comments regarding the invalid block set
[ https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181953#comment-13181953 ] Hudson commented on HDFS-2729: -- Integrated in Hadoop-Mapreduce-trunk #951 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/951/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Update BlockManager's comments regarding the invalid block set > -- > > Key: HDFS-2729 > URL: https://issues.apache.org/jira/browse/HDFS-2729 > Project: Hadoop HDFS > Issue Type: Improvement > Components: name-node >Affects Versions: 0.23.0 >Reporter: Harsh J >Assignee: Harsh J >Priority: Minor > Fix For: 0.23.1 > > Attachments: HDFS-2729.patch > > > Looks like after HDFS-82 was addressed at some point, the comments and logs > still refer to two sets when there really is just one set. > This patch changes the logs and comments to be more accurate about that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer
[ https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181957#comment-13181957 ] Hudson commented on HDFS-2349: -- Integrated in Hadoop-Mapreduce-trunk #951 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/951/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > DN should log a WARN, not an INFO when it detects a corruption during block > transfer > > > Key: HDFS-2349 > URL: https://issues.apache.org/jira/browse/HDFS-2349 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.20.204.0 >Reporter: Harsh J >Assignee: Harsh J >Priority: Trivial > Fix For: 0.23.1 > > Attachments: HDFS-2349.diff > > > Currently, in DataNode.java, we have: > {code} > LOG.info("Can't replicate block " + block > + " because on-disk length " + onDiskLength > + " is shorter than NameNode recorded length " + > block.getNumBytes()); > {code} > This log is better off as a WARN as it indicates (and also reports) a > corruption. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
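For clarity, the proposed one-line change against the snippet quoted above would read:
{code}
// Corruption indicators should surface at WARN, not INFO.
LOG.warn("Can't replicate block " + block
    + " because on-disk length " + onDiskLength
    + " is shorter than NameNode recorded length " + block.getNumBytes());
{code}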
[jira] [Commented] (HDFS-1314) dfs.blocksize accepts only absolute value
[ https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181958#comment-13181958 ] Hudson commented on HDFS-1314: -- Integrated in Hadoop-Mapreduce-trunk #951 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/951/]) Merged HDFS-2349, HDFS-2729, HDFS-2726, HDFS-554, HDFS-1314, HADOOP-7910 to branch-0.23, updating CHANGES.txt for trunk. harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228567 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > dfs.blocksize accepts only absolute value > - > > Key: HDFS-1314 > URL: https://issues.apache.org/jira/browse/HDFS-1314 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.23.0 >Reporter: Karim Saadah >Assignee: Sho Shimauchi >Priority: Minor > Labels: newbie > Fix For: 0.23.1 > > Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, > hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt > > > Using "dfs.block.size=8388608" works > but "dfs.block.size=8mb" does not. > Using "dfs.block.size=8mb" should throw some WARNING on NumberFormatException. > (http://pastebin.corp.yahoo.com/56129) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
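As an illustration of suffix-aware parsing, a hedged sketch assuming the TraditionalBinaryPrefix helper in org.apache.hadoop.util.StringUtils, which understands single-letter k/m/g-style suffixes (note that "8mb" with a trailing "b" would still be rejected):
{code}
import org.apache.hadoop.util.StringUtils;

public class BlockSizeParse {
  public static void main(String[] args) {
    // "8m" parses to 8 * 1024 * 1024, where plain Long.parseLong("8m")
    // would throw NumberFormatException.
    long blockSize = StringUtils.TraditionalBinaryPrefix.string2long("8m");
    System.out.println(blockSize); // 8388608
  }
}
{code}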
[jira] [Commented] (HDFS-2709) HA: Appropriately handle error conditions in EditLogTailer
[ https://issues.apache.org/jira/browse/HDFS-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13181975#comment-13181975 ] Hudson commented on HDFS-2709: -- Integrated in Hadoop-Hdfs-HAbranch-build #40 (See [https://builds.apache.org/job/Hadoop-Hdfs-HAbranch-build/40/]) HDFS-2709. Appropriately handle error conditions in EditLogTailer. Contributed by Aaron T. Myers. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1228390 Files : * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-1623.txt * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogInputException.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogRace.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileJournalManager.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecurityTokenEditLog.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailureToReadEdits.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java * /hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyIsHot.java > HA: Appropriately handle error conditions in EditLogTailer > -- > > Key: HDFS-2709 > URL: https://issues.apache.org/jira/browse/HDFS-2709 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, name-node >Affects Versions: HA branch (HDFS-1623) >Reporter: Todd Lipcon >Assignee: Aaron T. 
Myers >Priority: Critical > Fix For: HA branch (HDFS-1623) > > Attachments: HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, > HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, > HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, > HDFS-2709-HDFS-1623.patch, HDFS-2709-HDFS-1623.patch, > HDFS-2709-HDFS-1623.patch > > > Currently if the edit log tailer experiences an error replaying edits in the > middle of a file, it will go back to retrying from the beginning of the file > on the next tailing iteration. This is incorrect since many of the edits will > have already been replayed, and not all edits are idempotent. > Instead, we either need to (a) support reading from the middle of a finalized > file (ie skip those edits already applied), or (b) abort the standby if it > hits an error while tailing. If "a" isn't simple, let's do "b" for now and > come back to 'a' later since this is a rare circumstance and better to abort > than be incorrect. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-891) DataNode.instantiateDataNode calls system.exit(-1) if conf.get("dfs.network.script") != null
[ https://issues.apache.org/jira/browse/HDFS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J updated HDFS-891: - Target Version/s: 0.24.0, 0.23.1 Status: Patch Available (was: Open) > DataNode.instantiateDataNode calls system.exit(-1) if > conf.get("dfs.network.script") != null > > > Key: HDFS-891 > URL: https://issues.apache.org/jira/browse/HDFS-891 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Priority: Minor > Attachments: HDFS-891.patch > > > Looking at the code for {{DataNode.instantiateDataNode())} , I see that it > calls {{system.exit(-1)}} if it is not happy with the configuration > {code} > if (conf.get("dfs.network.script") != null) { > LOG.error("This configuration for rack identification is not supported" > + > " anymore. RackID resolution is handled by the NameNode."); > System.exit(-1); > } > {code} > This is excessive. It should throw an exception and let whoever called the > method decide how to handle it. The {{DataNode.main()}} method will log the > exception and exit with a -1 value, but other callers (such as anything using > {{MiniDFSCluster}} will now see a meaningful message rather than some Junit > "tests exited without completing" warning. > Easy to write a test for the correct behaviour: start a {{MiniDFSCluster}} > with this configuration set, see what happens. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
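A minimal sketch of the throw-instead-of-exit alternative the description asks for (the attached patch, per the following update, removes the check altogether; a java.io.IOException import is assumed in the enclosing class):
{code}
// Sketch: surface the misconfiguration to the caller instead of killing
// the JVM, so MiniDFSCluster-based tests see a meaningful error.
if (conf.get("dfs.network.script") != null) {
  throw new IOException("dfs.network.script is not supported anymore; "
      + "RackID resolution is handled by the NameNode.");
}
{code}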
[jira] [Updated] (HDFS-891) DataNode.instantiateDataNode calls system.exit(-1) if conf.get("dfs.network.script") != null
[ https://issues.apache.org/jira/browse/HDFS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J updated HDFS-891: - Attachment: HDFS-891.patch I think we can do away with this check. It is a wrong prop name today, and even if it does exist in the configuration, it's not an issue if we already ignore it. Attached a patch that gets rid of this legacy check. > DataNode.instantiateDataNode calls system.exit(-1) if > conf.get("dfs.network.script") != null > > > Key: HDFS-891 > URL: https://issues.apache.org/jira/browse/HDFS-891 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Priority: Minor > Attachments: HDFS-891.patch > > > Looking at the code for {{DataNode.instantiateDataNode())} , I see that it > calls {{system.exit(-1)}} if it is not happy with the configuration > {code} > if (conf.get("dfs.network.script") != null) { > LOG.error("This configuration for rack identification is not supported" > + > " anymore. RackID resolution is handled by the NameNode."); > System.exit(-1); > } > {code} > This is excessive. It should throw an exception and let whoever called the > method decide how to handle it. The {{DataNode.main()}} method will log the > exception and exit with a -1 value, but other callers (such as anything using > {{MiniDFSCluster}} will now see a meaningful message rather than some Junit > "tests exited without completing" warning. > Easy to write a test for the correct behaviour: start a {{MiniDFSCluster}} > with this configuration set, see what happens. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HDFS-973) DataNode.setDataNode() considered dangerous
[ https://issues.apache.org/jira/browse/HDFS-973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J resolved HDFS-973. -- Resolution: Cannot Reproduce Reading the DataNode instantiation code, this does not seem to be a problem in 0.23+. > DataNode.setDataNode() considered dangerous > --- > > Key: HDFS-973 > URL: https://issues.apache.org/jira/browse/HDFS-973 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Priority: Minor > > I don't have any plans to address this, but it seems to me that having the > DataNode save a reference to itself in its constructor by way of > {{DataNode.setDataNode(this)}} is hazardous. > # The reference could be used before the constructor has finished, especially > when subclasses are involved > # Callers may assume the DN is actually live > # If startup fails, the DN tries to shut down, but the reference hangs > around. Dangerous as well as leaking a reference > # The reference gets retained forever > # It's a singleton that will get confused if >1 DN gets instantiated in-VM > The likely way these problems will surface is in race conditions that are > more likely the more cores you have on the machine - production rather than > development. This is why it is dangerous. > As part of the service lifecycle patch, I could have this reference only set > when the service gets started, set it to null when stopped (and the > reference==this). But really the singleton should be removed altogether, > somehow. There are methods in DataNode, DataStorage, FSDataset and the > namenode that do this, and they should somehow get a reference to any in-VM > DN in a cleaner way. For example, servlets can have it set as servlet context. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
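A hypothetical illustration (class and field names invented) of why publishing {{this}} from a constructor is risky: another thread can observe the object before construction finishes, which also voids the usual final-field publication guarantees:
{code}
// Hypothetical: a singleton back-reference set inside the constructor,
// in the style of DataNode.setDataNode(this).
class NodeLike {
  static volatile NodeLike instance;
  private final Object registration;

  NodeLike() {
    instance = this;            // "this" escapes here...
    registration = register();  // ...so a reader may still see registration == null
  }

  private Object register() {
    return new Object();
  }
}
{code}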
[jira] [Commented] (HDFS-1564) Make dfs.datanode.du.reserved configurable per volume
[ https://issues.apache.org/jira/browse/HDFS-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182009#comment-13182009 ] Harsh J commented on HDFS-1564: --- I wonder if this still makes sense to have, or whether we should just close it and keep the existing behavior, which seems to work well enough for many users (few seem to ask for this or complain about it). Rita's way would incur too many configuration additions, considering that we have DNs today with 8-12 disks or even more. A comma-separated list of reservations is also expensive (and confusing) to maintain, and an admin can easily make mistakes when adding/removing volumes in the data dir set. I propose that we close this as Won't Fix for now, until there is sufficient user-driven need. I've simply not seen enough demand for this, and a single reserved value applied across all disks also makes sense. > Make dfs.datanode.du.reserved configurable per volume > - > > Key: HDFS-1564 > URL: https://issues.apache.org/jira/browse/HDFS-1564 > Project: Hadoop HDFS > Issue Type: New Feature > Components: data-node >Reporter: Aaron T. Myers >Priority: Minor > > In clusters with DNs which have heterogeneous data dir volumes, it would be > nice if dfs.datanode.du.reserved could be configured per-volume. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HDFS-1587) JVM crash under writes: ExceptionMark destructor expects no pending exceptions
[ https://issues.apache.org/jira/browse/HDFS-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J resolved HDFS-1587. --- Resolution: Not A Problem This looked to be because jars were updated during runtime. The mail thread http://comments.gmane.org/gmane.comp.java.openjdk.hotspot.gc.devel/2422 has some more info. Resolving as Not A Problem because it was not a Hadoop fault, as determined above. Recording it here for folks who may search for it later. > JVM crash under writes: ExceptionMark destructor expects no pending > exceptions > > > Key: HDFS-1587 > URL: https://issues.apache.org/jira/browse/HDFS-1587 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.20.2 > Environment: uname:Linux 2.6.32-25-generic #44-Ubuntu SMP Fri Sep 17 > 20:05:27 UTC 2010 x86_64 > libc:glibc 2.11.1 NPTL 2.11.1 > rlimit: STACK 8192k, CORE 0k, NPROC infinity, NOFILE 64000, AS infinity > load average:0.32 0.25 0.20 > CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 15 stepping > 6, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3 > Memory: 4k page, physical 4057424k(108876k free), swap 24009100k(24009100k > free) > vm_info: Java HotSpot(TM) 64-Bit Server VM (16.3-b01) for linux-amd64 JRE > (1.6.0_20-b02), built on Apr 12 2010 13:57:11 by "java_re" with gcc 3.2.2 > (SuSE Linux) > time: Wed Jan 19 13:01:35 2011 > elapsed time: 1983565 seconds >Reporter: Stuart Smith >Priority: Minor > > Datanode went down due to JVM fault. > A decent number of reads/writes going on at the time. > To be honest, I'm not too worried about this, because it hasn't happened too > frequently. > But I thought I'd just record it. > hs_error_pid*log was: > # > # A fatal error has been detected by the Java Runtime Environment: > # > # Internal Error (exceptions.cpp:364), pid=1662, tid=140679121118992 > # Error: ExceptionMark destructor expects no pending exceptions > # > # JRE version: 6.0_20-b02 > # Java VM: Java HotSpot(TM) 64-Bit Server VM (16.3-b01 mixed mode linux-amd64 > ) > # If you would like to submit a bug report, please visit: > # http://java.sun.com/webapps/bugreport/crash.jsp > # > --- T H R E A D --- > Current thread (0x7ff26433): JavaThread "IPC Server handler 0 on > 50020" daemon [_thread_in_vm, id=1755, > stack(0x7ff268faa000,0x7ff2690ab000)] > Stack: [0x7ff268faa000,0x7ff2690ab000], sp=0x7ff2690a4680, free > space=3e90018k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > V [libjvm.so+0x70f420] > V [libjvm.so+0x2e43f6] > V [libjvm.so+0xfd] > V [libjvm.so+0x27ea18] > V [libjvm.so+0x27e062] > V [libjvm.so+0x27e0e6] > V [libjvm.so+0x27faf6] > V [libjvm.so+0x6a0c6f] > V [libjvm.so+0x69edbb] > V [libjvm.so+0x69dc51] > V [libjvm.so+0x69dc80] > V [libjvm.so+0x44251b] > Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > j > java.lang.ClassLoader.findBootstrapClass(Ljava/lang/String;)Ljava/lang/Class;+0 > j > java.lang.ClassLoader.findBootstrapClassOrNull(Ljava/lang/String;)Ljava/lang/Class;+12 > j java.lang.ClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+32 > j java.lang.ClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+23 > j > sun.misc.Launcher$AppClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+41 > j java.lang.ClassLoader.loadClass(Ljava/lang/String;)Ljava/lang/Class;+3 > j > java.util.ResourceBundle$RBClassLoader.loadClass(Ljava/lang/String;)Ljava/lang/Class;+10 > j > 
java.util.ResourceBundle$Control.newBundle(Ljava/lang/String;Ljava/util/Locale;Ljava/lang/String;Ljava/lang/ClassLoader;Z)Ljava/util/ResourceBundle;+24 > j > java.util.ResourceBundle.loadBundle(Ljava/util/ResourceBundle$CacheKey;Ljava/util/List;Ljava/util/ResourceBundle$Control;Z)Ljava/util/ResourceBundle;+54 > j > java.util.ResourceBundle.findBundle(Ljava/util/ResourceBundle$CacheKey;Ljava/util/List;Ljava/util/List;ILjava/util/ResourceBundle$Control;Ljava/util/ResourceBundle;)Ljava/util/ResourceBundle;+213 > j > java.util.ResourceBundle.findBundle(Ljava/util/ResourceBundle$CacheKey;Ljava/util/List;Ljava/util/List;ILjava/util/ResourceBundle$Control;Ljava/util/ResourceBundle;)Ljava/util/ResourceBundle;+37 > j > java.util.ResourceBundle.getBundleImpl(Ljava/lang/String;Ljava/util/Locale;Ljava/lang/ClassLoader;Ljava/util/ResourceBundle$Control;)Ljava/util/ResourceBundle;+187 > j > java.util.ResourceBundle.getBundle(Ljava/lang/String;)Ljava/util/ResourceBundle;+10 > j com.sun.security.auth.PolicyFile$1.run()Ljava/lang/Object;+2 > v ~StubRoutines::call_stub > j > java.security.AccessController.doPrivileged(Ljava/security/PrivilegedAction;)Ljava/lang/Object;+0 > j com.sun.
[jira] [Commented] (HDFS-1724) Can not browse the Data Node blocks from NameNode UI if there is no Data Node hostname mappings in clients.
[ https://issues.apache.org/jira/browse/HDFS-1724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182017#comment-13182017 ] Harsh J commented on HDFS-1724: --- I'd argue that this is a client-end/environment issue, and that we shouldn't hack up the web services to use IPs when redirecting requests. If you agree, we can resolve this as Won't Fix. > Can not browse the Data Node blocks from NameNode UI if there is no Data Node > hostname mappings in clients. > --- > > Key: HDFS-1724 > URL: https://issues.apache.org/jira/browse/HDFS-1724 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node, hdfs client >Reporter: Uma Maheswara Rao G >Priority: Minor > > In our environment, we don't have any host mappings for the Data Nodes. > When we browse the block information from the Name Node UI, it uses the hostname > to connect, so the connection fails. If we use the IP address directly > instead of the hostname mapping, things work fine. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HDFS-1724) Can not browse the Data Node blocks from NameNode UI if there is no Data Node hostname mappings in clients.
[ https://issues.apache.org/jira/browse/HDFS-1724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G resolved HDFS-1724. --- Resolution: Not A Problem Release Note: Clients should have host mappings configured. > Can not browse the Data Node blocks from NameNode UI if there is no Data Node > hostname mappings in clients. > --- > > Key: HDFS-1724 > URL: https://issues.apache.org/jira/browse/HDFS-1724 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node, hdfs client >Reporter: Uma Maheswara Rao G >Priority: Minor > > In our environment, we don't have any host mappings for the Data Nodes. > When we browse the block information from the Name Node UI, it uses the hostname > to connect, so the connection fails. If we use the IP address directly > instead of the hostname mapping, things work fine. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-739) "Go to parent directory" link broken running on windows
[ https://issues.apache.org/jira/browse/HDFS-739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182024#comment-13182024 ] Harsh J commented on HDFS-739: -- I think using the File API for that is a bad idea. We should use Path. > "Go to parent directory" link broken running on windows > --- > > Key: HDFS-739 > URL: https://issues.apache.org/jira/browse/HDFS-739 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.20.1 > Environment: windows >Reporter: zhouyanming > Original Estimate: 5m > Remaining Estimate: 5m > > src/webapps/datanode/browseDirectory.jsp > File f = new File(dir); > String parent; > if ((parent = f.getParent()) != null) > out.print("Go to parent directory"); > File.getParent() returns the separator "\", not "/" -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
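A sketch of that suggestion, assuming org.apache.hadoop.fs.Path, which normalizes path separators to "/" on every platform:
{code}
import org.apache.hadoop.fs.Path;

public class ParentLink {
  // Unlike java.io.File.getParent(), which returns "\"-separated paths on
  // Windows, Path.getParent() always yields "/"-separated components.
  static String parentOf(String dir) {
    Path parent = new Path(dir).getParent();
    return parent == null ? null : parent.toString();
  }

  public static void main(String[] args) {
    System.out.println(parentOf("/user/hadoop/data")); // prints /user/hadoop
  }
}
{code}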
[jira] [Commented] (HDFS-891) DataNode.instantiateDataNode calls system.exit(-1) if conf.get("dfs.network.script") != null
[ https://issues.apache.org/jira/browse/HDFS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182025#comment-13182025 ] Hadoop QA commented on HDFS-891: -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12509791/HDFS-891.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. -1 javadoc. The javadoc tool appears to have generated 21 warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 eclipse:eclipse. The patch built with eclipse:eclipse. -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings. -1 release audit. The applied patch generated 1 release audit warnings (more than the trunk's current 0 warnings). +1 core tests. The patch passed unit tests in . +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1766//testReport/ Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/1766//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/1766//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1766//console This message is automatically generated. > DataNode.instantiateDataNode calls system.exit(-1) if > conf.get("dfs.network.script") != null > > > Key: HDFS-891 > URL: https://issues.apache.org/jira/browse/HDFS-891 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Priority: Minor > Attachments: HDFS-891.patch > > > Looking at the code for {{DataNode.instantiateDataNode())} , I see that it > calls {{system.exit(-1)}} if it is not happy with the configuration > {code} > if (conf.get("dfs.network.script") != null) { > LOG.error("This configuration for rack identification is not supported" > + > " anymore. RackID resolution is handled by the NameNode."); > System.exit(-1); > } > {code} > This is excessive. It should throw an exception and let whoever called the > method decide how to handle it. The {{DataNode.main()}} method will log the > exception and exit with a -1 value, but other callers (such as anything using > {{MiniDFSCluster}} will now see a meaningful message rather than some Junit > "tests exited without completing" warning. > Easy to write a test for the correct behaviour: start a {{MiniDFSCluster}} > with this configuration set, see what happens. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-583) HDFS should enforce a max block size
[ https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J updated HDFS-583: - Component/s: (was: data-node) name-node Summary: HDFS should enforce a max block size (was: DataNode should enforce a max block size) > HDFS should enforce a max block size > > > Key: HDFS-583 > URL: https://issues.apache.org/jira/browse/HDFS-583 > Project: Hadoop HDFS > Issue Type: Improvement > Components: name-node >Reporter: Hairong Kuang > > When DataNode creates a replica, it should enforce a max block size, so > clients can't go crazy. One way of enforcing this is to make > BlockWritesStreams to be filter streams that check the block size. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HDFS-1863) When all data directory volumes pulled out in DataNode, Its better to shutdown.
[ https://issues.apache.org/jira/browse/HDFS-1863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J resolved HDFS-1863. --- Resolution: Cannot Reproduce This doesn't seem to be a problem on 0.23 given Bharath's work (see comments above). > When all data directory volumes pulled out in DataNode, Its better to > shutdown. > --- > > Key: HDFS-1863 > URL: https://issues.apache.org/jira/browse/HDFS-1863 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.20.1 >Reporter: Brahma >Priority: Minor > > When all the data directory volumes are pulled out of a DataNode, > it does not shut down. Because of this, the NameNode keeps selecting this DN > for writes, but the DataNode keeps saying 'No available volumes' and throwing > an exception. Instead, we should shut down the DataNode. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-664) Add a way to efficiently replace a disk in a live datanode
[ https://issues.apache.org/jira/browse/HDFS-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J updated HDFS-664: - Resolution: Duplicate Status: Resolved (was: Patch Available) Dupe of HDFS-1362, per comments above. > Add a way to efficiently replace a disk in a live datanode > -- > > Key: HDFS-664 > URL: https://issues.apache.org/jira/browse/HDFS-664 > Project: Hadoop HDFS > Issue Type: New Feature > Components: data-node >Affects Versions: 0.22.0 >Reporter: Steve Loughran > Attachments: HDFS-664.0-20-3-rc2.patch.1, HDFS-664.patch > > > In clusters where the datanode disks are hot swappable, you need to be able > to swap out a disk on a live datanode without taking down the datanode. You > don't want to decommission the whole node as that is overkill. On a system > with 4 1TB HDDs, giving 3 TB of datanode storage, a decommissioning and > restart will consume up to 6 TB of bandwidth. If a single disk were swapped > in, then there would only be 1TB of data to recover over the network. More > importantly, if that data could be moved to free space on the same machine, > the recommissioning could take place at disk rates, not network speeds. > # Maybe have a way of decommissioning a single disk on the DN; the files > could be moved to space on the other disks or the other machines in the rack. > # There may not be time to use that option, in which case pulling out the > disk would be done with no warning, a new disk inserted. > # The DN needs to see that a disk has been replaced (or react to some ops > request telling it this), and start using the new disk again - pushing back > data, rebuilding the balance. > To complicate the process, assume there is a live TT on the system, running > jobs against the data. The TT would probably need to be paused while the work > takes place, any ongoing work handled somehow. Halting the TT and then > restarting it after the replacement disk went in is probably simplest. > The more disks you add to a node, the more this scenario becomes a need. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-2767) HA: ConfiguredFailoverProxyProvider should support NameNodeProtocol
HA: ConfiguredFailoverProxyProvider should support NameNodeProtocol --- Key: HDFS-2767 URL: https://issues.apache.org/jira/browse/HDFS-2767 Project: Hadoop HDFS Issue Type: Sub-task Components: ha, hdfs client Affects Versions: HA branch (HDFS-1623) Reporter: Uma Maheswara Rao G Priority: Blocker Presently, ConfiguredFailoverProxyProvider supports only ClientProtocol. It should support NameNodeProtocol as well, because the Balancer uses NameNodeProtocol for getting blocks. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-583) HDFS should enforce a max block size
[ https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182034#comment-13182034 ] Harsh J commented on HDFS-583: -- We should cap the DFSClient as well, as it'd help save an RPC call if the problem is detected early. The remaining argument would be: What's the best default? 8g? I'm also sorta -0 on this since we've not limited this before, and if folks have been writing really huge files with very large block sizes in their HDFS already, they'd be upset at this behavior change. > HDFS should enforce a max block size > > > Key: HDFS-583 > URL: https://issues.apache.org/jira/browse/HDFS-583 > Project: Hadoop HDFS > Issue Type: Improvement > Components: name-node >Reporter: Hairong Kuang > > When DataNode creates a replica, it should enforce a max block size, so > clients can't go crazy. One way of enforcing this is to make > BlockWritesStreams to be filter streams that check the block size. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
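A hypothetical sketch (method and parameter names invented, not the actual DFSClient code) of the early client-side check suggested above, failing fast before any RPC is issued:
{code}
// Hypothetical pre-RPC validation; a java.io.IOException import is assumed.
static void checkBlockSize(long requested, long maxBlockSize)
    throws IOException {
  if (requested > maxBlockSize) {
    throw new IOException("Requested block size " + requested
        + " exceeds the configured maximum of " + maxBlockSize);
  }
}
{code}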
[jira] [Updated] (HDFS-1273) Handle disk failure when writing new blocks on datanode
[ https://issues.apache.org/jira/browse/HDFS-1273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J updated HDFS-1273: -- Resolution: Duplicate Status: Resolved (was: Patch Available) Resolving based on Uma's comment above. > Handle disk failure when writing new blocks on datanode > --- > > Key: HDFS-1273 > URL: https://issues.apache.org/jira/browse/HDFS-1273 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.21.0 >Reporter: Jeff Zhang >Assignee: Jeff Zhang > Attachments: HDFS_1273.patch > > > This issue relates to HDFS-457; the patch for HDFS-457 handles only disk > failure when reading. This jira is to handle disk failure when writing > new blocks on the data node. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-252) Export the HDFS file system through a NFS protocol
[ https://issues.apache.org/jira/browse/HDFS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182035#comment-13182035 ] Brock Noland commented on HDFS-252: --- I wrote an NFS4 proxy for HDFS which seems to work well. The lack of an open/close operator in NFS3 drove me towards NFS4. It's Apache licensed and located here: https://github.com/brockn/hdfs-nfs-proxy > Export the HDFS file system through a NFS protocol > -- > > Key: HDFS-252 > URL: https://issues.apache.org/jira/browse/HDFS-252 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: dhruba borthakur >Assignee: dhruba borthakur > Attachments: nfshadoop.tar.gz > > > It would be nice if we could expose the HDFS filesystem using the NFS protocol. > There are a couple of options that I could find: > 1. Use a user space C-language implementation of a NFS server and then use > the libhdfs API to integrate that code with Hadoop. There is such an > implementation available at > http://sourceforge.net/project/showfiles.php?group_id=66203. > 2. Use a user space Java implementation of a NFS server and then integrate > it with HDFS using the Java API. There is such an implementation of a NFS server at > http://void.org/~steven/jnfs/. > I have experimented with Option 2 and have written a first version of the > Hadoop integration. I am attaching the code for your preliminary feedback. > This implementation of the Java NFS server has one limitation: it supports > UDP only. Some licensing issues will have to be sorted out before it can be > used. Steve (the writer of the NFS server implementation) has told me that he > can change the licensing of the code if needed. > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2592) HA: Balancer support for HA namenodes
[ https://issues.apache.org/jira/browse/HDFS-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182037#comment-13182037 ] Uma Maheswara Rao G commented on HDFS-2592: --- Balancer HA support is complete for ClientProtocol; support is pending for the NameNodeProtocol API used in the Balancer. Currently, ConfiguredFailoverProxyProvider supports only ClientProtocol. Filed HDFS-2767 for NameNodeProtocol support. > HA: Balancer support for HA namenodes > - > > Key: HDFS-2592 > URL: https://issues.apache.org/jira/browse/HDFS-2592 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer, ha >Affects Versions: HA branch (HDFS-1623) >Reporter: Todd Lipcon >Assignee: Uma Maheswara Rao G > > The balancer currently interacts directly with namenode InetSocketAddresses > and makes its own IPC proxies. We need to integrate it with HA so that it > uses the same client failover infrastructure. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-1960) dfs.*.dir should not default to /tmp (or other typically volatile storage)
[ https://issues.apache.org/jira/browse/HDFS-1960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182041#comment-13182041 ] Harsh J commented on HDFS-1960: --- Another possibly workable default could be the relative path "{{.}}". I think this is an issue that could largely be solved by packaging and better administrator documentation. > dfs.*.dir should not default to /tmp (or other typically volatile storage) > -- > > Key: HDFS-1960 > URL: https://issues.apache.org/jira/browse/HDFS-1960 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.20.2 > Environment: *nix systems >Reporter: philo vivero >Priority: Critical > > The hdfs-site.xml file possibly will not define one or both of: > dfs.name.dir > dfs.data.dir > If they are not specified, data is stored in /tmp. This is extremely > dangerous. Rationale: the cluster will work fine for days, possibly even > weeks, before blocks will start to go missing. Rebooting a datanode on common > Linux systems will clear all the data from that node. There is no documented > way (that I'm aware of) to recover the situation. The cluster must be > completely obliterated and rebuilt from scratch. > Better reactions to the missing configuration parameters: > 1. DataNode dies on startup and asks that these parameters be defined. > 2. Default is /var/db/hadoop (or some other non-volatile storage location). > Naturally, inability to write into that directory leads the DataNode to die on > startup, logging an error. > The first solution would be most likely preferred by typical enterprise > sysadmins. The second solution is suboptimal (since /var/db/hadoop might not > be the optimal location for the data) but is still preferable to the current > implementation, since it will less often lead to an irretrievably corrupt > cluster. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
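For example, a hedged hdfs-site.xml snippet (the paths are illustrative choices, not recommendations) that pins both directories to durable storage so neither falls back to the /tmp-based hadoop.tmp.dir default:
{code}
<!-- Illustrative values; pick durable local paths for your hosts. -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/var/lib/hadoop/data</value>
</property>
{code}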
[jira] [Created] (HDFS-2768) BackupNode stop can not close proxy connections because it is not a proxy instance.
BackupNode stop can not close proxy connections because it is not a proxy instance. --- Key: HDFS-2768 URL: https://issues.apache.org/jira/browse/HDFS-2768 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.24.0 Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G Observed this in the BackupNode tests: java.lang.IllegalArgumentException: not a proxy instance at java.lang.reflect.Proxy.getInvocationHandler(Unknown Source) at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:557) at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:194) at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:355) at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testBackupNode(TestBackupNode.java:241) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at junit.framework.TestCase.runTest(TestCase.java:168) at junit.framework.TestCase.runBare(TestCase.java:134) at junit.framework.TestResult$1.protect(TestResult.java:110) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
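A hedged sketch of the failure mode: the trace shows RPC.stopProxy() reaching java.lang.reflect.Proxy.getInvocationHandler(), which rejects anything that is not a dynamic proxy, so a guard along these lines would avoid the exception when a direct NameNode reference is held:
{code}
import java.lang.reflect.Proxy;

public class ProxyGuard {
  // Only hand the object to RPC.stopProxy() when it really is a
  // java.lang.reflect.Proxy instance.
  static boolean isRpcProxy(Object obj) {
    return obj != null && Proxy.isProxyClass(obj.getClass());
  }
}
{code}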
[jira] [Assigned] (HDFS-2764) HA: TestBackupNode is failing
[ https://issues.apache.org/jira/browse/HDFS-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G reassigned HDFS-2764: - Assignee: Uma Maheswara Rao G > HA: TestBackupNode is failing > - > > Key: HDFS-2764 > URL: https://issues.apache.org/jira/browse/HDFS-2764 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, name-node >Affects Versions: HA branch (HDFS-1623) >Reporter: Aaron T. Myers >Assignee: Uma Maheswara Rao G > > Looks like it has been for a few days. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2764) HA: TestBackupNode is failing
[ https://issues.apache.org/jira/browse/HDFS-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182049#comment-13182049 ] Uma Maheswara Rao G commented on HDFS-2764: --- I ran these tests several times, and they pass every time for me. Aaron, is this still failing for you? > HA: TestBackupNode is failing > - > > Key: HDFS-2764 > URL: https://issues.apache.org/jira/browse/HDFS-2764 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, name-node >Affects Versions: HA branch (HDFS-1623) >Reporter: Aaron T. Myers >Assignee: Uma Maheswara Rao G > > Looks like it has been for a few days. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2764) HA: TestBackupNode is failing
[ https://issues.apache.org/jira/browse/HDFS-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182051#comment-13182051 ] Aaron T. Myers commented on HDFS-2764: -- I don't have time just this moment, but I'll take a look again later today or tomorrow. When I reported this issue, it failed for me 3-4 times in a row. Feel free to re-assign it to me and I'll take care of it. > HA: TestBackupNode is failing > - > > Key: HDFS-2764 > URL: https://issues.apache.org/jira/browse/HDFS-2764 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha, name-node >Affects Versions: HA branch (HDFS-1623) >Reporter: Aaron T. Myers >Assignee: Uma Maheswara Rao G > > Looks like it has been for a few days. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-233) Support for snapshots
[ https://issues.apache.org/jira/browse/HDFS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182056#comment-13182056 ] Joe Kraska commented on HDFS-233: - Reviewing the comments and noting the data warehousing feature requests and the like, I thought I would comment on the snapshot feature from the more pragmatic perspective of simple, responsible data stewardship. By and large, the most important features of snapshots are being able to: 1. Do them live. 2. Do them economically: do not require particularly large amounts of space for the snapshot. 3. Have a dozen or so (and often fewer). 4. Schedule them (hourly, daily, weekly, with emphasis on the latter two). 5. Selectively restore portions of the tree after user- or program-caused erasure or damage. 6. Quickly conduct a restore of either a sub-portion of the tree or an entire volume. The above set of features is about fundamental data protection, cost, and restore time objectives. They are directly related to economical data stewardship, and are considered the first line of defense for data protection in many enterprises today. I.e., we data stewards prefer these features over tape restores (although we also use tape, we hate it). *AFTER* the above, space-efficient *writable* snapshots are interesting. This is because there are test applications over current data sets where touching the master data set is a complete no-no, but the application needs to make trial changes. These snapshots are often made, modified for a while, then deleted. You will want minimal performance impact for these snapshots, because the assumption should be that the scheduled snapshot system is ALWAYS used. The one exception to this is static read-only data where a single manual snapshot is recorded just once. Everything else will have something like 2 daily and 2 weekly snapshots going all the time. Some enterprises will also use snapshots scheduled every 6 hours or so and retain about a day of those... As a side note (and no offense to the Hadoop community), I regard all shared storage as defective for data stewardship purposes if it does not have the above features (except writable snapshots, that's candy), and I am not the least bit alone. Any data protection strategy that says "go to tape for that" as its first offer is... onerous. While the following matter is merely my opinion, I feel pretty sure that the rise of the enterprise NAS appliance (e.g., NetApp et al) is at least partly due to the default nature of snapshot protection on those devices. Food for thought. > Support for snapshots > - > > Key: HDFS-233 > URL: https://issues.apache.org/jira/browse/HDFS-233 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: dhruba borthakur >Assignee: dhruba borthakur > Attachments: Snapshots.pdf, Snapshots.pdf > > > Support HDFS snapshots. It should support creating snapshots without shutting > down the file system. Snapshot creation should be lightweight and a typical > system should be able to support a few thousands concurrent snapshots. There > should be a way to surface (i.e. mount) a few of these snapshots > simultaneously. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-233) Support for snapshots
[ https://issues.apache.org/jira/browse/HDFS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182058#comment-13182058 ] Philip Zeyliger commented on HDFS-233: -- I will return to the office on January 17th. For urgent matters, please contact Aparna or Philip L. The week between Christmas and New Years', your best bet is Chris L. Thanks, -- Philip > Support for snapshots > - > > Key: HDFS-233 > URL: https://issues.apache.org/jira/browse/HDFS-233 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: dhruba borthakur >Assignee: dhruba borthakur > Attachments: Snapshots.pdf, Snapshots.pdf > > > Support HDFS snapshots. It should support creating snapshots without shutting > down the file system. Snapshot creation should be lightweight and a typical > system should be able to support a few thousands concurrent snapshots. There > should be a way to surface (i.e. mount) a few of these snapshots > simultaneously. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-233) Support for snapshots
[ https://issues.apache.org/jira/browse/HDFS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun C Murthy updated HDFS-233: --- Comment: was deleted (was: I will return to the office on January 17th. For urgent matters, please contact Aparna or Philip L. The week between Christmas and New Years', your best bet is Chris L. Thanks, -- Philip ) > Support for snapshots > - > > Key: HDFS-233 > URL: https://issues.apache.org/jira/browse/HDFS-233 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: dhruba borthakur >Assignee: dhruba borthakur > Attachments: Snapshots.pdf, Snapshots.pdf > > > Support HDFS snapshots. It should support creating snapshots without shutting > down the file system. Snapshot creation should be lightweight and a typical > system should be able to support a few thousands concurrent snapshots. There > should be a way to surface (i.e. mount) a few of these snapshots > simultaneously. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira