[jira] [Created] (HDFS-2530) Add testcases for -n option of FSshell cat
Add testcases for -n option of FSshell cat -- Key: HDFS-2530 URL: https://issues.apache.org/jira/browse/HDFS-2530 Project: Hadoop HDFS Issue Type: Test Components: test Affects Versions: 0.24.0 Reporter: XieXianshan Priority: Trivial Fix For: 0.24.0 Add test cases for HADOOP-7795. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-2530) Add testcases for -n option of FSshell cat
[ https://issues.apache.org/jira/browse/HDFS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] XieXianshan updated HDFS-2530: -- Attachment: HDFS-2530.patch Add testcases for -n option of FSshell cat -- Key: HDFS-2530 URL: https://issues.apache.org/jira/browse/HDFS-2530 Project: Hadoop HDFS Issue Type: Test Components: test Affects Versions: 0.24.0 Reporter: XieXianshan Priority: Trivial Fix For: 0.24.0 Attachments: HDFS-2530.patch Add test cases for HADOOP-7795.
[jira] [Commented] (HDFS-2308) NamenodeProtocol.endCheckpoint only needs to take the read lock
[ https://issues.apache.org/jira/browse/HDFS-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142099#comment-13142099 ] Hudson commented on HDFS-2308: -- Integrated in Hadoop-Hdfs-0.23-Build #64 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/64/]) HDFS-2308. svn merge -c 1196113 from trunk eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196115 Files : * /hadoop/common/branches/branch-0.23 * /hadoop/common/branches/branch-0.23/hadoop-common-project * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs * 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++ * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java * 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job NamenodeProtocol.endCheckpoint only needs to take the read lock --- Key: HDFS-2308 URL: https://issues.apache.org/jira/browse/HDFS-2308 Project: Hadoop HDFS Issue Type: Improvement Components: name-node Affects Versions: 0.24.0 Reporter: Aaron T. Myers Assignee: Eli Collins
[jira] [Commented] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142098#comment-13142098 ] Hudson commented on HDFS-2416: -- Integrated in Hadoop-Hdfs-0.23-Build #64 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/64/]) Revert the previous commit 1196436 for HDFS-2416. svn merge -c 1196434 from trunk for HDFS-2416. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196439 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196436 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java distcp with a webhdfs uri on a secure cluster fails --- Key: HDFS-2416 URL: https://issues.apache.org/jira/browse/HDFS-2416 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 0.20.205.0 Reporter: Arpit Gupta Assignee: Jitendra Nath Pandey Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0 Attachments: HDFS-2416-branch-0.20-security.6.patch, HDFS-2416-branch-0.20-security.7.patch, HDFS-2416-branch-0.20-security.8.patch, HDFS-2416-branch-0.20-security.patch, HDFS-2416-trunk.patch, HDFS-2416-trunk.patch, HDFS-2419-branch-0.20-security.patch, HDFS-2419-branch-0.20-security.patch
[jira] [Commented] (HDFS-2002) Incorrect computation of needed blocks in getTurnOffTip()
[ https://issues.apache.org/jira/browse/HDFS-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142102#comment-13142102 ] Hudson commented on HDFS-2002: -- Integrated in Hadoop-Hdfs-0.23-Build #64 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/64/]) HDFS-2002. Incorrect computation of needed blocks in getTurnOffTip(). Contributed by Plamen Jeliazkov. shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196229 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSafeMode.java Incorrect computation of needed blocks in getTurnOffTip() - Key: HDFS-2002 URL: https://issues.apache.org/jira/browse/HDFS-2002 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.22.0 Reporter: Konstantin Shvachko Assignee: Plamen Jeliazkov Labels: newbie Fix For: 0.22.0, 0.23.0, 0.24.0 Attachments: HADOOP-2002_TRUNK.patch, hdfs-2002.patch, testsafemode.patch, testsafemode.patch {{SafeModeInfo.getTurnOffTip()}} under-reports the number of blocks needed to reach the safemode threshold.
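The safemode-tip computation in HDFS-2002 can be sketched as follows. This is an illustrative model, not the actual FSNamesystem code: the assumed rule is that the number of blocks still needed is the ceiling of (threshold fraction times total blocks) minus the blocks already reported safe, and truncating instead of rounding up is the kind of arithmetic slip that under-reports the count.

```java
// Illustrative sketch of the needed-blocks computation behind
// SafeModeInfo.getTurnOffTip(); names and formula are assumptions,
// not the real Hadoop implementation.
public class SafeModeTipSketch {
    // Blocks still needed before safemode can turn off:
    // ceil(threshold * total) - safe, clamped at zero.
    static long neededBlocks(double threshold, long total, long safe) {
        long required = (long) Math.ceil(threshold * total);
        return Math.max(required - safe, 0);
    }

    public static void main(String[] args) {
        // With a 0.999 threshold and 1000 blocks, 999 must be safe;
        // 990 safe blocks leaves 9 still needed.
        System.out.println(neededBlocks(0.999, 1000, 990)); // prints 9
        System.out.println(neededBlocks(0.999, 1000, 999)); // prints 0
    }
}
```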
[jira] [Commented] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142112#comment-13142112 ] Hudson commented on HDFS-2416: -- Integrated in Hadoop-Hdfs-trunk #851 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/851/]) HDFS-2416. distcp with a webhdfs uri on a secure cluster fails. HADOOP-7792. Common component for HDFS-2416: Add verifyToken method to AbstractDelegationTokenSecretManager. jitendra : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196434 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java 
jitendra : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196386 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestDelegationToken.java distcp with a webhdfs uri on a secure cluster fails --- Key: HDFS-2416 URL: https://issues.apache.org/jira/browse/HDFS-2416 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 0.20.205.0 Reporter: Arpit Gupta Assignee: Jitendra Nath Pandey Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0 Attachments: HDFS-2416-branch-0.20-security.6.patch, HDFS-2416-branch-0.20-security.7.patch, HDFS-2416-branch-0.20-security.8.patch, HDFS-2416-branch-0.20-security.patch, HDFS-2416-trunk.patch, HDFS-2416-trunk.patch, HDFS-2419-branch-0.20-security.patch, HDFS-2419-branch-0.20-security.patch
[jira] [Commented] (HDFS-2526) (Client)NamenodeProtocolTranslatorR23 do not need to keep a reference to rpcProxyWithoutRetry
[ https://issues.apache.org/jira/browse/HDFS-2526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142117#comment-13142117 ] Hudson commented on HDFS-2526: -- Integrated in Hadoop-Hdfs-trunk #851 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/851/]) HDFS-2526. (Client)NamenodeProtocolTranslatorR23 do not need to keep a reference to rpcProxyWithoutRetry (atm) atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196171 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientNamenodeProtocolTranslatorR23.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/NamenodeProtocolTranslatorR23.java (Client)NamenodeProtocolTranslatorR23 do not need to keep a reference to rpcProxyWithoutRetry - Key: HDFS-2526 URL: https://issues.apache.org/jira/browse/HDFS-2526 Project: Hadoop HDFS Issue Type: Bug Components: hdfs client, name-node Affects Versions: 0.24.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Fix For: 0.24.0 Attachments: HDFS-2526.patch HADOOP-7607 made it so that when wrapping an RPC proxy with another proxy, one need not hold on to the underlying proxy object to release resources on close. Both {{ClientNamenodeProtocolTranslatorR23}} and {{NamenodeProtocolTranslatorR23}} do not take advantage of this fact. They both also unnecessarily declare {{getProxyWithoutRetry}} methods which are unused.
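The pattern HDFS-2526 relies on can be sketched with a JDK dynamic proxy. This is a minimal illustration of the idea from HADOOP-7607 (a wrapper proxy forwards close() to the object it wraps, so callers need not keep a second reference to the unwrapped proxy), using invented names; it is not the actual Hadoop RPC code.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Sketch: a retry-style wrapper built with java.lang.reflect.Proxy
// forwards every call, including close(), to the underlying object.
// Interface and field names here are illustrative assumptions.
public class ProxyCloseSketch {
    public interface ClientProtocol {
        String ping();
        void close();
    }

    static boolean[] closed = {false};

    public static void main(String[] args) {
        ClientProtocol underlying = new ClientProtocol() {
            public String ping() { return "pong"; }
            public void close() { closed[0] = true; } // resource release
        };
        // The wrapper's handler delegates everything to `underlying`.
        InvocationHandler handler = (proxy, method, methodArgs) ->
            method.invoke(underlying, methodArgs);
        ClientProtocol wrapped = (ClientProtocol) Proxy.newProxyInstance(
            ClientProtocol.class.getClassLoader(),
            new Class<?>[] { ClientProtocol.class }, handler);

        // Closing via the wrapper reaches the underlying proxy, so no
        // separate rpcProxyWithoutRetry-style reference is needed.
        wrapped.close();
        System.out.println(closed[0]); // prints true
    }
}
```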
[jira] [Commented] (HDFS-2002) Incorrect computation of needed blocks in getTurnOffTip()
[ https://issues.apache.org/jira/browse/HDFS-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142132#comment-13142132 ] Hudson commented on HDFS-2002: -- Integrated in Hadoop-Mapreduce-0.23-Build #78 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/78/]) HDFS-2002. Incorrect computation of needed blocks in getTurnOffTip(). Contributed by Plamen Jeliazkov. shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196229 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSafeMode.java Incorrect computation of needed blocks in getTurnOffTip() - Key: HDFS-2002 URL: https://issues.apache.org/jira/browse/HDFS-2002 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.22.0 Reporter: Konstantin Shvachko Assignee: Plamen Jeliazkov Labels: newbie Fix For: 0.22.0, 0.23.0, 0.24.0 Attachments: HADOOP-2002_TRUNK.patch, hdfs-2002.patch, testsafemode.patch, testsafemode.patch {{SafeModeInfo.getTurnOffTip()}} under-reports the number of blocks needed to reach the safemode threshold.
[jira] [Commented] (HDFS-2308) NamenodeProtocol.endCheckpoint only needs to take the read lock
[ https://issues.apache.org/jira/browse/HDFS-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142129#comment-13142129 ] Hudson commented on HDFS-2308: -- Integrated in Hadoop-Mapreduce-0.23-Build #78 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/78/]) HDFS-2308. svn merge -c 1196113 from trunk eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196115 Files : * /hadoop/common/branches/branch-0.23 * /hadoop/common/branches/branch-0.23/hadoop-common-project * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++ * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java * 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java * /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job NamenodeProtocol.endCheckpoint only needs to take the read lock --- Key: HDFS-2308 URL: https://issues.apache.org/jira/browse/HDFS-2308 Project: Hadoop HDFS Issue Type: Improvement Components: name-node Affects Versions: 0.24.0 Reporter: Aaron T. Myers Assignee: Eli
[jira] [Commented] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142128#comment-13142128 ] Hudson commented on HDFS-2416: -- Integrated in Hadoop-Mapreduce-0.23-Build #78 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/78/]) Revert the previous commit 1196436 for HDFS-2416. svn merge -c 1196434 from trunk for HDFS-2416. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196439 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196436 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java distcp with a webhdfs uri on a secure cluster fails --- Key: HDFS-2416 URL: https://issues.apache.org/jira/browse/HDFS-2416 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 0.20.205.0 Reporter: Arpit Gupta Assignee: Jitendra Nath Pandey Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0 Attachments: HDFS-2416-branch-0.20-security.6.patch, HDFS-2416-branch-0.20-security.7.patch, HDFS-2416-branch-0.20-security.8.patch, HDFS-2416-branch-0.20-security.patch, HDFS-2416-trunk.patch, HDFS-2416-trunk.patch, HDFS-2419-branch-0.20-security.patch, HDFS-2419-branch-0.20-security.patch
[jira] [Updated] (HDFS-2531) TestDFSClientExcludedNodes is failing in trunk.
[ https://issues.apache.org/jira/browse/HDFS-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-2531: -- Description: FAILED: org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes Error Message: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. Stack Trace: java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:298) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:332) was: FAILED: org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes Error Message: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. Stack Trace: java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. 
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:298) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:332) at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:458) at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:450) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:642) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:546) at org.apache.hadoop.hdfs.MiniDFSCluster.init(MiniDFSCluster.java:262) at org.apache.hadoop.hdfs.MiniDFSCluster.init(MiniDFSCluster.java:86) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:248) at org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.__CLR3_0_2l00fd01905(TestDFSClientExcludedNodes.java:40) at org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes(TestDFSClientExcludedNodes.java:38) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) TestDFSClientExcludedNodes is failing in trunk. 
Key: HDFS-2531 URL: https://issues.apache.org/jira/browse/HDFS-2531 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G FAILED: org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes Error Message: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. Stack Trace: java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314)
[jira] [Commented] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142141#comment-13142141 ] Hudson commented on HDFS-2416: -- Integrated in Hadoop-Mapreduce-trunk #885 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/885/]) HDFS-2416. distcp with a webhdfs uri on a secure cluster fails. HADOOP-7792. Common component for HDFS-2416: Add verifyToken method to AbstractDelegationTokenSecretManager. jitendra : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196434 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java jitendra : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196386 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestDelegationToken.java distcp with a webhdfs uri on a secure cluster fails --- Key: HDFS-2416 URL: https://issues.apache.org/jira/browse/HDFS-2416 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 0.20.205.0 Reporter: Arpit Gupta Assignee: Jitendra Nath Pandey Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0 Attachments: HDFS-2416-branch-0.20-security.6.patch, HDFS-2416-branch-0.20-security.7.patch, HDFS-2416-branch-0.20-security.8.patch, HDFS-2416-branch-0.20-security.patch, HDFS-2416-trunk.patch, HDFS-2416-trunk.patch, HDFS-2419-branch-0.20-security.patch, HDFS-2419-branch-0.20-security.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2526) (Client)NamenodeProtocolTranslatorR23 do not need to keep a reference to rpcProxyWithoutRetry
[ https://issues.apache.org/jira/browse/HDFS-2526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142146#comment-13142146 ] Hudson commented on HDFS-2526: -- Integrated in Hadoop-Mapreduce-trunk #885 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/885/]) HDFS-2526. (Client)NamenodeProtocolTranslatorR23 do not need to keep a reference to rpcProxyWithoutRetry (atm) atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196171 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/ClientNamenodeProtocolTranslatorR23.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolR23Compatible/NamenodeProtocolTranslatorR23.java (Client)NamenodeProtocolTranslatorR23 do not need to keep a reference to rpcProxyWithoutRetry - Key: HDFS-2526 URL: https://issues.apache.org/jira/browse/HDFS-2526 Project: Hadoop HDFS Issue Type: Bug Components: hdfs client, name-node Affects Versions: 0.24.0 Reporter: Aaron T. Myers Assignee: Aaron T. Myers Fix For: 0.24.0 Attachments: HDFS-2526.patch HADOOP-7607 made it so that when wrapping an RPC proxy with another proxy, one need not hold on to the underlying proxy object to release resources on close. Both {{ClientNamenodeProtocolTranslatorR23}} and {{NamenodeProtocolTranslatorR23}} do not take advantage of this fact. They both also unnecessarily declare {{getProxyWithoutRetry}} methods which are unused. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
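For context, the pattern HADOOP-7607 introduced (and which this change relies on) is that the wrapping proxy forwards every call, including close(), to the underlying object, so callers never need to retain a second reference to the inner proxy. A self-contained sketch with java.lang.reflect.Proxy; the Protocol and Impl types below are illustrative stand-ins, not Hadoop's real RPC classes:

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Stand-in sketch of the HADOOP-7607 pattern that HDFS-2526 relies on: the
// wrapping proxy forwards every call, including close(), to the underlying
// object, so callers never need a second reference to the inner proxy.
public class ProxyCloseDemo {

    public interface Protocol extends Closeable {
        String ping();
    }

    public static class Impl implements Protocol {
        public boolean closed = false;
        @Override public String ping() { return "pong"; }
        @Override public void close() { closed = true; }
    }

    // A (trivial) wrapper, standing in for the retry proxy: it forwards all
    // methods, so releasing resources only requires closing the wrapper.
    public static Protocol wrap(Protocol underlying) {
        InvocationHandler handler = (proxy, method, args) -> method.invoke(underlying, args);
        return (Protocol) Proxy.newProxyInstance(
                Protocol.class.getClassLoader(), new Class<?>[] { Protocol.class }, handler);
    }

    public static void main(String[] args) throws Exception {
        Impl impl = new Impl();
        Protocol wrapped = wrap(impl);
        System.out.println(wrapped.ping());  // "pong", forwarded to impl
        wrapped.close();                     // no direct reference to impl needed
        System.out.println(impl.closed);     // true
    }
}
```

With forwarding in place, holding only the outer proxy is enough, which is why the translators no longer need the rpcProxyWithoutRetry field.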
[jira] [Commented] (HDFS-2531) TestDFSClientExcludedNodes is failing in trunk.
[ https://issues.apache.org/jira/browse/HDFS-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142171#comment-13142171 ] Uma Maheswara Rao G commented on HDFS-2531: --- TestFileCreationNamenodeRestart is also failing with the same error in trunk. java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name2. The directory is already locked. at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:298) TestDFSClientExcludedNodes is failing in trunk. Key: HDFS-2531 URL: https://issues.apache.org/jira/browse/HDFS-2531 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G FAILED: org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes Error Message: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. Stack Trace: java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. 
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:298) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:332) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2531) TestDFSClientExcludedNodes is failing in trunk.
[ https://issues.apache.org/jira/browse/HDFS-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142175#comment-13142175 ] Uma Maheswara Rao G commented on HDFS-2531: --- This problem occurs when some tests still hold the locks on the storage directories, which is why the failures look random. TestDFSClientExcludedNodes is not shutting down the cluster, so the next test will definitely fail. To find the cause of the TestDFSClientExcludedNodes failure itself, we need to check which other tests are not shutting down their clusters. TestDFSClientExcludedNodes is failing in trunk. Key: HDFS-2531 URL: https://issues.apache.org/jira/browse/HDFS-2531 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G FAILED: org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes Error Message: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. Stack Trace: java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. 
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:298) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:332) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
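The failure mode described above can be reproduced outside Hadoop: Storage$StorageDirectory.lock() takes a java.nio FileLock on the storage directory's lock file, so a test that never calls cluster.shutdown() leaves the lock held and the next MiniDFSCluster in the same JVM cannot acquire it. A minimal stdlib sketch of that behaviour (the StorageLockDemo class and file names are illustrative, not Hadoop code):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

// Illustrates "Cannot lock storage ... already locked": a FileLock left held
// by one test blocks the next lock attempt in the same JVM.
public class StorageLockDemo {

    // Returns true if a second lock attempt on the same file succeeds.
    // releaseFirst=true models the earlier test calling cluster.shutdown().
    public static boolean secondLockSucceeds(File lockFile, boolean releaseFirst) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(lockFile, "rws");
             FileChannel channel = raf.getChannel()) {
            FileLock first = channel.tryLock();         // the first test's lock
            if (releaseFirst) {
                first.release();                        // analogous to cluster.shutdown()
            }
            try (RandomAccessFile raf2 = new RandomAccessFile(lockFile, "rws");
                 FileChannel channel2 = raf2.getChannel()) {
                FileLock second = channel2.tryLock();   // the next test's attempt
                if (second == null) {
                    return false;
                }
                second.release();
                return true;
            } catch (OverlappingFileLockException e) {
                return false;                           // same-JVM re-lock fails
            } finally {
                if (!releaseFirst && first.isValid()) {
                    first.release();
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("in_use", ".lock");
        f.deleteOnExit();
        System.out.println("without shutdown: " + secondLockSucceeds(f, false)); // false
        System.out.println("after shutdown:   " + secondLockSucceeds(f, true));  // true
    }
}
```

The fix pattern for the tests is accordingly to shut the cluster down in a finally block or teardown method, so the lock is always released before the next test starts.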
[jira] [Commented] (HDFS-2531) TestDFSClientExcludedNodes is failing in trunk.
[ https://issues.apache.org/jira/browse/HDFS-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142178#comment-13142178 ] Uma Maheswara Rao G commented on HDFS-2531: --- After analysing the other tests, I found that TestBlocksScheduledCounter is not shutting down the cluster. So TestDFSClientExcludedNodes is likely the next test to run after it in Hudson. TestDFSClientExcludedNodes is failing in trunk. Key: HDFS-2531 URL: https://issues.apache.org/jira/browse/HDFS-2531 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G FAILED: org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes Error Message: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. Stack Trace: java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:298) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:332) -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-1580) Add interface for generic Write Ahead Logging mechanisms
[ https://issues.apache.org/jira/browse/HDFS-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Kelly updated HDFS-1580: - Attachment: HDFS-1580.diff Updated patch to trunk Add interface for generic Write Ahead Logging mechanisms Key: HDFS-1580 URL: https://issues.apache.org/jira/browse/HDFS-1580 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ivan Kelly Assignee: Jitendra Nath Pandey Fix For: HA branch (HDFS-1623), 0.24.0 Attachments: EditlogInterface.1.pdf, EditlogInterface.2.pdf, EditlogInterface.3.pdf, HDFS-1580+1521.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-1580) Add interface for generic Write Ahead Logging mechanisms
[ https://issues.apache.org/jira/browse/HDFS-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142188#comment-13142188 ] jirapos...@reviews.apache.org commented on HDFS-1580: - --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/2672/ --- Review request for hadoop-hdfs. Summary --- This is the final piece to allow the loading of custom implementations of JournalManager. There is another change, HDFS-2334, which adds Closeable to JournalManager, but that may not be absolutely necessary for all journal types (it is for BookKeeper). There are two changes: 1) I've changed the interfaces (JournalManager, EditLogInputStream and EditLogOutputStream) so that they can be implemented outside of the org.apache.hadoop.hdfs.server.namenode package. 2) Pluggable creation of journal managers. When FSEditLog is creating JournalManagers from dfs.namenode.edits.dir and encounters a URI with a scheme different from file, it loads the name of the implementing class from dfs.namenode.edits.journal-plugin.schema. This class must implement JournalManager and have a constructor which takes (Configuration, URI). This addresses bug HDFS-1580. 
http://issues.apache.org/jira/browse/HDFS-1580 Diffs - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java dd39676 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogBackupInputStream.java 974697d hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogBackupOutputStream.java 067990d hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java 9db7f8a hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileOutputStream.java 4780d04 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogInputStream.java c6f8505 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogOutputStream.java 8681837 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java f80f863 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java 991fd08 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java 3adb439 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalManager.java 348e3ef hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalSet.java 45b5714 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java a7fa7fb hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeResourceChecker.java 4d7cfd8 hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGenericJournalConf.java PRE-CREATION Diff: https://reviews.apache.org/r/2672/diff Testing --- Thanks, Ivan Add interface for generic Write Ahead Logging mechanisms Key: HDFS-1580 URL: https://issues.apache.org/jira/browse/HDFS-1580 Project: Hadoop 
HDFS Issue Type: Improvement Reporter: Ivan Kelly Assignee: Jitendra Nath Pandey Fix For: HA branch (HDFS-1623), 0.24.0 Attachments: EditlogInterface.1.pdf, EditlogInterface.2.pdf, EditlogInterface.3.pdf, HDFS-1580+1521.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
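The plugin-loading scheme described in the review request above can be sketched in a self-contained way: for an edits URI whose scheme is not "file", look up the implementing class under the journal-plugin configuration key and instantiate it reflectively via a (Configuration, URI) constructor. The Configuration and JournalManager types below are simplified stand-ins, not the real Hadoop classes:

```java
import java.lang.reflect.Constructor;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Sketch of pluggable JournalManager creation: resolve the implementing class
// from "dfs.namenode.edits.journal-plugin.<scheme>" and construct it via the
// required (Configuration, URI) constructor.
public class JournalPluginDemo {

    // Stand-in for org.apache.hadoop.conf.Configuration.
    public static class Configuration {
        private final Map<String, String> props = new HashMap<>();
        public void set(String k, String v) { props.put(k, v); }
        public String get(String k) { return props.get(k); }
    }

    // Stand-in for the JournalManager interface.
    public interface JournalManager {
        String describe();
    }

    // Example plugin with the required (Configuration, URI) constructor.
    public static class BookKeeperJournalManager implements JournalManager {
        private final URI uri;
        public BookKeeperJournalManager(Configuration conf, URI uri) { this.uri = uri; }
        @Override public String describe() { return "journal at " + uri; }
    }

    public static JournalManager createJournalManager(Configuration conf, URI uri) throws Exception {
        if ("file".equals(uri.getScheme())) {
            throw new IllegalArgumentException("file:// is handled by the built-in manager");
        }
        String key = "dfs.namenode.edits.journal-plugin." + uri.getScheme();
        String className = conf.get(key);
        if (className == null) {
            throw new IllegalArgumentException("No journal plugin configured under " + key);
        }
        Class<?> clazz = Class.forName(className);
        Constructor<?> ctor = clazz.getConstructor(Configuration.class, URI.class);
        return (JournalManager) ctor.newInstance(conf, uri);
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.namenode.edits.journal-plugin.bookkeeper",
                 JournalPluginDemo.class.getName() + "$BookKeeperJournalManager");
        JournalManager jm = createJournalManager(conf, new URI("bookkeeper://zk1/ledgers"));
        System.out.println(jm.describe());  // journal at bookkeeper://zk1/ledgers
    }
}
```

This reflective-construction approach is what lets implementations such as a BookKeeper journal live outside the namenode package while still being wired in purely through configuration.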
[jira] [Commented] (HDFS-1580) Add interface for generic Write Ahead Logging mechanisms
[ https://issues.apache.org/jira/browse/HDFS-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142230#comment-13142230 ] Hadoop QA commented on HDFS-1580: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12501965/HDFS-1580.diff against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 6 new or modified tests. -1 javadoc. The javadoc tool appears to have generated 1 warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestFileAppend2 org.apache.hadoop.hdfs.server.datanode.TestMulitipleNNDataBlockScanner org.apache.hadoop.hdfs.TestBalancerBandwidth +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1519//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1519//console This message is automatically generated. Add interface for generic Write Ahead Logging mechanisms Key: HDFS-1580 URL: https://issues.apache.org/jira/browse/HDFS-1580 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ivan Kelly Assignee: Jitendra Nath Pandey Fix For: HA branch (HDFS-1623), 0.24.0 Attachments: EditlogInterface.1.pdf, EditlogInterface.2.pdf, EditlogInterface.3.pdf, HDFS-1580+1521.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.txt -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly
[ https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142235#comment-13142235 ] Ravi Prakash commented on HDFS-1172: Hi Todd! Are you going to be able to finish this patch? Is there anything more to be done than to change the == to .equals() and maybe my other nitpicks? Blocks in newly completed files are considered under-replicated too quickly --- Key: HDFS-1172 URL: https://issues.apache.org/jira/browse/HDFS-1172 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.21.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0 Attachments: HDFS-1172.patch, hdfs-1172.txt, replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch I've seen this for a long time, and imagine it's a known issue, but couldn't find an existing JIRA. It often happens that we see the NN schedule replication on the last block of files very quickly after they're completed, before the other DNs in the pipeline have a chance to report the new block. This results in a lot of extra replication work on the cluster, as we replicate the block and then end up with multiple excess replicas which are very quickly deleted. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
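The "change the == to .equals()" nitpick in the comment above matters because == on objects compares references while .equals() compares values, so two distinct objects describing the same block compare unequal under ==. A tiny stand-in example (Block here is illustrative, not the real org.apache.hadoop.hdfs.protocol.Block):

```java
// == compares object identity; .equals() compares block identity.
public class EqualsDemo {

    public static class Block {
        public final long blockId;
        public Block(long blockId) { this.blockId = blockId; }
        @Override public boolean equals(Object o) {
            return o instanceof Block && ((Block) o).blockId == blockId;
        }
        @Override public int hashCode() { return Long.hashCode(blockId); }
    }

    public static void main(String[] args) {
        Block a = new Block(42);
        Block b = new Block(42);          // a second object for the same block
        System.out.println(a == b);       // false: different objects
        System.out.println(a.equals(b));  // true: same block id
    }
}
```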
[jira] [Updated] (HDFS-2477) Optimize computing the diff between a block report and the namenode state.
[ https://issues.apache.org/jira/browse/HDFS-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hairong Kuang updated HDFS-2477: Resolution: Fixed Fix Version/s: 0.24.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I just committed this. Thanks Tomasz! Optimize computing the diff between a block report and the namenode state. -- Key: HDFS-2477 URL: https://issues.apache.org/jira/browse/HDFS-2477 Project: Hadoop HDFS Issue Type: Sub-task Components: name-node Reporter: Tomasz Nykiel Assignee: Tomasz Nykiel Fix For: 0.24.0 Attachments: reportDiff.patch, reportDiff.patch-2, reportDiff.patch-3, reportDiff.patch-4, reportDiff.patch-5 When a block report is processed at the NN, the BlockManager.reportDiff traverses all blocks contained in the report, and for each one block, which is also present in the corresponding datanode descriptor, the block is moved to the head of the list of the blocks in this datanode descriptor. With HDFS-395 the huge majority of the blocks in the report, are also present in the datanode descriptor, which means that almost every block in the report will have to be moved to the head of the list. Currently this operation is performed by DatanodeDescriptor.moveBlockToHead, which removes a block from a list and then inserts it. In this process, we call findDatanode several times (afair 6 times for each moveBlockToHead call). findDatanode is relatively expensive, since it linearly goes through the triplets to locate the given datanode. With this patch, we do some memoization of findDatanode, so we can reclaim 2 findDatanode calls. Our experiments show that this can improve the reportDiff (which is executed under write lock) by around 15%. Currently with HDFS-395, reportDiff is responsible for almost 100% of the block report processing time. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
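The moveBlockToHead/findDatanode interaction described in the issue can be sketched with stand-in types: the unoptimized path re-runs findDatanode, a linear scan over the block's triplets, for each sub-step of the move, while the patch memoizes the index from the first scan. The class shapes and the step count below are illustrative, not the real BlockInfo/DatanodeDescriptor code (which makes roughly six findDatanode calls per move and reclaims two):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the reportDiff optimization: memoize the result of a
// linear findDatanode scan instead of repeating it for every sub-step.
public class ReportDiffDemo {

    public static int probes = 0;  // counts linear-scan steps across calls

    public static class BlockInfo {
        public final List<String> datanodes = new ArrayList<>();
        public int findDatanode(String dn) {
            for (int i = 0; i < datanodes.size(); i++) {
                probes++;
                if (datanodes.get(i).equals(dn)) return i;
            }
            return -1;
        }
    }

    // Unoptimized: every sub-step (unlink, relink, insert at head) re-scans.
    public static void moveBlockToHeadNaive(BlockInfo b, String dn) {
        for (int step = 0; step < 3; step++) {
            b.findDatanode(dn);
        }
    }

    // Optimized: scan once and reuse the memoized index for the later steps.
    public static void moveBlockToHeadMemoized(BlockInfo b, String dn) {
        int idx = b.findDatanode(dn);
        // ... the unlink/relink/insert steps would use idx directly ...
        if (idx < 0) throw new IllegalStateException("datanode not found");
    }

    public static void main(String[] args) {
        BlockInfo b = new BlockInfo();
        for (int i = 0; i < 100; i++) b.datanodes.add("dn" + i);
        probes = 0; moveBlockToHeadNaive(b, "dn99");
        int naive = probes;
        probes = 0; moveBlockToHeadMemoized(b, "dn99");
        System.out.println("naive=" + naive + " memoized=" + probes);  // naive=300 memoized=100
    }
}
```

Since reportDiff runs under the namesystem write lock and nearly every reported block takes this path, shaving repeated scans off each move is what yields the ~15% improvement quoted in the description.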
[jira] [Commented] (HDFS-2477) Optimize computing the diff between a block report and the namenode state.
[ https://issues.apache.org/jira/browse/HDFS-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142263#comment-13142263 ] Hudson commented on HDFS-2477: -- Integrated in Hadoop-Hdfs-trunk-Commit #1310 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1310/]) HDFS-2477. Optimize computing the diff between a block report and the namenode state. Contributed by Tomasz Nykiel. hairong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196676 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java Optimize computing the diff between a block report and the namenode state. -- Key: HDFS-2477 URL: https://issues.apache.org/jira/browse/HDFS-2477 Project: Hadoop HDFS Issue Type: Sub-task Components: name-node Reporter: Tomasz Nykiel Assignee: Tomasz Nykiel Fix For: 0.24.0 Attachments: reportDiff.patch, reportDiff.patch-2, reportDiff.patch-3, reportDiff.patch-4, reportDiff.patch-5 When a block report is processed at the NN, the BlockManager.reportDiff traverses all blocks contained in the report, and for each one block, which is also present in the corresponding datanode descriptor, the block is moved to the head of the list of the blocks in this datanode descriptor. With HDFS-395 the huge majority of the blocks in the report, are also present in the datanode descriptor, which means that almost every block in the report will have to be moved to the head of the list. Currently this operation is performed by DatanodeDescriptor.moveBlockToHead, which removes a block from a list and then inserts it. 
In this process, we call findDatanode several times (afair 6 times for each moveBlockToHead call). findDatanode is relatively expensive, since it linearly goes through the triplets to locate the given datanode. With this patch, we do some memoization of findDatanode, so we can reclaim 2 findDatanode calls. Our experiments show that this can improve the reportDiff (which is executed under write lock) by around 15%. Currently with HDFS-395, reportDiff is responsible for almost 100% of the block report processing time. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2477) Optimize computing the diff between a block report and the namenode state.
[ https://issues.apache.org/jira/browse/HDFS-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142266#comment-13142266 ] Hudson commented on HDFS-2477: -- Integrated in Hadoop-Common-trunk-Commit #1235 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1235/]) HDFS-2477. Optimize computing the diff between a block report and the namenode state. Contributed by Tomasz Nykiel. hairong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196676 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java Optimize computing the diff between a block report and the namenode state. -- Key: HDFS-2477 URL: https://issues.apache.org/jira/browse/HDFS-2477 Project: Hadoop HDFS Issue Type: Sub-task Components: name-node Reporter: Tomasz Nykiel Assignee: Tomasz Nykiel Fix For: 0.24.0 Attachments: reportDiff.patch, reportDiff.patch-2, reportDiff.patch-3, reportDiff.patch-4, reportDiff.patch-5 When a block report is processed at the NN, the BlockManager.reportDiff traverses all blocks contained in the report, and for each one block, which is also present in the corresponding datanode descriptor, the block is moved to the head of the list of the blocks in this datanode descriptor. With HDFS-395 the huge majority of the blocks in the report, are also present in the datanode descriptor, which means that almost every block in the report will have to be moved to the head of the list. Currently this operation is performed by DatanodeDescriptor.moveBlockToHead, which removes a block from a list and then inserts it. 
In this process, we call findDatanode several times (afair 6 times for each moveBlockToHead call). findDatanode is relatively expensive, since it linearly goes through the triplets to locate the given datanode. With this patch, we do some memoization of findDatanode, so we can reclaim 2 findDatanode calls. Our experiments show that this can improve the reportDiff (which is executed under write lock) by around 15%. Currently with HDFS-395, reportDiff is responsible for almost 100% of the block report processing time. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2477) Optimize computing the diff between a block report and the namenode state.
[ https://issues.apache.org/jira/browse/HDFS-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142297#comment-13142297 ] Hudson commented on HDFS-2477: -- Integrated in Hadoop-Mapreduce-trunk-Commit #1257 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1257/]) HDFS-2477. Optimize computing the diff between a block report and the namenode state. Contributed by Tomasz Nykiel. hairong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196676 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java Optimize computing the diff between a block report and the namenode state. -- Key: HDFS-2477 URL: https://issues.apache.org/jira/browse/HDFS-2477 Project: Hadoop HDFS Issue Type: Sub-task Components: name-node Reporter: Tomasz Nykiel Assignee: Tomasz Nykiel Fix For: 0.24.0 Attachments: reportDiff.patch, reportDiff.patch-2, reportDiff.patch-3, reportDiff.patch-4, reportDiff.patch-5 When a block report is processed at the NN, the BlockManager.reportDiff traverses all blocks contained in the report, and for each one block, which is also present in the corresponding datanode descriptor, the block is moved to the head of the list of the blocks in this datanode descriptor. With HDFS-395 the huge majority of the blocks in the report, are also present in the datanode descriptor, which means that almost every block in the report will have to be moved to the head of the list. 
Currently this operation is performed by DatanodeDescriptor.moveBlockToHead, which removes a block from a list and then inserts it. In this process, we call findDatanode several times (afair 6 times for each moveBlockToHead call). findDatanode is relatively expensive, since it linearly goes through the triplets to locate the given datanode. With this patch, we do some memoization of findDatanode, so we can reclaim 2 findDatanode calls. Our experiments show that this can improve the reportDiff (which is executed under write lock) by around 15%. Currently with HDFS-395, reportDiff is responsible for almost 100% of the block report processing time.
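The memoization described above can be illustrated with a toy model. All names here are illustrative, not the actual BlockManager/DatanodeDescriptor code: the idea is simply to resolve the datanode's slot once per report and reuse it, instead of re-running the linear findDatanode search for every moveBlockToHead.

```java
import java.util.ArrayList;
import java.util.LinkedList;

// Hypothetical sketch of the reportDiff optimization: cache the result
// of the linear datanode lookup across repeated head-moves in one report.
public class ReportDiffSketch {
    static final String[] triplets = {"dn1", "dn2", "dn3", "dn4"};
    static int scans = 0; // counts how often the linear search runs

    static int findDatanode(String dn) {
        scans++;
        for (int i = 0; i < triplets.length; i++) {
            if (triplets[i].equals(dn)) return i;
        }
        return -1;
    }

    // Unoptimized shape: every head-move repeats the linear search.
    static void moveBlockToHead(LinkedList<String> blocks, String block, String dn) {
        int slot = findDatanode(dn); // re-scan on every call
        blocks.remove(block);
        blocks.addFirst(block);
    }

    // Memoized shape: the caller resolves the slot once per report.
    static void moveBlockToHead(LinkedList<String> blocks, String block, int slot) {
        blocks.remove(block);
        blocks.addFirst(block);
    }

    public static void main(String[] args) {
        LinkedList<String> blocks = new LinkedList<>();
        for (int i = 0; i < 100; i++) blocks.add("blk_" + i);

        int slot = findDatanode("dn3"); // one scan for the whole report
        for (String b : new ArrayList<>(blocks)) {
            moveBlockToHead(blocks, b, slot); // no further scans
        }
        System.out.println("scans=" + scans); // scans=1, not one per block
    }
}
```

In the real patch the saving is smaller (2 of roughly 6 calls per move), but the pattern is the same: hoist the expensive lookup out of the per-block loop that runs under the namesystem write lock.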
[jira] [Commented] (HDFS-2477) Optimize computing the diff between a block report and the namenode state.
[ https://issues.apache.org/jira/browse/HDFS-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142315#comment-13142315 ] Tomasz Nykiel commented on HDFS-2477: - Thanks!! Optimize computing the diff between a block report and the namenode state. -- Key: HDFS-2477 URL: https://issues.apache.org/jira/browse/HDFS-2477 Project: Hadoop HDFS Issue Type: Sub-task Components: name-node Reporter: Tomasz Nykiel Assignee: Tomasz Nykiel Fix For: 0.24.0 Attachments: reportDiff.patch, reportDiff.patch-2, reportDiff.patch-3, reportDiff.patch-4, reportDiff.patch-5 When a block report is processed at the NN, the BlockManager.reportDiff traverses all blocks contained in the report, and for each one block, which is also present in the corresponding datanode descriptor, the block is moved to the head of the list of the blocks in this datanode descriptor. With HDFS-395 the huge majority of the blocks in the report, are also present in the datanode descriptor, which means that almost every block in the report will have to be moved to the head of the list. Currently this operation is performed by DatanodeDescriptor.moveBlockToHead, which removes a block from a list and then inserts it. In this process, we call findDatanode several times (afair 6 times for each moveBlockToHead call). findDatanode is relatively expensive, since it linearly goes through the triplets to locate the given datanode. With this patch, we do some memoization of findDatanode, so we can reclaim 2 findDatanode calls. Our experiments show that this can improve the reportDiff (which is executed under write lock) by around 15%. Currently with HDFS-395, reportDiff is responsible for almost 100% of the block report processing time. -- This message is automatically generated by JIRA. 
[jira] [Commented] (HDFS-1580) Add interface for generic Write Ahead Logging mechanisms
[ https://issues.apache.org/jira/browse/HDFS-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142317#comment-13142317 ] jirapos...@reviews.apache.org commented on HDFS-1580: - --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/2672/#review3014 --- hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java https://reviews.apache.org/r/2672/#comment6727 we use _PREFIX instead of _BASE elsewhere for key prefixes hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java https://reviews.apache.org/r/2672/#comment6728 why not just use conf.getClass here and return a Class? And throw the exception right here instead of returning null and throwing below hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeResourceChecker.java https://reviews.apache.org/r/2672/#comment6729 this is the wrong layer - better to filter for file:// URLs where this is called, I think. hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGenericJournalConf.java https://reviews.apache.org/r/2672/#comment6730 no need to have any datanodes for any of these tests - will run faster without. hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGenericJournalConf.java https://reviews.apache.org/r/2672/#comment6731 our convention is to use american spelling (initialized) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGenericJournalConf.java https://reviews.apache.org/r/2672/#comment6732 our style is to not have multiple classes per .java file unless they're inner classes. You can make this a static inner class of the test. 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGenericJournalConf.java https://reviews.apache.org/r/2672/#comment6733 just return Mockito.mock(EditLogOutputStream.class) and you don't need to have the whole implementation below - Todd On 2011-11-02 14:33:47, Ivan Kelly wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/2672/ bq. --- bq. bq. (Updated 2011-11-02 14:33:47) bq. bq. bq. Review request for hadoop-hdfs. bq. bq. bq. Summary bq. --- bq. bq. This is the final piece to allow the loading of custom implementations of JournalManager. There is another change HDFS-2334 which adds closeable to JournalManager, but that may not be absolutely necessary for all journal types. (it is for bookkeeper) bq. bq. There's 2 changes: bq. 1) I've changes the interfaces(JournalManager, EditLogInputStream EditLogOutputStream) so that they can be implemented outside of the org.apache.hadoop.hdfs.server.namenode. bq. bq. 2) Pluggable creation of journal managers. bq. When FSEditLog is creating JournalManagers from dfs.namenode.edits.dir, and it encounters a URI with a schema different to file it loads the name of the implementing class from dfs.namenode.edits.journal-plugin.schema. This class must implement JournalManager and have a constructor which takes (Configuration, URI). bq. bq. bq. This addresses bug HDFS-1580. bq. http://issues.apache.org/jira/browse/HDFS-1580 bq. bq. bq. Diffs bq. - bq. bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java dd39676 bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogBackupInputStream.java 974697d bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogBackupOutputStream.java 067990d bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java 9db7f8a bq. 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileOutputStream.java 4780d04 bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogInputStream.java c6f8505 bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogOutputStream.java 8681837 bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java f80f863 bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java 991fd08 bq. hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java 3adb439
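The pluggable-creation scheme described in the review request (look up the class name under dfs.namenode.edits.journal-plugin.&lt;schema&gt;, load it, and invoke a (Configuration, URI) constructor) can be sketched as follows. This is a simplified stand-in, not the actual FSEditLog code: a plain Map replaces Hadoop's Configuration, and the class names are hypothetical.

```java
import java.lang.reflect.Constructor;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Sketch of reflective journal-plugin loading as described in the review.
public class JournalPluginSketch {
    interface JournalManager {}

    // A hypothetical plugin registered for the "bookkeeper" scheme.
    public static class DummyJournalManager implements JournalManager {
        public DummyJournalManager(Map<String, String> conf, URI uri) {}
    }

    static JournalManager createJournal(Map<String, String> conf, URI uri) {
        String key = "dfs.namenode.edits.journal-plugin." + uri.getScheme();
        String className = conf.get(key);
        if (className == null) {
            throw new IllegalArgumentException("No journal plugin configured for " + uri);
        }
        try {
            Class<?> clazz = Class.forName(className);
            // The contract from the review: a (conf, URI) constructor.
            Constructor<?> ctor = clazz.getConstructor(Map.class, URI.class);
            return (JournalManager) ctor.newInstance(conf, uri);
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot instantiate " + className, e);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("dfs.namenode.edits.journal-plugin.bookkeeper",
                 "JournalPluginSketch$DummyJournalManager");
        JournalManager jm = createJournal(conf, URI.create("bookkeeper://ledgers/edits"));
        System.out.println(jm.getClass().getSimpleName()); // DummyJournalManager
    }
}
```

Todd's comment6728 suggests the variant where conf.getClass does the lookup and the exception is thrown at the call site; either way the scheme-to-class mapping stays in configuration.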
[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly
[ https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142319#comment-13142319 ] Todd Lipcon commented on HDFS-1172: --- I went back and looked at my branch where I was working on this patch. The remaining work is to add a test which catches the issue you pointed out with == vs .equals. Since the tests were passing even with that glaring mistake, the coverage definitely wasn't good enough. I started to write one and I think I ran into some more issues, but I can't recall what they were. Since this issue has been around forever, I haven't been able to prioritize it above other 0.23 work. Is this causing big issues on your clusters that would suggest it should be prioritized higher? Blocks in newly completed files are considered under-replicated too quickly --- Key: HDFS-1172 URL: https://issues.apache.org/jira/browse/HDFS-1172 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.21.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0 Attachments: HDFS-1172.patch, hdfs-1172.txt, replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch I've seen this for a long time, and imagine it's a known issue, but couldn't find an existing JIRA. It often happens that we see the NN schedule replication on the last block of files very quickly after they're completed, before the other DNs in the pipeline have a chance to report the new block. This results in a lot of extra replication work on the cluster, as we replicate the block and then end up with multiple excess replicas which are very quickly deleted. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
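The == vs .equals mistake Todd refers to is a classic Java pitfall worth spelling out; here is a minimal generic illustration (not the actual HDFS Block class):

```java
// == compares object identity; logically-equal objects need .equals().
public class EqualsPitfall {
    public static final class Block {
        final long id;
        public Block(long id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof Block && ((Block) o).id == this.id;
        }
        @Override public int hashCode() { return Long.hashCode(id); }
    }

    public static void main(String[] args) {
        Block reported = new Block(42); // block arriving in a block report
        Block stored = new Block(42);   // logically the same block in the NN's map
        System.out.println(reported == stored);      // false: distinct objects
        System.out.println(reported.equals(stored)); // true: same block id
    }
}
```

A comparison written with == silently succeeds only when both references point at the very same object, which is why tests that never exercise the two-object case can pass despite the bug.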
[jira] [Updated] (HDFS-2528) webhdfs rest call to a secure dn fails when a token is sent
[ https://issues.apache.org/jira/browse/HDFS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2528: - Attachment: h2528_2002.patch h2528_2002_0.20s.patch h2528_2002_0.20s.patch h2528_2002.patch: Fixed the unit tests. webhdfs rest call to a secure dn fails when a token is sent --- Key: HDFS-2528 URL: https://issues.apache.org/jira/browse/HDFS-2528 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 0.20.205.0 Reporter: Arpit Gupta Assignee: Tsz Wo (Nicholas), SZE Attachments: h2528_2001.patch, h2528_2001_0.20s.patch, h2528_2001b.patch, h2528_2001b_0.20s.patch, h2528_2002.patch, h2528_2002_0.20s.patch curl -L -u : --negotiate -i http://NN:50070/webhdfs/v1/tmp/webhdfs_data/file_small_data.txt?op=OPEN; the following exception is thrown by the datanode when the redirect happens. {RemoteException:{exception:IOException,javaClassName:java.io.IOException,message:Call to failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]}} Interestingly when using ./bin/hadoop with a webhdfs path we are able to cat or tail a file successfully. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2531) TestDFSClientExcludedNodes is failing in trunk.
[ https://issues.apache.org/jira/browse/HDFS-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142327#comment-13142327 ] Todd Lipcon commented on HDFS-2531: --- I think this is failing because of TestDfsOverAvroRpc timing out, at least in recent builds I've seen. TestDFSClientExcludedNodes is failing in trunk. Key: HDFS-2531 URL: https://issues.apache.org/jira/browse/HDFS-2531 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G FAILED: org.apache.hadoop.hdfs.TestDFSClientExcludedNodes.testExcludedNodes Error Message: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. Stack Trace: java.io.IOException: Cannot lock storage /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked. at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:586) at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:435) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:253) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:169) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:371) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:314) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:298) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:332) -- This message is automatically generated by JIRA. 
[jira] [Resolved] (HDFS-2244) JMX values for RPC Activity is always zero
[ https://issues.apache.org/jira/browse/HDFS-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-2244. Resolution: Won't Fix JMX values for RPC Activity is always zero -- Key: HDFS-2244 URL: https://issues.apache.org/jira/browse/HDFS-2244 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 0.20.204.0 Reporter: Allen Wittenauer Priority: Blocker Attachments: screenshot-1.jpg jconsole is showing that the RPC metrics gathered for the datanode are always zero. Other metrics for the DN appear fine.
[jira] [Resolved] (HDFS-2275) Bouncing the namenode causes JMX deadnode count to drop to 0
[ https://issues.apache.org/jira/browse/HDFS-2275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-2275. Resolution: Won't Fix Bouncing the namenode causes JMX deadnode count to drop to 0 Key: HDFS-2275 URL: https://issues.apache.org/jira/browse/HDFS-2275 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.20.203.0 Reporter: Allen Wittenauer The namenode JMX metrics and the namenode web UI disagree on the number of dead nodes. See comments.
[jira] [Resolved] (HDFS-1049) utility to list all files less than X replication
[ https://issues.apache.org/jira/browse/HDFS-1049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-1049. Resolution: Won't Fix Won't fix. Just parse the fsck output that you have to run nightly to fix errors the namenode won't fix on its own anyway. utility to list all files less than X replication - Key: HDFS-1049 URL: https://issues.apache.org/jira/browse/HDFS-1049 Project: Hadoop HDFS Issue Type: Improvement Components: tools Affects Versions: 0.20.2 Reporter: Allen Wittenauer It would be great to have a utility that lists all files that have a replication less than X. While fsck provides this output and it isn't that tricky to parse, it would still be nice if Hadoop had this functionality out of the box.
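The suggested workaround, scraping fsck output, might look like the sketch below. The line format is an assumption based on typical fsck output ("/path/file:  Under replicated ... Target Replicas is 3 but found 2 replica(s).") and may vary between versions.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: pull file paths out of fsck lines flagging under-replication.
public class FsckScrape {
    static List<String> underReplicated(String fsckOutput) {
        List<String> paths = new ArrayList<>();
        for (String line : fsckOutput.split("\n")) {
            int colon = line.indexOf(':');
            // Assumed shape: path before the first colon, then the status text.
            if (colon > 0 && line.contains("Under replicated")) {
                paths.add(line.substring(0, colon));
            }
        }
        return paths;
    }

    public static void main(String[] args) {
        String sample =
            "/user/foo/a.txt:  Under replicated blk_1. Target Replicas is 3 but found 2 replica(s).\n" +
            "/user/foo/b.txt: OK\n";
        System.out.println(underReplicated(sample)); // [/user/foo/a.txt]
    }
}
```

In practice the input would come from `hadoop fsck / -files -blocks` rather than an inline string; filtering for a specific replication count X would mean also parsing the "found N replica(s)" tail.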
[jira] [Updated] (HDFS-964) hdfs-default.xml shouldn't use hadoop.tmp.dir for dfs.data.dir (0.20 and lower) / dfs.datanode.dir (0.21 and up)
[ https://issues.apache.org/jira/browse/HDFS-964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-964: -- Resolution: Won't Fix Status: Resolved (was: Patch Available) hdfs-default.xml shouldn't use hadoop.tmp.dir for dfs.data.dir (0.20 and lower) / dfs.datanode.dir (0.21 and up) Key: HDFS-964 URL: https://issues.apache.org/jira/browse/HDFS-964 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 0.20.2 Reporter: Allen Wittenauer Assignee: Allen Wittenauer Attachments: HDFS-964.txt This question/problem pops up all the time. Can we *please* eliminate hadoop.tmp.dir's usage from the default in dfs.data.dir. It is confusing to new people and results in all sorts of weird accidents. If we want the same value, fine, but there are a lot of implied things by the variable re-use.
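The accident mode described here is that dfs.data.dir inherits a ${hadoop.tmp.dir}-based default, so data blocks can silently land under a temp directory. A common mitigation (paths illustrative) is to pin the value explicitly in hdfs-site.xml:

```xml
<!-- hdfs-site.xml: set dfs.data.dir explicitly instead of relying on
     the ${hadoop.tmp.dir}-derived default. Paths here are examples. -->
<property>
  <name>dfs.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
</property>
```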
[jira] [Resolved] (HDFS-870) Topology is permanently cached
[ https://issues.apache.org/jira/browse/HDFS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-870. --- Resolution: Won't Fix Sites should implement their own Java class that duplicates the current script method but implements caching. Closing as won't fix. Topology is permanently cached -- Key: HDFS-870 URL: https://issues.apache.org/jira/browse/HDFS-870 Project: Hadoop HDFS Issue Type: Bug Reporter: Allen Wittenauer Replacing the topology script requires a namenode bounce because the NN caches the information permanently. It should really either expire it periodically or expire on -refreshNodes.
[jira] [Resolved] (HDFS-577) Name node doesn't always properly recognize health of data node
[ https://issues.apache.org/jira/browse/HDFS-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-577. --- Resolution: Won't Fix Name node doesn't always properly recognize health of data node --- Key: HDFS-577 URL: https://issues.apache.org/jira/browse/HDFS-577 Project: Hadoop HDFS Issue Type: Bug Reporter: Allen Wittenauer The one-way communication (data node -> name node) for node health does not guarantee that the data node is actually healthy.
[jira] [Resolved] (HDFS-607) HDFS should support SNMP
[ https://issues.apache.org/jira/browse/HDFS-607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-607. --- Resolution: Won't Fix HDFS should support SNMP Key: HDFS-607 URL: https://issues.apache.org/jira/browse/HDFS-607 Project: Hadoop HDFS Issue Type: New Feature Reporter: Allen Wittenauer HDFS should provide key statistics over a standard protocol such as SNMP. This would allow for much easier integration into common software packages that are already established in the industry.
[jira] [Resolved] (HDFS-580) Name node will exit safe mode w/0 blocks even if data nodes are broken
[ https://issues.apache.org/jira/browse/HDFS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-580. --- Resolution: Won't Fix Name node will exit safe mode w/0 blocks even if data nodes are broken -- Key: HDFS-580 URL: https://issues.apache.org/jira/browse/HDFS-580 Project: Hadoop HDFS Issue Type: Bug Reporter: Allen Wittenauer If one brings up a freshly formatted name node against older data nodes with an incompatible storage id (such that the datanodes fail with Directory /mnt/u001/dfs-data is in an inconsistent state: is incompatible with others.), the name node will still come out of safe mode. Writes will partially succeed--entries are created, but all are zero length.
[jira] [Resolved] (HDFS-248) dynamically add/subtract dfs.name.dir directories
[ https://issues.apache.org/jira/browse/HDFS-248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-248. --- Resolution: Won't Fix dynamically add/subtract dfs.name.dir directories - Key: HDFS-248 URL: https://issues.apache.org/jira/browse/HDFS-248 Project: Hadoop HDFS Issue Type: New Feature Reporter: Allen Wittenauer Priority: Minor It would be very beneficial to be able to add and subtract dfs.name.dir entries on the fly. This would be used when one drive fails such that another location could be used to keep image and edits redundancy.
[jira] [Resolved] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir
[ https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-208. --- Resolution: Won't Fix name node should warn if only one dir is listed in dfs.name.dir --- Key: HDFS-208 URL: https://issues.apache.org/jira/browse/HDFS-208 Project: Hadoop HDFS Issue Type: New Feature Components: name-node Reporter: Allen Wittenauer Priority: Minor Labels: newbie Fix For: 0.24.0 The name node should warn that corruption may occur if only one directory is listed in the dfs.name.dir setting.
[jira] [Resolved] (HDFS-226) Post users: need admin-only access to HDFS
[ https://issues.apache.org/jira/browse/HDFS-226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-226. --- Resolution: Won't Fix Post users: need admin-only access to HDFS --- Key: HDFS-226 URL: https://issues.apache.org/jira/browse/HDFS-226 Project: Hadoop HDFS Issue Type: New Feature Environment: All Reporter: Allen Wittenauer When user support gets added to HDFS, administrators are going to need to be able to set the namenode such that it only allows connections/interactions from the administrative user. This is particularly important after upgrades and for other administrative work that may require the changing of user/group ownership, permissions, location of files within the HDFS, etc.
[jira] [Resolved] (HDFS-34) The elephant should remember names, not numbers.
[ https://issues.apache.org/jira/browse/HDFS-34?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HDFS-34. -- Resolution: Won't Fix We're working around this by making sure every hostname and IP address given to clients is movable in some form or another, including IP aliases and BGP route propagation techniques. Closing as won't fix. The elephant should remember names, not numbers. Key: HDFS-34 URL: https://issues.apache.org/jira/browse/HDFS-34 Project: Hadoop HDFS Issue Type: Bug Reporter: Allen Wittenauer The name node and the data node should not cache the resolution of host names, as doing so prevents the use of DNS CNAMEs for any sort of fail over capability.
[jira] [Updated] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2130: -- Attachment: hdfs-2130.txt Updated patch fixes TestDataTransferProtocol to expect CRC32C as the default checksum. Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
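Why the append handshake is needed: CRC32 and CRC32C (the Castagnoli polynomial, hardware-accelerated on SSE4.2-capable CPUs) produce different checksums for the same bytes, so appending CRC32C chunks to a block whose existing chunks carry CRC32 checksums would not verify. A small demonstration using the JDK's own implementations (java.util.zip.CRC32C is Java 9+; this is an illustration, not HDFS code):

```java
import java.util.zip.CRC32;
import java.util.zip.CRC32C;

// Same input, two different polynomials, two different checksums.
public class ChecksumDemo {
    static long crc32(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data);
        return c.getValue();
    }

    static long crc32c(byte[] data) {
        CRC32C c = new CRC32C(); // Castagnoli polynomial, Java 9+
        c.update(data);
        return c.getValue();
    }

    public static void main(String[] args) {
        // "123456789" is the standard check input for CRC algorithms:
        // CRC-32 yields CBF43926, CRC-32C yields E3069283.
        byte[] check = "123456789".getBytes();
        System.out.printf("CRC32  = %08X%n", crc32(check));
        System.out.printf("CRC32C = %08X%n", crc32c(check));
    }
}
```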
[jira] [Assigned] (HDFS-2417) Warnings about attempt to override final parameter while getting delegation token
[ https://issues.apache.org/jira/browse/HDFS-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash reassigned HDFS-2417: -- Assignee: Ravi Prakash Warnings about attempt to override final parameter while getting delegation token - Key: HDFS-2417 URL: https://issues.apache.org/jira/browse/HDFS-2417 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.20.205.0 Reporter: Rajit Saha Assignee: Ravi Prakash Attachments: HDFS-2417.patch I am seeing whenever I run any Mapreduce job and its trying to acquire delegation from NN, In JT log following warnings coming about a attempt to override final parameter: The log snippet in JT log 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.reuse.jvm.num.tasks; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.system.dir; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: hadoop.job.history.user.location; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.local.dir; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: m apred.job.tracker.http.address; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: d fs.data.dir; Ignoring. 
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: d fs.http.address; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: m apreduce.admin.map.child.java.opts; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapreduce.history.server.http.address; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: m apreduce.history.server.embedded; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: m apreduce.jobtracker.split.metainfo.maxsize; Ignoring.2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: m apreduce.admin.reduce.child.java.opts; Ignoring.2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: h adoop.tmp.dir; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.jobtracker.maxtasks.per.job; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: mapred.job.tracker; Ignoring. 
2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: dfs.name.dir; Ignoring. 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override final parameter: m apred.temp.dir; Ignoring.2011-10-07 20:29:19,103 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =NN IP:50470 and jobID = job_201110072015_0005 2011-10-07 20:29:19,103 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =NN IP:8020 and jobID = job_201110072015_0005 The STDOUT of distcp job when these warnings logged into JT log $hadoop distcp hftp://NN:50070/tmp/inp out 11/10/07 20:29:17
[jira] [Reopened] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir
[ https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins reopened HDFS-208: -- Re-opening, seems like an easy, valuable improvement. name node should warn if only one dir is listed in dfs.name.dir --- Key: HDFS-208 URL: https://issues.apache.org/jira/browse/HDFS-208 Project: Hadoop HDFS Issue Type: New Feature Components: name-node Reporter: Allen Wittenauer Priority: Minor Labels: newbie Fix For: 0.24.0 The name node should warn that corruption may occur if only one directory is listed in the dfs.name.dir setting.
[jira] [Assigned] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir
[ https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G reassigned HDFS-208: Assignee: Uma Maheswara Rao G name node should warn if only one dir is listed in dfs.name.dir --- Key: HDFS-208 URL: https://issues.apache.org/jira/browse/HDFS-208 Project: Hadoop HDFS Issue Type: New Feature Components: name-node Reporter: Allen Wittenauer Assignee: Uma Maheswara Rao G Priority: Minor Labels: newbie Fix For: 0.24.0 The name node should warn that corruption may occur if only one directory is listed in the dfs.name.dir setting.
[jira] [Commented] (HDFS-2298) TestDfsOverAvroRpc is failing on trunk
[ https://issues.apache.org/jira/browse/HDFS-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142425#comment-13142425 ] Todd Lipcon commented on HDFS-2298: --- This test is timing out on trunk still: 2011-11-02 12:07:36,995 INFO ipc.Server (Server.java:run(1525)) - IPC Server handler 2 on 56601, call: call(org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, org.apache.hadoop.ipc.AvroRpcEngine$BufferListWritable@7691a4fb), rpc version=2, client version=1, methodsFingerPrint=264883142 from 127.0.0.1:44891, error: java.io.IOException: java.io.IOException: Unknown protocol: org.apache.hadoop.ipc.AvroRpcEngine$TunnelProtocol at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:615) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:416) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152) I'd like to propose disabling this test on trunk as well unless some people plan to make dfs-over-avro a reality. TestDfsOverAvroRpc is failing on trunk -- Key: HDFS-2298 URL: https://issues.apache.org/jira/browse/HDFS-2298 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Aaron T. Myers Assignee: Doug Cutting Fix For: 0.24.0 Attachments: HDFS-2298.patch, HDFS-2298.patch, HDFS-2298.patch, HDFS-2298.patch, HDFS-2298.patch, HDFS-2298.patch, HDFS-2298.patch The relevant bit of the error: {noformat} --- Test set: org.apache.hadoop.hdfs.TestDfsOverAvroRpc --- Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.486 sec FAILURE! testWorkingDirectory(org.apache.hadoop.hdfs.TestDfsOverAvroRpc) Time elapsed: 1.424 sec ERROR! org.apache.avro.AvroTypeException: Two methods with same name: delete {noformat}
[jira] [Created] (HDFS-2532) TestDfsOverAvroRpc timing out in trunk
TestDfsOverAvroRpc timing out in trunk -- Key: HDFS-2532 URL: https://issues.apache.org/jira/browse/HDFS-2532 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Todd Lipcon Priority: Critical java.io.IOException: java.io.IOException: Unknown protocol: org.apache.hadoop.ipc.AvroRpcEngine$TunnelProtocol occurs while starting up the DN, and then it hangs waiting for the MiniCluster to start.
[jira] [Commented] (HDFS-2316) webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
[ https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142431#comment-13142431 ] Tsz Wo (Nicholas), SZE commented on HDFS-2316: -- @Alejandro On #3, it will be confusing to users that part of the URL is case sensitive (the path) and part of it is not (the query string). Given that HDFS is case sensitive, making the path case insensitive is not an option. Thus, I'm suggesting to make it all case sensitive and there will be no confusion there. We cannot make it all case sensitive, e.g. scheme and authority are case insensitive. For example: - [hTTps://issues.apache.org/jira/browse/HDFS-2316] - works - https://iSSues.apache.org/jira/browse/HDFS-2316 - works - https://issues.apache.org/jira/BRowse/HDFS-2316 - does not work webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP -- Key: HDFS-2316 URL: https://issues.apache.org/jira/browse/HDFS-2316 Project: Hadoop HDFS Issue Type: New Feature Reporter: Tsz Wo (Nicholas), SZE Assignee: Tsz Wo (Nicholas), SZE Attachments: WebHdfsAPI20111020.pdf We currently have hftp for accessing HDFS over HTTP. However, hftp is a read-only FileSystem and does not provide write access. In HDFS-2284, we propose to have webhdfs for providing a complete FileSystem implementation for accessing HDFS over HTTP. This is the umbrella JIRA for the tasks.
[jira] [Commented] (HDFS-2316) webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
[ https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142433#comment-13142433 ] Tsz Wo (Nicholas), SZE commented on HDFS-2316: -- - https://issues.apache.org/jira/browse/hdFS-2316 - also works webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP -- Key: HDFS-2316 URL: https://issues.apache.org/jira/browse/HDFS-2316 Project: Hadoop HDFS Issue Type: New Feature Reporter: Tsz Wo (Nicholas), SZE Assignee: Tsz Wo (Nicholas), SZE Attachments: WebHdfsAPI20111020.pdf We currently have hftp for accessing HDFS over HTTP. However, hftp is a read-only FileSystem and does not provide write access. In HDFS-2284, we propose to have webhdfs for providing a complete FileSystem implementation for accessing HDFS over HTTP. This is the umbrella JIRA for the tasks.
[jira] [Updated] (HDFS-2532) TestDfsOverAvroRpc timing out in trunk
[ https://issues.apache.org/jira/browse/HDFS-2532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2532: -- Attachment: hdfs-2532-make-timeout.txt Here's a patch which doesn't fix the issue, but at least adds a timeout. Unless someone has a fix for the issue ready, I plan to commit this so the timeout stops causing tests lower down the build to fail. TestDfsOverAvroRpc timing out in trunk -- Key: HDFS-2532 URL: https://issues.apache.org/jira/browse/HDFS-2532 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Todd Lipcon Priority: Critical Attachments: hdfs-2532-make-timeout.txt java.io.IOException: java.io.IOException: Unknown protocol: org.apache.hadoop.ipc.AvroRpcEngine$TunnelProtocol occurs while starting up the DN, and then it hangs waiting for the MiniCluster to start.
[jira] [Updated] (HDFS-2532) TestDfsOverAvroRpc timing out in trunk
[ https://issues.apache.org/jira/browse/HDFS-2532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2532: -- Status: Patch Available (was: Open) TestDfsOverAvroRpc timing out in trunk -- Key: HDFS-2532 URL: https://issues.apache.org/jira/browse/HDFS-2532 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Todd Lipcon Priority: Critical Attachments: hdfs-2532-make-timeout.txt java.io.IOException: java.io.IOException: Unknown protocol: org.apache.hadoop.ipc.AvroRpcEngine$TunnelProtocol occurs while starting up the DN, and then it hangs waiting for the MiniCluster to start.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142438#comment-13142438 ] Eli Collins commented on HDFS-2130: --- Is the problem made simpler by disallowing appenders from using different checksum algorithms than the one used when creating the file? Saving a round trip is nice (IIUC that's the motivation for allowing mixed checksums), but I'm not sure the ability to switch checksum type and chunk size is a feature we need. Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
[jira] [Commented] (HDFS-2316) webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
[ https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142440#comment-13142440 ] Alejandro Abdelnur commented on HDFS-2316: -- Scheme and authority are case insensitive by definition. This is well known and expected. However, path and query string are not. Regarding your last example, that is JIRA functionality. And it illustrates my point: the fact that 'browse' is case sensitive and 'hdfs' is not will be confusing. webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP -- Key: HDFS-2316 URL: https://issues.apache.org/jira/browse/HDFS-2316 Project: Hadoop HDFS Issue Type: New Feature Reporter: Tsz Wo (Nicholas), SZE Assignee: Tsz Wo (Nicholas), SZE Attachments: WebHdfsAPI20111020.pdf We currently have hftp for accessing HDFS over HTTP. However, hftp is a read-only FileSystem and does not provide write access. In HDFS-2284, we propose to have webhdfs for providing a complete FileSystem implementation for accessing HDFS over HTTP. This is the umbrella JIRA for the tasks.
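Editor's illustration of the case-sensitivity rules discussed above (not part of the original thread): per RFC 3986, scheme and host are case insensitive while path and query are case sensitive, and Java's java.net.URI encodes exactly this in its equals() semantics. A minimal sketch:

```java
import java.net.URI;

public class UriCaseDemo {
    public static void main(String[] args) {
        // Scheme and host differ only in case: the URIs are considered equal.
        URI a = URI.create("hTTps://iSSues.apache.org/jira/browse/HDFS-2316");
        URI b = URI.create("https://issues.apache.org/jira/browse/HDFS-2316");
        System.out.println(a.equals(b));  // true: scheme/host compared case-insensitively

        // The path differs in case: the URIs are not equal.
        URI c = URI.create("https://issues.apache.org/jira/BRowse/HDFS-2316");
        System.out.println(b.equals(c));  // false: paths are case sensitive
    }
}
```

Whether a given server treats mixed-case paths as the same resource (as JIRA's 'hdFS-2316' redirect does) is application behavior, not URI semantics.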
[jira] [Commented] (HDFS-2532) TestDfsOverAvroRpc timing out in trunk
[ https://issues.apache.org/jira/browse/HDFS-2532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142443#comment-13142443 ] Doug Cutting commented on HDFS-2532: It would be good to figure out what commit broke this. TestDfsOverAvroRpc timing out in trunk -- Key: HDFS-2532 URL: https://issues.apache.org/jira/browse/HDFS-2532 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Todd Lipcon Priority: Critical Attachments: hdfs-2532-make-timeout.txt java.io.IOException: java.io.IOException: Unknown protocol: org.apache.hadoop.ipc.AvroRpcEngine$TunnelProtocol occurs while starting up the DN, and then it hangs waiting for the MiniCluster to start.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142445#comment-13142445 ] Eli Collins commented on HDFS-2130: --- Never mind, I see from looking at the patch that the DN re-checksums when the client and stored checksums differ, so we don't have mixed on-disk checksums. Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142459#comment-13142459 ] Todd Lipcon commented on HDFS-2130: --- Right. The patch as attached now implements #2 -- the DN recalculates to match the on-disk checksum format. It only supports switching the checksum algorithm -- the number of checksum bytes has to be the same, since that determines packet sizes, etc., and changing it would be a more complicated patch. Nothing is stopping us from addressing that later, but it's not the pressing need. Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
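Editor's illustration (not part of the original thread): CRC32 and CRC32C both produce 4-byte checksums but use different polynomials, which is why the algorithm can be swapped without changing bytes-per-checksum or packet layout, while the stored values still differ. A minimal sketch using the JDK's built-in implementations (java.util.zip.CRC32C requires Java 9+):

```java
import java.util.zip.CRC32;
import java.util.zip.CRC32C;

public class ChecksumDemo {
    public static void main(String[] args) {
        byte[] chunk = "the same data chunk".getBytes();

        CRC32 crc32 = new CRC32();    // classic CRC-32 polynomial (the old default)
        crc32.update(chunk, 0, chunk.length);

        CRC32C crc32c = new CRC32C(); // Castagnoli polynomial, SSE4.2-accelerated in hardware
        crc32c.update(chunk, 0, chunk.length);

        // Both values fit in 4 bytes, so the on-disk checksum layout is unchanged,
        // but the values differ -- mixing algorithms on one block would break verification.
        System.out.println(Long.toHexString(crc32.getValue()));
        System.out.println(Long.toHexString(crc32c.getValue()));
    }
}
```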
[jira] [Updated] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2129: -- Attachment: hdfs-2129.txt Updated patch fixes failing tests. Also added unit tests for DirectBufferPool. Simplify BlockReader to not inherit from FSInputChecker --- Key: HDFS-2129 URL: https://issues.apache.org/jira/browse/HDFS-2129 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client, performance Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0 Attachments: hdfs-2129-benchmark.png, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, seq-read-1gb-bench.png BlockReader is currently quite complicated since it has to conform to the FSInputChecker inheritance structure. It would be much simpler to implement it standalone. Benchmarking indicates it's slightly faster, as well.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142476#comment-13142476 ] Eli Collins commented on HDFS-2130: --- After paging in the relevant info, I think approach #2 makes sense. I don't think we'll need to change the chunk size, and the slight performance hit applies only when filling up the last block of files being appended that were created in previous releases (they can always switch the default if they really care, but they shouldn't need to). +1 to the latest patch. Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
[jira] [Updated] (HDFS-1580) Add interface for generic Write Ahead Logging mechanisms
[ https://issues.apache.org/jira/browse/HDFS-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Kelly updated HDFS-1580: - Attachment: HDFS-1580.diff Thanks for the review Todd. I've uploaded a new patch which addresses your comments. Add interface for generic Write Ahead Logging mechanisms Key: HDFS-1580 URL: https://issues.apache.org/jira/browse/HDFS-1580 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ivan Kelly Assignee: Jitendra Nath Pandey Fix For: HA branch (HDFS-1623), 0.24.0 Attachments: EditlogInterface.1.pdf, EditlogInterface.2.pdf, EditlogInterface.3.pdf, HDFS-1580+1521.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.txt
[jira] [Created] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock, which is completely unnecessary.
[jira] [Updated] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2533: -- Attachment: hdfs-2533.txt Here's the simple patch. The reason this is correct is as follows: - getBlockFile() doesn't itself access any in-memory structures. It calls validateBlockFile. - validateBlockFile is unsynchronized. It also doesn't access any structures. It calls getFile() which _is_ synchronized (since it accesses in-memory state). Then it calls f.exists(). - It's safe to call f.exists() outside the lock because, even if we held the lock, someone could remove the file just after we exit this function. Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor Attachments: hdfs-2533.txt HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock, which is completely unnecessary.
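Editor's illustration of the locking argument above (a hypothetical minimal sketch; all class and method names here are invented and this is not the actual FSDataset code): the lock covers only the shared in-memory map, while the filesystem existence check runs outside it, since holding the lock could not prevent the file from disappearing right after return anyway.

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

class SimpleBlockFileMap {
    private final Map<Long, File> blockMap = new HashMap<>();

    // Synchronized: reads shared in-memory state.
    synchronized File getFile(long blockId) {
        return blockMap.get(blockId);
    }

    synchronized void put(long blockId, File f) {
        blockMap.put(blockId, f);
    }

    // Unsynchronized: f.exists() hits the filesystem. Holding the lock here
    // would not help -- another thread could delete the file immediately
    // after this method returns, lock or no lock.
    File validateBlockFile(long blockId) {
        File f = getFile(blockId);
        if (f != null && f.exists()) {
            return f;
        }
        return null;
    }
}
```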
[jira] [Updated] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2533: -- Status: Patch Available (was: Open) Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor Attachments: hdfs-2533.txt HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.
[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly
[ https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142508#comment-13142508 ] Todd Lipcon commented on HDFS-1172: --- I'm worried that there are some other bugs lurking here -- i.e. the fact that our test coverage doesn't check this means that our understanding of the state of the world is somehow broken. So I'm hesitant to commit a change here until we really understand what's going on. If some other folks who know this area of the code well can take a look, I'd be more inclined to commit for 23. Blocks in newly completed files are considered under-replicated too quickly --- Key: HDFS-1172 URL: https://issues.apache.org/jira/browse/HDFS-1172 Project: Hadoop HDFS Issue Type: Bug Components: name-node Affects Versions: 0.21.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0 Attachments: HDFS-1172.patch, hdfs-1172.txt, replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch I've seen this for a long time, and imagine it's a known issue, but couldn't find an existing JIRA. It often happens that we see the NN schedule replication on the last block of files very quickly after they're completed, before the other DNs in the pipeline have a chance to report the new block. This results in a lot of extra replication work on the cluster, as we replicate the block and then end up with multiple excess replicas which are very quickly deleted.
[jira] [Commented] (HDFS-2525) Race between BlockPoolSliceScanner and append
[ https://issues.apache.org/jira/browse/HDFS-2525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142512#comment-13142512 ] Eli Collins commented on HDFS-2525: --- This patch should also update TestAppendDifferentChecksums to remove the disabling of the block scanner which was introduced in HDFS-2130 (see the comment in the test for details). Race between BlockPoolSliceScanner and append - Key: HDFS-2525 URL: https://issues.apache.org/jira/browse/HDFS-2525 Project: Hadoop HDFS Issue Type: Bug Components: data-node Affects Versions: 0.23.0 Reporter: Todd Lipcon Priority: Critical I wrote a test which runs append() in a loop on a single file with a single replica, appending 0~100 bytes each time. If this races with the BlockPoolSliceScanner, I observe the BlockPoolSliceScanner getting FNFE, then reporting the block as bad to the NN. This causes the writer thread to loop forever on completeFile() since it doesn't see a valid replica.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142519#comment-13142519 ] Hadoop QA commented on HDFS-2130: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12501994/hdfs-2130.txt against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 14 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestAbandonBlock +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1521//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/1521//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1521//console This message is automatically generated. Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline. 
[jira] [Commented] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142522#comment-13142522 ] Eli Collins commented on HDFS-2533: --- +1 Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor Attachments: hdfs-2533.txt HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.
[jira] [Updated] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2129: -- Attachment: hdfs-2129.txt A little more improvement to make the DirectBufferPool non-blocking. I did some testing with TestParallelRead and saw that the naive synchronization was actually getting contended. So it now uses ConcurrentLinkedQueues and a ConcurrentHashMap. Simplify BlockReader to not inherit from FSInputChecker --- Key: HDFS-2129 URL: https://issues.apache.org/jira/browse/HDFS-2129 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client, performance Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0 Attachments: hdfs-2129-benchmark.png, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, seq-read-1gb-bench.png BlockReader is currently quite complicated since it has to conform to the FSInputChecker inheritance structure. It would be much simpler to implement it standalone. Benchmarking indicates it's slightly faster, as well.
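Editor's illustration of the non-blocking pool design mentioned above (a hypothetical minimal sketch; the real DirectBufferPool in the patch may differ, e.g. by holding weak references): buffers are pooled per size in lock-free queues so concurrent readers never contend on a monitor.

```java
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

class SimpleDirectBufferPool {
    // One lock-free queue of recycled buffers per buffer size.
    private final ConcurrentHashMap<Integer, Queue<ByteBuffer>> pools =
        new ConcurrentHashMap<>();

    ByteBuffer get(int size) {
        Queue<ByteBuffer> q = pools.get(size);
        ByteBuffer buf = (q == null) ? null : q.poll(); // non-blocking dequeue
        if (buf == null) {
            buf = ByteBuffer.allocateDirect(size);      // pool empty: allocate fresh
        }
        buf.clear();
        return buf;
    }

    void release(ByteBuffer buf) {
        // computeIfAbsent is atomic on ConcurrentHashMap; no synchronized block needed.
        pools.computeIfAbsent(buf.capacity(), k -> new ConcurrentLinkedQueue<>())
             .add(buf);
    }
}
```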
[jira] [Commented] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142531#comment-13142531 ] Todd Lipcon commented on HDFS-2129:
Here are some benchmarks from TestParallelRead, with the number of iterations jacked up and 100% random reads (instead of a seq/random mix). I did the benchmarks on top of HDFS-2533 since otherwise I was ending up blocked on getBlockFile everywhere. These benchmarks are also on top of HDFS-2130 (CRC32C as default). The middle column is the new code without native libraries available. The last column is with native code available, taking advantage of the SSE4.2 CRC32C implementation in trunk.

| Threads | Trunk | HDFS-2533 | HDFS-2533 + HDFS-2129 (nonative) | HDFS-2533 + HDFS-2129 + native |
| 4 | 226556 KB/s | 236065 KB/sec (1.04x) | 231979 KB/sec (1.02x) | 285824 KB/sec (1.26x) |
| 8 | 410114 KB/s | 453107 KB/sec (1.10x) | 447927 KB/sec (1.02x) | 549027 KB/sec (1.33x) |
| 16 | 377474 KB/s | 454362 KB/sec (1.20x) | 457497 KB/sec (1.20x) | 526224 KB/sec (1.39x) |

Further gains will come when HDFS-1148 is finished -- the 16-thread test in particular ends up with a lot of contention on the FSDataset lock.
[jira] [Commented] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142554#comment-13142554 ] Todd Lipcon commented on HDFS-2129:
Just for the sake of amusement, results from Apache 0.20.2:

| Threads | 0.20.2 | Total speedup in trunk after patches |
| 4 | 106197 KB/sec | 2.69x |
| 8 | 186860 KB/sec | 2.93x |
| 16 | 246712 KB/sec | 2.13x |

Should make HBase folks happy! :)
[jira] [Commented] (HDFS-2532) TestDfsOverAvroRpc timing out in trunk
[ https://issues.apache.org/jira/browse/HDFS-2532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142562#comment-13142562 ] Hadoop QA commented on HDFS-2532:
-1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12502015/hdfs-2532-make-timeout.txt against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 6 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestDfsOverAvroRpc
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1522//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1522//console This message is automatically generated.

TestDfsOverAvroRpc timing out in trunk
Key: HDFS-2532 URL: https://issues.apache.org/jira/browse/HDFS-2532 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 0.24.0 Reporter: Todd Lipcon Priority: Critical Attachments: hdfs-2532-make-timeout.txt

java.io.IOException: java.io.IOException: Unknown protocol: org.apache.hadoop.ipc.AvroRpcEngine$TunnelProtocol occurs while starting up the DN, and then it hangs waiting for the MiniCluster to start.
[jira] [Commented] (HDFS-2532) TestDfsOverAvroRpc timing out in trunk
[ https://issues.apache.org/jira/browse/HDFS-2532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142579#comment-13142579 ] Doug Cutting commented on HDFS-2532:
Binary search of commits points to r1179877 (HDFS-2181). This test works if you update to r1179876 but hangs when you update to r1179877. I can start to look at why.
[jira] [Resolved] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey resolved HDFS-2416.
Resolution: Fixed
Committed.

distcp with a webhdfs uri on a secure cluster fails
Key: HDFS-2416 URL: https://issues.apache.org/jira/browse/HDFS-2416 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 0.20.205.0 Reporter: Arpit Gupta Assignee: Jitendra Nath Pandey Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0 Attachments: HDFS-2416-branch-0.20-security.6.patch, HDFS-2416-branch-0.20-security.7.patch, HDFS-2416-branch-0.20-security.8.patch, HDFS-2416-branch-0.20-security.patch, HDFS-2416-trunk.patch, HDFS-2416-trunk.patch, HDFS-2419-branch-0.20-security.patch, HDFS-2419-branch-0.20-security.patch
[jira] [Commented] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142611#comment-13142611 ] Hudson commented on HDFS-2416: -- Integrated in Hadoop-Mapreduce-0.23-Commit #149 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/149/]) Merged r1196434 and r1196386 from trunk for HADOOP-7792 and HDFS-2416. jitendra : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196812 Files : * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/TestDelegationToken.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java
[jira] [Commented] (HDFS-2528) webhdfs rest call to a secure dn fails when a token is sent
[ https://issues.apache.org/jira/browse/HDFS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142618#comment-13142618 ] Jitendra Nath Pandey commented on HDFS-2528:
I think it makes sense to make a similar change for hftp also, although hftp doesn't face this issue. Should we create a jira for hftp? +1 for the patch.

webhdfs rest call to a secure dn fails when a token is sent
Key: HDFS-2528 URL: https://issues.apache.org/jira/browse/HDFS-2528 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 0.20.205.0 Reporter: Arpit Gupta Assignee: Tsz Wo (Nicholas), SZE Attachments: h2528_2001.patch, h2528_2001_0.20s.patch, h2528_2001b.patch, h2528_2001b_0.20s.patch, h2528_2002.patch, h2528_2002_0.20s.patch

curl -L -u : --negotiate -i "http://NN:50070/webhdfs/v1/tmp/webhdfs_data/file_small_data.txt?op=OPEN"

The following exception is thrown by the datanode when the redirect happens:

{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Call to failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]"}}

Interestingly, when using ./bin/hadoop with a webhdfs path we are able to cat or tail a file successfully.
[jira] [Commented] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142627#comment-13142627 ] stack commented on HDFS-2129:
@me happy
[jira] [Updated] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2533:
Attachment: hdfs-2533.txt

Slightly improved version. I found it was pretty trivial to fix contention in two other places: in these places we were taking the lock around a file.exists() call unnecessarily, since we were about to open the file for read right afterwards. Given that, the exists check is unnecessary - we'll get FileNotFoundException when we try to read the file. With this patch the numbers improve to:

| Threads | Trunk | HDFS-2533v2 |
| 4 | 226556 KB/s | 237805 KB/sec (1.05x) |
| 8 | 410114 KB/s | 474560 KB/sec (1.15x) |
| 16 | 377474 KB/s | 499399 KB/sec (1.32x) |
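The exists()-check removal described here can be sketched as follows. This is an illustrative pattern under assumed names (OpenWithoutExistsCheck, openBlockFile are hypothetical), not the actual patch:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch: rather than taking a lock, calling f.exists(), and then
// opening the file, just open it directly. The open itself throws
// FileNotFoundException if the file is gone, which is exactly the
// condition the exists() pre-check was guarding against.
public class OpenWithoutExistsCheck {
  static FileInputStream openBlockFile(File f) throws IOException {
    // No synchronized block and no f.exists() pre-check.
    return new FileInputStream(f);
  }

  public static void main(String[] args) {
    try {
      openBlockFile(new File("/nonexistent/blk_1234"));
      System.out.println("opened");
    } catch (FileNotFoundException e) {
      System.out.println("missing");
    } catch (IOException e) {
      System.out.println("io error");
    }
  }
}
```

The design point: the exists()+open pair was racy anyway (the file could vanish between the two calls), so relying on the open's exception is both faster and no less correct.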
[jira] [Commented] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142648#comment-13142648 ] Todd Lipcon commented on HDFS-2129:
On the v2 patch on HDFS-2533, the numbers improve to:

| Threads | Trunk | HDFS-2533v2 | HDFS-2533 + HDFS-2129 (nonative) | HDFS-2533 + HDFS-2129 + native |
| 4 | 226556 KB/s | 237805 KB/sec (1.05x) | 252639 KB/sec (1.12x) | 298635 KB/sec (1.31x) |
| 8 | 410114 KB/s | 474560 KB/sec (1.15x) | 476530 KB/sec (1.16x) | 632547 KB/sec (1.54x) |
| 16 | 377474 KB/s | 499399 KB/sec (1.32x) | 487537 KB/sec (1.29x) | 634531 KB/sec (1.68x) |

(8-threaded case now 3.39x vs 0.20.2!)
[jira] [Created] (HDFS-2534) Remove RemoteBlockReader and rename RemoteBlockReader2
Remove RemoteBlockReader and rename RemoteBlockReader2
Key: HDFS-2534 URL: https://issues.apache.org/jira/browse/HDFS-2534 Project: Hadoop HDFS Issue Type: Improvement Components: data-node Affects Versions: 0.24.0 Reporter: Eli Collins

HDFS-2129 introduced a new BlockReader implementation and preserved the old one, which can be selected via a config option as a fallback in 23. For 24, let's remove RemoteBlockReader, rename RemoteBlockReader2, and remove the config option.
[jira] [Commented] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142656#comment-13142656 ] Hadoop QA commented on HDFS-2129:
-1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12502036/hdfs-2129.txt against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 23 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
-1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warning.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestFileAppend2 org.apache.hadoop.hdfs.TestBalancerBandwidth
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1524//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/1524//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1524//console This message is automatically generated.
[jira] [Commented] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142651#comment-13142651 ] Eli Collins commented on HDFS-2129:
@Todd, Spectacular! Patch looks good - comments follow; otherwise +1.
* Make sense to remove BlockReader#read() while we're at it?
* Nit: the TODO in verifyPacketChecksums should be an NB, right, since we want to preserve the old behavior (realize this was copied, so can change in both places)
* DBP#countBuffersOfSize doesn't need to have the side effect of removing buffers
* DBP#returnBuffer needs a javadoc
* Not your change, but please put the return at TestConnCache line 90 on its own line.

I filed HDFS-2534 to remove the old block reader and config in 24, so we have the fallback for 23 but don't maintain two copies forever.
[jira] [Updated] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2533:
Component/s: performance
[jira] [Commented] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142673#comment-13142673 ] Hadoop QA commented on HDFS-2533:
-1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12502034/hdfs-2533.txt against trunk revision .

+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestAbandonBlock
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1523//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1523//console This message is automatically generated.
[jira] [Updated] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2130:
Attachment: hdfs-2130.txt

Fix for the findbugs warning. There was a needless null check on {{streams}} in {{BlockReceiver}}. Both of the implementations for {{createStreams}} always either return a non-null object or throw an exception.

Switch default checksum to CRC32C
Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt

Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
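The findbugs fix described here follows a general pattern: if a factory method either returns non-null or throws, a later null check on its result is dead code. A minimal illustration, with hypothetical names (CreateStreamsExample is not the BlockReceiver code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CreateStreamsExample {
  // Like the createStreams() implementations described above: never
  // returns null; failure is signaled by an exception instead.
  static OutputStream createStreams() throws IOException {
    return new ByteArrayOutputStream();
  }

  public static void main(String[] args) throws IOException {
    OutputStream streams = createStreams();
    // Before the fix: if (streams != null) { streams.close(); }
    // The null branch can never be taken, which findbugs flags; so just:
    streams.close();
  }
}
```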
[jira] [Commented] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142679#comment-13142679 ] Hudson commented on HDFS-2416:
Integrated in Hadoop-Hdfs-0.23-Commit #139 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/139/]) Merged r1196434 and r1196386 from trunk for HADOOP-7792 and HDFS-2416. jitendra : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196812 (same file list as the Hadoop-Mapreduce-0.23-Commit #149 notification)
[jira] [Commented] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13142681#comment-13142681 ] Hudson commented on HDFS-2416:
Integrated in Hadoop-Common-0.23-Commit #138 (See [https://builds.apache.org/job/Hadoop-Common-0.23-Commit/138/]) Merged r1196434 and r1196386 from trunk for HADOOP-7792 and HDFS-2416. jitendra : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1196812 (same file list as the Hadoop-Mapreduce-0.23-Commit #149 notification)
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142693#comment-13142693 ] Eli Collins commented on HDFS-2130: --- Update looks good. Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
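The efficiency claim in the issue description is why the two algorithms are not interchangeable: CRC32 and CRC32C use different polynomials and produce different checksums for the same bytes, which is exactly why appending to a block written under the old checksum requires pipeline handshaking. As a minimal illustration only (using the JDK's `java.util.zip` classes, where `CRC32C` is Java 9+; this is not Hadoop's own checksum code), the two can be compared on the standard "123456789" check input:

```java
import java.util.zip.CRC32;
import java.util.zip.CRC32C;

// Minimal sketch (JDK java.util.zip, Java 9+), not Hadoop's implementation:
// the two algorithms use different polynomials and give different values
// for the same input, which is why mixed-checksum append needs a handshake.
public class ChecksumDemo {
    static long crc32(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    static long crc32c(byte[] data) {
        CRC32C c = new CRC32C();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] check = "123456789".getBytes();
        // Well-known check values for the two polynomials:
        System.out.printf("CRC32  = %08X%n", crc32(check));  // CBF43926
        System.out.printf("CRC32C = %08X%n", crc32c(check)); // E3069283
    }
}
```

CRC32C's practical advantage at the time came from hardware support (the SSE4.2 `crc32` instruction) and faster software implementations.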
[jira] [Commented] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142698#comment-13142698 ] Todd Lipcon commented on HDFS-2533: --- Test results are due to TestDfsOverAvroRpc timeout. The lack of tests is because these code paths are covered by many other tests (every test that reads data!) Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node, performance Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor Attachments: hdfs-2533.txt, hdfs-2533.txt HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.
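The change described here amounts to dropping a method-level lock from a read path whose backing state can be read safely without the dataset-wide monitor. A hypothetical sketch of the shape of such a fix (invented names, not the actual FSDataset code):

```java
import java.io.File;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not the real FSDataset: a block-id -> file lookup
// backed by a concurrent map. Because ConcurrentHashMap.get is thread-safe,
// getBlockFile needs no 'synchronized' modifier, so parallel readers are
// not serialized on a single monitor (the contention HDFS-1148 describes).
public class BlockFileMap {
    private final ConcurrentHashMap<Long, File> blockFiles = new ConcurrentHashMap<>();

    public void addBlock(long blockId, File f) {
        blockFiles.put(blockId, f);
    }

    // Previously this kind of method would have been 'public synchronized',
    // forcing every concurrent read through one lock.
    public File getBlockFile(long blockId) {
        return blockFiles.get(blockId);
    }
}
```

With the monitor gone, a workload like TestParallelRead's many concurrent readers no longer contends on a single lock just to resolve a block's file.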
[jira] [Updated] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2129: -- Attachment: hdfs-2129.txt Attached patch addresses the findbugs warning (finalize() method should be protected) and also addresses all of Eli's points above. Simplify BlockReader to not inherit from FSInputChecker --- Key: HDFS-2129 URL: https://issues.apache.org/jira/browse/HDFS-2129 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client, performance Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0 Attachments: hdfs-2129-benchmark.png, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, seq-read-1gb-bench.png BlockReader is currently quite complicated since it has to conform to the FSInputChecker inheritance structure. It would be much simpler to implement it standalone. Benchmarking indicates it's slightly faster, as well.
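The findbugs warning quoted above is the standard "finalizer should be protected, not public" complaint: `Object.finalize()` is declared protected, and widening it to public lets arbitrary callers invoke cleanup directly. A hypothetical sketch of the fix's shape (invented class, not the actual BlockReader):

```java
// Hypothetical sketch (not the actual BlockReader code): findbugs flags a
// public finalize(); declaring it protected matches Object.finalize() and
// keeps arbitrary callers from invoking teardown directly.
public class ReaderSketch {
    private boolean closed = false;

    public void close() {
        closed = true;
    }

    public boolean isClosed() {
        return closed;
    }

    @Override
    protected void finalize() throws Throwable { // protected, not public
        try {
            if (!closed) {
                close(); // last-resort cleanup if the caller forgot
            }
        } finally {
            super.finalize();
        }
    }
}
```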
[jira] [Commented] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142713#comment-13142713 ] Eli Collins commented on HDFS-2129: --- +1 latest patch looks great Simplify BlockReader to not inherit from FSInputChecker --- Key: HDFS-2129 URL: https://issues.apache.org/jira/browse/HDFS-2129 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client, performance Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0 Attachments: hdfs-2129-benchmark.png, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, seq-read-1gb-bench.png BlockReader is currently quite complicated since it has to conform to the FSInputChecker inheritance structure. It would be much simpler to implement it standalone. Benchmarking indicates it's slightly faster, as well.
[jira] [Commented] (HDFS-2129) Simplify BlockReader to not inherit from FSInputChecker
[ https://issues.apache.org/jira/browse/HDFS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142721#comment-13142721 ] Todd Lipcon commented on HDFS-2129: --- Thanks. Will commit pending Hudson results on the latest. Simplify BlockReader to not inherit from FSInputChecker --- Key: HDFS-2129 URL: https://issues.apache.org/jira/browse/HDFS-2129 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client, performance Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0 Attachments: hdfs-2129-benchmark.png, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, hdfs-2129.txt, seq-read-1gb-bench.png BlockReader is currently quite complicated since it has to conform to the FSInputChecker inheritance structure. It would be much simpler to implement it standalone. Benchmarking indicates it's slightly faster, as well.
[jira] [Updated] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2130: -- Resolution: Fixed Fix Version/s: 0.23.1 0.24.0 Release Note: The default checksum algorithm used on HDFS is now CRC32C. Data from previous versions of Hadoop can still be read backwards-compatibly. Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to branch-0.23 for 0.23.1 (since .0 branched already). Committed to trunk. Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
[jira] [Commented] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142736#comment-13142736 ] Hadoop QA commented on HDFS-2533: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12502057/hdfs-2533.txt against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestFileAppend2 org.apache.hadoop.hdfs.TestBalancerBandwidth +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1526//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1526//console This message is automatically generated. Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node, performance Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor Attachments: hdfs-2533.txt, hdfs-2533.txt HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. 
This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.
[jira] [Updated] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-2533: -- Resolution: Fixed Fix Version/s: 0.23.1 0.24.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Fixed in branch-0.23 for 0.23.1. Fixed in trunk for 0.24. Thanks for the reviews! Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node, performance Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2533.txt, hdfs-2533.txt HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142745#comment-13142745 ] Hudson commented on HDFS-2130: -- Integrated in Hadoop-Mapreduce-0.23-Commit #150 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/150/]) HDFS-2130. Switch default checksum to CRC32C. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196888 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendDifferentChecksum.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142746#comment-13142746 ] Hudson commented on HDFS-2130: -- Integrated in Hadoop-Mapreduce-trunk-Commit #1259 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1259/]) HDFS-2130. Switch default checksum to CRC32C. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196889 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendDifferentChecksum.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
[jira] [Commented] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142758#comment-13142758 ] Hudson commented on HDFS-2533: -- Integrated in Hadoop-Common-0.23-Commit #139 (See [https://builds.apache.org/job/Hadoop-Common-0.23-Commit/139/]) HDFS-2533. Remove needless synchronization on some FSDataSet methods. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196901 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node, performance Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2533.txt, hdfs-2533.txt HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142759#comment-13142759 ] Hudson commented on HDFS-2130: -- Integrated in Hadoop-Common-0.23-Commit #139 (See [https://builds.apache.org/job/Hadoop-Common-0.23-Commit/139/]) HDFS-2130. Switch default checksum to CRC32C. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196888 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendDifferentChecksum.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142762#comment-13142762 ] Hudson commented on HDFS-2130: -- Integrated in Hadoop-Hdfs-0.23-Commit #140 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/140/]) HDFS-2130. Switch default checksum to CRC32C. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196888 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendDifferentChecksum.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java * 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.
[jira] [Commented] (HDFS-2533) Remove needless synchronization on FSDataSet.getBlockFile
[ https://issues.apache.org/jira/browse/HDFS-2533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142763#comment-13142763 ] Hudson commented on HDFS-2533: -- Integrated in Hadoop-Common-trunk-Commit #1237 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1237/]) HDFS-2533. Remove needless synchronization on some FSDataSet methods. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196902 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java Remove needless synchronization on FSDataSet.getBlockFile - Key: HDFS-2533 URL: https://issues.apache.org/jira/browse/HDFS-2533 Project: Hadoop HDFS Issue Type: Improvement Components: data-node, performance Affects Versions: 0.23.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Priority: Minor Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2533.txt, hdfs-2533.txt HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.
[jira] [Commented] (HDFS-2130) Switch default checksum to CRC32C
[ https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142764#comment-13142764 ] Hudson commented on HDFS-2130: -- Integrated in Hadoop-Common-trunk-Commit #1237 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1237/]) HDFS-2130. Switch default checksum to CRC32C. Contributed by Todd Lipcon. todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196889 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendDifferentChecksum.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java Switch default checksum to CRC32C - Key: HDFS-2130 URL: https://issues.apache.org/jira/browse/HDFS-2130 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs client Reporter: Todd Lipcon Assignee: Todd Lipcon Fix For: 0.24.0, 0.23.1 Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt Once the other subtasks/parts of HDFS-2080 are complete, CRC32C will be a much more efficient checksum algorithm than CRC32. Hence we should change the default checksum to CRC32C. However, in order to continue to support append against blocks created with the old checksum, we will need to implement some kind of handshaking in the write pipeline.