[jira] [Updated] (HDFS-2494) [webhdfs] When Getting the file using OP=OPEN with DN http address, ESTABLISHED sockets are growing.
[ https://issues.apache.org/jira/browse/HDFS-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uma Maheswara Rao G updated HDFS-2494:
--------------------------------------
    Description:
As part of the reliability test:
Scenario:
Initially check the socket count --- there are around 42 sockets.
Open the file with the DataNode http address using the op=OPEN request parameter, about 500 times in a loop.
Wait for some time and check the socket count again --- thousands of ESTABLISHED sockets have accumulated (~2052).
Here is the netstat result (the same command was run nine times; the count never dropped):
C:\Users\uma>netstat | grep 127.0.0.1 | grep ESTABLISHED | wc -l
2042
This count is not coming down.

  was: the same scenario text, except that the final line read "This count is coming down."

> [webhdfs] When Getting the file using OP=OPEN with DN http address,
> ESTABLISHED sockets are growing.
>
>                 Key: HDFS-2494
>                 URL: https://issues.apache.org/jira/browse/HDFS-2494
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.24.0
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
>
> (Description as above.)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
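The scenario above boils down to socket-backed op=OPEN streams that are never released, so their TCP connections stay ESTABLISHED. A minimal, hypothetical Java sketch of the pattern (a counted stand-in resource models the real HTTP stream; this is not the webhdfs client or DataNode code):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration: each op=OPEN response holds a socket-backed
// stream; while the stream is open, its TCP connection stays ESTABLISHED.
class OpenLoopSketch {
    static final AtomicInteger liveStreams = new AtomicInteger();

    // stand-in for "GET http://<dn>:<port>/...?op=OPEN"
    static Closeable openViaHttp() {
        liveStreams.incrementAndGet();          // one live socket per stream
        return liveStreams::decrementAndGet;    // close() releases the socket
    }

    // the reliability test's loop, with the close that prevents the leak
    static int openAndCloseLoop(int iterations) {
        for (int i = 0; i < iterations; i++) {
            Closeable in = openViaHttp();
            try {
                // ... read the file contents ...
            } finally {
                try {
                    in.close();  // omit this and `iterations` sockets stay open
                } catch (IOException ignored) {
                }
            }
        }
        return liveStreams.get();   // live sockets remaining after the loop
    }
}
```

With the close in place the live count returns to its baseline after 500 iterations; without it, the count grows by one per iteration, which matches the stuck netstat numbers above.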
[jira] [Commented] (HDFS-2489) Move commands Finalize and Register out of DatanodeCommand class.
[ https://issues.apache.org/jira/browse/HDFS-2489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133812#comment-13133812 ]

Hadoop QA commented on HDFS-2489:
---------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12500387/HDFS-2489.patch
against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
  org.apache.hadoop.hdfs.server.namenode.TestBackupNode
  org.apache.hadoop.hdfs.TestFileAppend2
  org.apache.hadoop.hdfs.TestBalancerBandwidth
  org.apache.hadoop.hdfs.TestRestartDFS
  org.apache.hadoop.hdfs.TestDistributedFileSystem
  org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1424//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1424//console
This message is automatically generated.

> Move commands Finalize and Register out of DatanodeCommand class.
> -----------------------------------------------------------------
>
>                 Key: HDFS-2489
>                 URL: https://issues.apache.org/jira/browse/HDFS-2489
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 0.23.0, 0.24.0
>         Environment: There are other subclasses available in separate files.
> These commands should be moved to separate files as well.
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>         Attachments: HDFS-2489.patch, HDFS-2489.patch
>
--
This message is automatically generated by JIRA.
[jira] [Updated] (HDFS-2488) Separate datatypes for InterDatanodeProtocol
[ https://issues.apache.org/jira/browse/HDFS-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HDFS-2488:
----------------------------------
    Attachment: HDFS-2488.txt

Reverted the unnecessary method ReplicaState#valueOf() and switched to using ReplicaState#getState().

> Separate datatypes for InterDatanodeProtocol
> --------------------------------------------
>
>                 Key: HDFS-2488
>                 URL: https://issues.apache.org/jira/browse/HDFS-2488
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>    Affects Versions: 0.23.0, 0.24.0
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>         Attachments: HDFS-2488.txt, HDFS-2488.txt
>
> This jira separates, for InterDatanodeProtocol, the wire types from the types
> used by the client and server, similar to HDFS-2181.
--
This message is automatically generated by JIRA.
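The separation this jira describes can be sketched roughly as follows. The enum values mirror HDFS's ReplicaState, and getState() plays the decoding role mentioned in the comment above, but the encode/decode helpers here are illustrative, not the HDFS-2488 patch: the wire carries a compact code that is converted at the protocol boundary, so internal server types can evolve independently of the on-the-wire format.

```java
// Illustrative sketch, not HDFS source: a wire-facing code plus a
// ReplicaState#getState()-style decoder at the protocol boundary.
class ReplicaStateSketch {
    enum ReplicaState {
        FINALIZED, RBW, RWR, RUR, TEMPORARY;

        // decode the compact code received over the wire
        static ReplicaState getState(int code) {
            return values()[code];
        }

        // encode for the wire message
        int getValue() {
            return ordinal();
        }
    }
}
```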
[jira] [Commented] (HDFS-2316) webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
[ https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133797#comment-13133797 ]

Sanjay Radia commented on HDFS-2316:
------------------------------------
Versioning: We were going with a previous suggestion to add a version parameter when we go to the next version.

> webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
> --------------------------------------------------------------------------
>
>                 Key: HDFS-2316
>                 URL: https://issues.apache.org/jira/browse/HDFS-2316
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Tsz Wo (Nicholas), SZE
>         Attachments: WebHdfsAPI20111020.pdf
>
> We currently have hftp for accessing HDFS over HTTP. However, hftp is a
> read-only FileSystem and does not provide "write" access.
> In HDFS-2284, we propose to have webhdfs for providing a complete FileSystem
> implementation for accessing HDFS over HTTP. This is the umbrella JIRA for
> the tasks.
--
This message is automatically generated by JIRA.
[jira] [Commented] (HDFS-2316) webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
[ https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133793#comment-13133793 ]

Sanjay Radia commented on HDFS-2316:
------------------------------------
> ... embedding byte-ranges in the URL itself.
This was the implementation until a few days ago. It was changed to use the Content-Range header, which is fairly standard and likely allows other tools to work seamlessly.

> webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
> --------------------------------------------------------------------------
>
>                 Key: HDFS-2316
>                 URL: https://issues.apache.org/jira/browse/HDFS-2316
>
> (Issue summary as quoted above.)
--
This message is automatically generated by JIRA.
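For reference, the standard headers the comment refers to look like this. The helper names are illustrative, not webhdfs code: a client asks for a byte slice with a Range request header, and the server describes the returned slice with Content-Range in its 206 Partial Content response, which is why generic HTTP tools handle partial reads without custom URL parameters.

```java
// Illustrative helpers building the standard header values
// (RFC-style "bytes=first-last" and "bytes first-last/total").
class RangeHeaderSketch {
    // value for a client's "Range: bytes=first-last" covering
    // `length` bytes starting at `offset`
    static String rangeValue(long offset, long length) {
        return "bytes=" + offset + "-" + (offset + length - 1);
    }

    // value for the server's "Content-Range: bytes first-last/total"
    // in the 206 Partial Content response
    static String contentRangeValue(long offset, long length, long total) {
        return "bytes " + offset + "-" + (offset + length - 1) + "/" + total;
    }
}
```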
[jira] [Updated] (HDFS-2489) Move commands Finalize and Register out of DatanodeCommand class.
[ https://issues.apache.org/jira/browse/HDFS-2489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HDFS-2489:
----------------------------------
    Attachment: HDFS-2489.patch

Addressed Jitendra's comments.

> Move commands Finalize and Register out of DatanodeCommand class.
> -----------------------------------------------------------------
>
>                 Key: HDFS-2489
>                 URL: https://issues.apache.org/jira/browse/HDFS-2489
>
> (Issue fields as quoted above.)
--
This message is automatically generated by JIRA.
[jira] [Commented] (HDFS-2477) Optimize computing the diff between a block report and the namenode state.
[ https://issues.apache.org/jira/browse/HDFS-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133786#comment-13133786 ]

Hadoop QA commented on HDFS-2477:
---------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12500379/reportDiff.patch-3
against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
  org.apache.hadoop.hdfs.server.namenode.TestBackupNode
  org.apache.hadoop.hdfs.TestFileAppend2
  org.apache.hadoop.hdfs.server.datanode.TestMulitipleNNDataBlockScanner
  org.apache.hadoop.hdfs.TestBalancerBandwidth
  org.apache.hadoop.hdfs.TestRestartDFS
  org.apache.hadoop.hdfs.TestDistributedFileSystem
  org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1423//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1423//console
This message is automatically generated.

> Optimize computing the diff between a block report and the namenode state.
> --------------------------------------------------------------------------
>
>                 Key: HDFS-2477
>                 URL: https://issues.apache.org/jira/browse/HDFS-2477
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: name-node
>            Reporter: Tomasz Nykiel
>            Assignee: Tomasz Nykiel
>         Attachments: reportDiff.patch, reportDiff.patch-2, reportDiff.patch-3
>
> When a block report is processed at the NN, BlockManager.reportDiff
> traverses all blocks contained in the report, and each block that is
> also present in the corresponding datanode descriptor is moved to the
> head of the list of blocks in that datanode descriptor.
> With HDFS-395, the huge majority of the blocks in the report are also present
> in the datanode descriptor, which means that almost every block in the report
> will have to be moved to the head of the list.
> Currently this operation is performed by DatanodeDescriptor.moveBlockToHead,
> which removes a block from a list and then inserts it. In this process, we
> call findDatanode several times (AFAIR 6 times for each moveBlockToHead
> call). findDatanode is relatively expensive, since it linearly goes through
> the triplets to locate the given datanode.
> With this patch, we do some memoization of findDatanode, so we can reclaim 2
> findDatanode calls. Our experiments show that this can improve reportDiff
> (which is executed under the write lock) by around 15%. Currently, with HDFS-395,
> reportDiff is responsible for almost 100% of the block report processing time.
--
This message is automatically generated by JIRA.
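The memoization described above can be sketched in miniature. This is an illustrative model, not the BlockManager code: a linear findDatanode lookup is paid once per moveBlockToHead instead of once per sub-step (remove, then insert), which is the shape of the saving the patch reports.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: moveBlockToHead removes a block from a datanode's
// block list and re-inserts it at the head; each sub-step re-resolves the
// datanode with a linear scan unless the index is memoized and passed along.
class ReportDiffSketch {
    final List<String> datanodes = new ArrayList<>();
    int linearScans = 0;                 // counts lookups actually performed

    int findDatanode(String dn) {        // linear over the "triplets"
        linearScans++;
        return datanodes.indexOf(dn);
    }

    // naive: the remove step and the insert step each resolve the datanode
    void moveToHeadNaive(String dn) {
        int removeIdx = findDatanode(dn);   // for the remove step
        int insertIdx = findDatanode(dn);   // for the insert step
    }

    // memoized: resolve once, reuse the index for both steps
    void moveToHeadMemoized(String dn) {
        int idx = findDatanode(dn);
        // ... remove at idx, then insert at the head using idx ...
    }
}
```

Per block, the memoized form does half the linear scans of the naive form here; the real patch reclaims 2 of roughly 6 findDatanode calls per moveBlockToHead, all under the namesystem write lock.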
[jira] [Commented] (HDFS-2362) More Improvements on NameNode Scalability
[ https://issues.apache.org/jira/browse/HDFS-2362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133781#comment-13133781 ]

Tomasz Nykiel commented on HDFS-2362:
-------------------------------------
I have a general question regarding the JUnit tests. I observed some bizarre behaviour. When running some tests, they fail due to:
"Cannot lock storage /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1. The directory is already locked."
- so the MiniHDFSCluster cannot initialize properly.
I observed on my local machine that sometimes, probably after running some previous tests which fail, the datanode data directory is left in a strange state. For instance, a listing of
hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
shows the files inside (e.g., "current", "in_use.lock" - as far as I remember the names) with "?" for all permissions, and the file owner and group are also shown as "?".
I am not sure why this is happening, and I don't think it is an issue with any of my patches: for example, org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool.testDfsAdminDeleteBlockPool was already failing for this reason previously.

> More Improvements on NameNode Scalability
> -----------------------------------------
>
>                 Key: HDFS-2362
>                 URL: https://issues.apache.org/jira/browse/HDFS-2362
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Hairong Kuang
>
> This jira acts as an umbrella jira to track all the improvements we've done
> recently to improve the Namenode's performance, responsiveness, and hence
> scalability. Those improvements include:
> 1. Incremental block reports (HDFS-395)
> 2. BlockManager.reportDiff optimization for processing block reports (HDFS-2477)
> 3. Upgradable lock to allow simultaneous read operations while reportDiff is in progress in processing block reports (HDFS-2490)
> 4. More CPU-efficient data structures for under-replicated/over-replicated/invalidated blocks (HDFS-2476)
> 5. Increase granularity of write operations in ReplicationMonitor, thus reducing contention for the write lock
> 6. Support variable block sizes
> 7. Release RPC handlers while waiting for the edit log to be synced to disk
> 8. Reduce network traffic pressure on the master rack where the NN is located by lowering the read priority of the replicas on that rack
> 9. A standalone KeepAlive heartbeat thread
> 10. Reduce multiple traversals of the path directory to one for most namespace manipulations
> 11. Move logging out of the write lock section.
--
This message is automatically generated by JIRA.
[jira] [Updated] (HDFS-2477) Optimize computing the diff between a block report and the namenode state.
[ https://issues.apache.org/jira/browse/HDFS-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomasz Nykiel updated HDFS-2477:
--------------------------------
    Attachment: reportDiff.patch-3

> Optimize computing the diff between a block report and the namenode state.
> --------------------------------------------------------------------------
>
>                 Key: HDFS-2477
>                 URL: https://issues.apache.org/jira/browse/HDFS-2477
>
> (Issue description as quoted above.)
--
This message is automatically generated by JIRA.
[jira] [Updated] (HDFS-2477) Optimize computing the diff between a block report and the namenode state.
[ https://issues.apache.org/jira/browse/HDFS-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tomasz Nykiel updated HDFS-2477:
--------------------------------
    Status: Patch Available  (was: Open)

> Optimize computing the diff between a block report and the namenode state.
> --------------------------------------------------------------------------
>
>                 Key: HDFS-2477
>                 URL: https://issues.apache.org/jira/browse/HDFS-2477
>
> (Issue description as quoted above.)
--
This message is automatically generated by JIRA.
[jira] [Commented] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode
[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133743#comment-13133743 ]

Hudson commented on HDFS-2452:
------------------------------
Integrated in Hadoop-Mapreduce-0.23-Commit #43 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/43/])
HDFS-2452. OutOfMemoryError in DataXceiverServer takes down the DataNode. Contributed by Uma Maheswara Rao.
shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187969
Files :
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataXceiverAspects.aj
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataXceiverServer.java

> OutOfMemoryError in DataXceiverServer takes down the DataNode
> -------------------------------------------------------------
>
>                 Key: HDFS-2452
>                 URL: https://issues.apache.org/jira/browse/HDFS-2452
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.22.0, 0.23.0, 0.24.0
>            Reporter: Konstantin Shvachko
>            Assignee: Uma Maheswara Rao G
>             Fix For: 0.22.0, 0.23.0, 0.24.0
>
>         Attachments: HDFS-2452-22Branch.2.patch, HDFS-2452-22branch.1.patch,
> HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, HDFS-2452-22branch.patch,
> HDFS-2452-22branch.patch, HDFS-2452-22branch_with-around_.patch,
> HDFS-2452-22branch_with-around_.patch, HDFS-2452-Trunk-src-fix.patch,
> HDFS-2452-Trunk.patch, HDFS-2452.patch, HDFS-2452.patch
>
> OutOfMemoryError brings down the DataNode when DataXceiverServer tries to spawn
> a new data transfer thread.
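The failure mode this fix addresses can be sketched as follows. This is a hypothetical illustration, not the committed patch: the accept loop spawns one data transfer thread per connection, and an OutOfMemoryError escaping Thread.start() would propagate out and kill the whole DataNode. Catching the Error at the spawn site, dropping the connection, and backing off lets the daemon survive the spike.

```java
// Hedged sketch of guarding thread creation against OutOfMemoryError.
// The ThreadStarter indirection is only here so the OOM path can be
// exercised deterministically; real code would call Thread#start directly.
class AcceptLoopSketch {
    interface ThreadStarter {
        void start(Runnable work);
    }

    // Returns true if the transfer thread was spawned; on OutOfMemoryError
    // the accept loop keeps running instead of letting the Error escape.
    static boolean trySpawn(ThreadStarter starter, Runnable work) {
        try {
            starter.start(work);
            return true;
        } catch (OutOfMemoryError oom) {
            // real code would log the error, close the accepted socket,
            // and sleep briefly before accepting the next connection
            return false;
        }
    }
}
```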
[jira] [Commented] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode
[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133742#comment-13133742 ]

Hudson commented on HDFS-2452:
------------------------------
Integrated in Hadoop-Mapreduce-trunk-Commit #1155 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1155/])
HDFS-2452. OutOfMemoryError in DataXceiverServer takes down the DataNode. Contributed by Uma Maheswara Rao.
shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187965
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataXceiverAspects.aj
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataXceiverServer.java

> OutOfMemoryError in DataXceiverServer takes down the DataNode
> -------------------------------------------------------------
>
>                 Key: HDFS-2452
>                 URL: https://issues.apache.org/jira/browse/HDFS-2452
>
> (Issue fields as quoted above.)
--
This message is automatically generated by JIRA.
[jira] [Commented] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode
[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133733#comment-13133733 ]

Hudson commented on HDFS-2452:
------------------------------
Integrated in Hadoop-Common-0.23-Commit #43 (See [https://builds.apache.org/job/Hadoop-Common-0.23-Commit/43/])
HDFS-2452. OutOfMemoryError in DataXceiverServer takes down the DataNode. Contributed by Uma Maheswara Rao.
shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187969
Files : (the same five branch-0.23 files listed in the Hadoop-Mapreduce-0.23-Commit message above)

> OutOfMemoryError in DataXceiverServer takes down the DataNode
> -------------------------------------------------------------
>
>                 Key: HDFS-2452
>                 URL: https://issues.apache.org/jira/browse/HDFS-2452
>
> (Issue fields as quoted above.)
[jira] [Commented] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode
[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133734#comment-13133734 ]

Hudson commented on HDFS-2452:
------------------------------
Integrated in Hadoop-Hdfs-0.23-Commit #44 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/44/])
HDFS-2452. OutOfMemoryError in DataXceiverServer takes down the DataNode. Contributed by Uma Maheswara Rao.
shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187969
Files : (the same five branch-0.23 files listed in the Hadoop-Mapreduce-0.23-Commit message above)

> OutOfMemoryError in DataXceiverServer takes down the DataNode
> -------------------------------------------------------------
>
>                 Key: HDFS-2452
>                 URL: https://issues.apache.org/jira/browse/HDFS-2452
>
> (Issue fields as quoted above.)
[jira] [Updated] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode
[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HDFS-2452:
--------------------------------------
    Fix Version/s: 0.24.0
                   0.23.0

> OutOfMemoryError in DataXceiverServer takes down the DataNode
> -------------------------------------------------------------
>
>                 Key: HDFS-2452
>                 URL: https://issues.apache.org/jira/browse/HDFS-2452
>
> (Issue fields as quoted above.)
--
This message is automatically generated by JIRA.
[jira] [Commented] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode
[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133731#comment-13133731 ] Konstantin Shvachko commented on HDFS-2452: --- I just committed this to trunk and branch 0.23. Thank you Uma. > OutOfMemoryError in DataXceiverServer takes down the DataNode > - > > Key: HDFS-2452 > URL: https://issues.apache.org/jira/browse/HDFS-2452 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.22.0, 0.23.0, 0.24.0 >Reporter: Konstantin Shvachko >Assignee: Uma Maheswara Rao G > Fix For: 0.22.0 > > Attachments: HDFS-2452-22Branch.2.patch, HDFS-2452-22branch.1.patch, > HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, > HDFS-2452-22branch.patch, HDFS-2452-22branch_with-around_.patch, > HDFS-2452-22branch_with-around_.patch, HDFS-2452-Trunk-src-fix.patch, > HDFS-2452-Trunk.patch, HDFS-2452.patch, HDFS-2452.patch > > > OutOfMemoryError brings down DataNode, when DataXceiverServer tries to spawn > a new data transfer thread.
[jira] [Commented] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode
[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133726#comment-13133726 ] Hudson commented on HDFS-2452: -- Integrated in Hadoop-Common-trunk-Commit #1140 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1140/]) HDFS-2452. OutOfMemoryError in DataXceiverServer takes down the DataNode. Contributed by Uma Maheswara Rao. shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187965 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataXceiverAspects.aj * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataXceiverServer.java > OutOfMemoryError in DataXceiverServer takes down the DataNode > - > > Key: HDFS-2452 > URL: https://issues.apache.org/jira/browse/HDFS-2452 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.22.0, 0.23.0, 0.24.0 >Reporter: Konstantin Shvachko >Assignee: Uma Maheswara Rao G > Fix For: 0.22.0 > > Attachments: HDFS-2452-22Branch.2.patch, HDFS-2452-22branch.1.patch, > HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, > HDFS-2452-22branch.patch, HDFS-2452-22branch_with-around_.patch, > HDFS-2452-22branch_with-around_.patch, HDFS-2452-Trunk-src-fix.patch, > HDFS-2452-Trunk.patch, HDFS-2452.patch, HDFS-2452.patch > > > OutOfMemoryError brings down DataNode, when DataXceiverServer tries to spawn > a new data transfer thread. -- This message is automatically generated by JIRA. 
[jira] [Commented] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode
[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133724#comment-13133724 ] Hudson commented on HDFS-2452: -- Integrated in Hadoop-Hdfs-trunk-Commit #1218 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1218/]) HDFS-2452. OutOfMemoryError in DataXceiverServer takes down the DataNode. Contributed by Uma Maheswara Rao. shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187965 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataXceiverAspects.aj * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataXceiverServer.java > OutOfMemoryError in DataXceiverServer takes down the DataNode > - > > Key: HDFS-2452 > URL: https://issues.apache.org/jira/browse/HDFS-2452 > Project: Hadoop HDFS > Issue Type: Bug > Components: data-node >Affects Versions: 0.22.0, 0.23.0, 0.24.0 >Reporter: Konstantin Shvachko >Assignee: Uma Maheswara Rao G > Fix For: 0.22.0 > > Attachments: HDFS-2452-22Branch.2.patch, HDFS-2452-22branch.1.patch, > HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, > HDFS-2452-22branch.patch, HDFS-2452-22branch_with-around_.patch, > HDFS-2452-22branch_with-around_.patch, HDFS-2452-Trunk-src-fix.patch, > HDFS-2452-Trunk.patch, HDFS-2452.patch, HDFS-2452.patch > > > OutOfMemoryError brings down DataNode, when DataXceiverServer tries to spawn > a new data transfer thread. -- This message is automatically generated by JIRA. 
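The HDFS-2452 commits above address an OutOfMemoryError in DataXceiverServer taking down the whole DataNode. The general pattern — keep the accept loop alive when spawning a data-transfer thread fails, backing off instead of dying — can be sketched as follows. This is a hypothetical illustration (the `ThreadSpawner` interface and `runAll` helper are invented for this sketch), not the actual patch:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: if spawning a worker throws OutOfMemoryError, the server loop
// should back off and retry rather than let the error propagate and kill
// the thread that accepts connections.
public class XceiverLoopSketch {
    interface ThreadSpawner { void spawn(Runnable task); }

    /** Runs all tasks, tolerating OutOfMemoryError from the spawner. */
    static int runAll(Runnable[] tasks, ThreadSpawner spawner) {
        int started = 0;
        for (Runnable task : tasks) {
            boolean done = false;
            while (!done) {
                try {
                    spawner.spawn(task);
                    started++;
                    done = true;
                } catch (OutOfMemoryError oome) {
                    // Back off briefly and retry instead of exiting the loop.
                    try { Thread.sleep(10); } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        return started;
                    }
                }
            }
        }
        return started;
    }

    public static void main(String[] args) {
        AtomicInteger failures = new AtomicInteger(3);
        AtomicInteger ran = new AtomicInteger();
        // Simulated spawner that fails with OOME three times, then succeeds.
        ThreadSpawner flaky = task -> {
            if (failures.getAndDecrement() > 0) throw new OutOfMemoryError("simulated");
            task.run();
        };
        Runnable[] tasks = { ran::incrementAndGet, ran::incrementAndGet };
        int started = runAll(tasks, flaky);
        System.out.println(started + " " + ran.get()); // prints "2 2"
    }
}
```

Catching `Error` subclasses is normally discouraged; the rationale here is that losing one transfer thread is recoverable, while losing the acceptor thread takes down the DataNode.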
[jira] [Commented] (HDFS-1580) Add interface for generic Write Ahead Logging mechanisms
[ https://issues.apache.org/jira/browse/HDFS-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133686#comment-13133686 ] Hadoop QA commented on HDFS-1580: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12500367/HDFS-1580.diff against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 6 new or modified tests. -1 javadoc. The javadoc tool appears to have generated 1 warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.server.namenode.TestBackupNode org.apache.hadoop.hdfs.TestFileAppend2 org.apache.hadoop.hdfs.TestBalancerBandwidth org.apache.hadoop.hdfs.TestRestartDFS org.apache.hadoop.hdfs.TestDistributedFileSystem org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1422//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1422//console This message is automatically generated. 
> Add interface for generic Write Ahead Logging mechanisms > > > Key: HDFS-1580 > URL: https://issues.apache.org/jira/browse/HDFS-1580 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ivan Kelly >Assignee: Jitendra Nath Pandey > Fix For: HA branch (HDFS-1623), 0.24.0 > > Attachments: EditlogInterface.1.pdf, EditlogInterface.2.pdf, > EditlogInterface.3.pdf, HDFS-1580+1521.diff, HDFS-1580.diff, HDFS-1580.diff, > HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, generic_wal_iface.pdf, > generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.txt > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-2494) [webhdfs] When Getting the file using OP=OPEN with DN http address, ESTABLISHED sockets are growing.
[webhdfs] When Getting the file using OP=OPEN with DN http address, ESTABLISHED sockets are growing. Key: HDFS-2494 URL: https://issues.apache.org/jira/browse/HDFS-2494 Project: Hadoop HDFS Issue Type: Bug Components: data-node Affects Versions: 0.24.0 Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G
As part of the reliable test, Scenario:
Initially check the socket count. --- there are around 42 sockets.
Open the file with the DataNode http address using the op=OPEN request parameter about 500 times in a loop.
Wait for some time and check the socket count. --- thousands of ESTABLISHED sockets have accumulated (~2052).
Here is the netstat result:
C:\Users\uma>netstat | grep 127.0.0.1 | grep ESTABLISHED | wc -l
2042
(the same command repeated nine times returns 2042 each time)
This count is not coming down.
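The report measures the leak with `netstat | grep 127.0.0.1 | grep ESTABLISHED | wc -l`. The same measurement can be expressed as a small Java helper (hypothetical, purely to make explicit what the pipeline counts); a count that plateaus instead of dropping, as in the 2042 readings above, suggests connections are held open rather than closed after each op=OPEN request:

```java
import java.util.Arrays;
import java.util.List;

// Counts netstat lines that are ESTABLISHED loopback connections,
// mirroring `netstat | grep 127.0.0.1 | grep ESTABLISHED | wc -l`.
public class SocketCount {
    static long countEstablishedLoopback(List<String> netstatLines) {
        return netstatLines.stream()
            .filter(l -> l.contains("127.0.0.1"))
            .filter(l -> l.contains("ESTABLISHED"))
            .count();
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
            "TCP 127.0.0.1:50075 127.0.0.1:53001 ESTABLISHED",
            "TCP 127.0.0.1:50075 127.0.0.1:53002 ESTABLISHED",
            "TCP 10.0.0.5:50010 10.0.0.6:41000 ESTABLISHED",   // not loopback
            "TCP 127.0.0.1:50075 127.0.0.1:53003 TIME_WAIT");  // not ESTABLISHED
        System.out.println(countEstablishedLoopback(sample)); // prints 2
    }
}
```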
[jira] [Updated] (HDFS-1580) Add interface for generic Write Ahead Logging mechanisms
[ https://issues.apache.org/jira/browse/HDFS-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Kelly updated HDFS-1580: - Attachment: HDFS-1580.diff Removed noop opcode, which I had introduced for testing another thing. Shouldn't have been in this patch. > Add interface for generic Write Ahead Logging mechanisms > > > Key: HDFS-1580 > URL: https://issues.apache.org/jira/browse/HDFS-1580 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ivan Kelly >Assignee: Jitendra Nath Pandey > Fix For: HA branch (HDFS-1623), 0.24.0 > > Attachments: EditlogInterface.1.pdf, EditlogInterface.2.pdf, > EditlogInterface.3.pdf, HDFS-1580+1521.diff, HDFS-1580.diff, HDFS-1580.diff, > HDFS-1580.diff, HDFS-1580.diff, HDFS-1580.diff, generic_wal_iface.pdf, > generic_wal_iface.pdf, generic_wal_iface.pdf, generic_wal_iface.txt > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
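HDFS-1580 proposes a pluggable interface so the NameNode can write its edit log through different write-ahead-log backends. A minimal sketch of what such an interface could look like — all names here (`EditLogJournal`, `InMemoryJournal`, the method signatures) are illustrative inventions, not the actual HDFS-1580 API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a generic write-ahead-log contract: edits are written into a
// log segment and become recoverable once the segment is finalized.
public class WalSketch {
    interface EditLogJournal {
        void startLogSegment(long firstTxId);
        void write(long txId, String op);
        void finalizeLogSegment(long firstTxId, long lastTxId);
        List<String> recover();  // replay edits from finalized segments
    }

    /** Trivial in-memory backend, enough to exercise the contract. */
    static class InMemoryJournal implements EditLogJournal {
        private final List<String> committed = new ArrayList<>();
        private final List<String> pending = new ArrayList<>();

        public void startLogSegment(long firstTxId) { pending.clear(); }
        public void write(long txId, String op) { pending.add(txId + ":" + op); }
        public void finalizeLogSegment(long firstTxId, long lastTxId) {
            committed.addAll(pending);  // only finalized segments survive recovery
            pending.clear();
        }
        public List<String> recover() { return new ArrayList<>(committed); }
    }

    static List<String> demo() {
        EditLogJournal journal = new InMemoryJournal();
        journal.startLogSegment(1);
        journal.write(1, "OP_MKDIR /a");
        journal.write(2, "OP_DELETE /b");
        journal.finalizeLogSegment(1, 2);
        return journal.recover();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [1:OP_MKDIR /a, 2:OP_DELETE /b]
    }
}
```

The point of the abstraction is that backends such as local files or BookKeeper can implement the same contract without the NameNode knowing which one is in use.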
[jira] [Commented] (HDFS-2485) Improve code layout and constants in UnderReplicatedBlocks
[ https://issues.apache.org/jira/browse/HDFS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133658#comment-13133658 ] Hudson commented on HDFS-2485: -- Integrated in Hadoop-Mapreduce-trunk-Commit #1154 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1154/]) HDFS-2485 HDFS-2485 stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187888 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187887 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java > Improve code layout and constants in UnderReplicatedBlocks > -- > > Key: HDFS-2485 > URL: https://issues.apache.org/jira/browse/HDFS-2485 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.23.0, 0.24.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.23.0, 0.24.0 > > Attachments: HDFS-2485-improve-underreplicated.patch, > HDFS-2485-improve-underreplicated.patch > > Original Estimate: 0.5h > Time Spent: 1h > Remaining Estimate: 0h > > Before starting HDFS-2472 I want to clean up the code in > UnderReplicatedBlocks slightly > # use constants for all the string levels > # change the {{getUnderReplicatedBlockCount()}} method so that it works even > if the corrupted block list is not the last queue > # improve the javadocs > # add some more curly braces and spaces to follow the style guidelines better > This is a trivial change as behaviour will not change at all. If committed it > will go into trunk and 0.23 so that patches between the two versions are easy > to apply -- This message is automatically generated by JIRA. 
[jira] [Commented] (HDFS-2485) Improve code layout and constants in UnderReplicatedBlocks
[ https://issues.apache.org/jira/browse/HDFS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133657#comment-13133657 ] Hudson commented on HDFS-2485: -- Integrated in Hadoop-Mapreduce-0.23-Commit #42 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/42/]) HDFS-2485 stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187889 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java > Improve code layout and constants in UnderReplicatedBlocks > -- > > Key: HDFS-2485 > URL: https://issues.apache.org/jira/browse/HDFS-2485 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.23.0, 0.24.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.23.0, 0.24.0 > > Attachments: HDFS-2485-improve-underreplicated.patch, > HDFS-2485-improve-underreplicated.patch > > Original Estimate: 0.5h > Time Spent: 1h > Remaining Estimate: 0h > > Before starting HDFS-2472 I want to clean up the code in > UnderReplicatedBlocks slightly > # use constants for all the string levels > # change the {{getUnderReplicatedBlockCount()}} method so that it works even > if the corrupted block list is not the last queue > # improve the javadocs > # add some more curly braces and spaces to follow the style guidelines better > This is a trivial change as behaviour will not change at all. If committed it > will go into trunk and 0.23 so that patches between the two versions are easy > to apply -- This message is automatically generated by JIRA. 
[jira] [Commented] (HDFS-2485) Improve code layout and constants in UnderReplicatedBlocks
[ https://issues.apache.org/jira/browse/HDFS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133653#comment-13133653 ] Hudson commented on HDFS-2485: -- Integrated in Hadoop-Hdfs-0.23-Commit #43 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/43/]) HDFS-2485 stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187889 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java > Improve code layout and constants in UnderReplicatedBlocks > -- > > Key: HDFS-2485 > URL: https://issues.apache.org/jira/browse/HDFS-2485 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.23.0, 0.24.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.23.0, 0.24.0 > > Attachments: HDFS-2485-improve-underreplicated.patch, > HDFS-2485-improve-underreplicated.patch > > Original Estimate: 0.5h > Time Spent: 1h > Remaining Estimate: 0h > > Before starting HDFS-2472 I want to clean up the code in > UnderReplicatedBlocks slightly > # use constants for all the string levels > # change the {{getUnderReplicatedBlockCount()}} method so that it works even > if the corrupted block list is not the last queue > # improve the javadocs > # add some more curly braces and spaces to follow the style guidelines better > This is a trivial change as behaviour will not change at all. If committed it > will go into trunk and 0.23 so that patches between the two versions are easy > to apply -- This message is automatically generated by JIRA. 
[jira] [Commented] (HDFS-2485) Improve code layout and constants in UnderReplicatedBlocks
[ https://issues.apache.org/jira/browse/HDFS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133651#comment-13133651 ] Hudson commented on HDFS-2485: -- Integrated in Hadoop-Common-0.23-Commit #42 (See [https://builds.apache.org/job/Hadoop-Common-0.23-Commit/42/]) HDFS-2485 stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187889 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java > Improve code layout and constants in UnderReplicatedBlocks > -- > > Key: HDFS-2485 > URL: https://issues.apache.org/jira/browse/HDFS-2485 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.23.0, 0.24.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.23.0, 0.24.0 > > Attachments: HDFS-2485-improve-underreplicated.patch, > HDFS-2485-improve-underreplicated.patch > > Original Estimate: 0.5h > Time Spent: 1h > Remaining Estimate: 0h > > Before starting HDFS-2472 I want to clean up the code in > UnderReplicatedBlocks slightly > # use constants for all the string levels > # change the {{getUnderReplicatedBlockCount()}} method so that it works even > if the corrupted block list is not the last queue > # improve the javadocs > # add some more curly braces and spaces to follow the style guidelines better > This is a trivial change as behaviour will not change at all. If committed it > will go into trunk and 0.23 so that patches between the two versions are easy > to apply -- This message is automatically generated by JIRA. 
[jira] [Commented] (HDFS-2485) Improve code layout and constants in UnderReplicatedBlocks
[ https://issues.apache.org/jira/browse/HDFS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133648#comment-13133648 ] Hudson commented on HDFS-2485: -- Integrated in Hadoop-Hdfs-trunk-Commit #1217 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1217/]) HDFS-2485 HDFS-2485 stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187888 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187887 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java > Improve code layout and constants in UnderReplicatedBlocks > -- > > Key: HDFS-2485 > URL: https://issues.apache.org/jira/browse/HDFS-2485 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.23.0, 0.24.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.23.0, 0.24.0 > > Attachments: HDFS-2485-improve-underreplicated.patch, > HDFS-2485-improve-underreplicated.patch > > Original Estimate: 0.5h > Time Spent: 1h > Remaining Estimate: 0h > > Before starting HDFS-2472 I want to clean up the code in > UnderReplicatedBlocks slightly > # use constants for all the string levels > # change the {{getUnderReplicatedBlockCount()}} method so that it works even > if the corrupted block list is not the last queue > # improve the javadocs > # add some more curly braces and spaces to follow the style guidelines better > This is a trivial change as behaviour will not change at all. If committed it > will go into trunk and 0.23 so that patches between the two versions are easy > to apply -- This message is automatically generated by JIRA. 
[jira] [Commented] (HDFS-2485) Improve code layout and constants in UnderReplicatedBlocks
[ https://issues.apache.org/jira/browse/HDFS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133649#comment-13133649 ] Hudson commented on HDFS-2485: -- Integrated in Hadoop-Common-trunk-Commit #1139 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1139/]) HDFS-2485 HDFS-2485 stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187888 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java stevel : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187887 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java > Improve code layout and constants in UnderReplicatedBlocks > -- > > Key: HDFS-2485 > URL: https://issues.apache.org/jira/browse/HDFS-2485 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.23.0, 0.24.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.23.0, 0.24.0 > > Attachments: HDFS-2485-improve-underreplicated.patch, > HDFS-2485-improve-underreplicated.patch > > Original Estimate: 0.5h > Time Spent: 1h > Remaining Estimate: 0h > > Before starting HDFS-2472 I want to clean up the code in > UnderReplicatedBlocks slightly > # use constants for all the string levels > # change the {{getUnderReplicatedBlockCount()}} method so that it works even > if the corrupted block list is not the last queue > # improve the javadocs > # add some more curly braces and spaces to follow the style guidelines better > This is a trivial change as behaviour will not change at all. If committed it > will go into trunk and 0.23 so that patches between the two versions are easy > to apply -- This message is automatically generated by JIRA. 
[jira] [Updated] (HDFS-2485) Improve code layout and constants in UnderReplicatedBlocks
[ https://issues.apache.org/jira/browse/HDFS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HDFS-2485: - Resolution: Fixed Fix Version/s: 0.24.0 0.23.0 Target Version/s: 0.23.0, 0.24.0 (was: 0.24.0, 0.23.0) Status: Resolved (was: Patch Available) > Improve code layout and constants in UnderReplicatedBlocks > -- > > Key: HDFS-2485 > URL: https://issues.apache.org/jira/browse/HDFS-2485 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.23.0, 0.24.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.23.0, 0.24.0 > > Attachments: HDFS-2485-improve-underreplicated.patch, > HDFS-2485-improve-underreplicated.patch > > Original Estimate: 0.5h > Time Spent: 1h > Remaining Estimate: 0h > > Before starting HDFS-2472 I want to clean up the code in > UnderReplicatedBlocks slightly > # use constants for all the string levels > # change the {{getUnderReplicatedBlockCount()}} method so that it works even > if the corrupted block list is not the last queue > # improve the javadocs > # add some more curly braces and spaces to follow the style guidelines better > This is a trivial change as behaviour will not change at all. If committed it > will go into trunk and 0.23 so that patches between the two versions are easy > to apply -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2485) Improve code layout and constants in UnderReplicatedBlocks
[ https://issues.apache.org/jira/browse/HDFS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133645#comment-13133645 ] Steve Loughran commented on HDFS-2485: -- patch applied to trunk and 0.23 > Improve code layout and constants in UnderReplicatedBlocks > -- > > Key: HDFS-2485 > URL: https://issues.apache.org/jira/browse/HDFS-2485 > Project: Hadoop HDFS > Issue Type: Improvement > Components: data-node >Affects Versions: 0.23.0, 0.24.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.23.0, 0.24.0 > > Attachments: HDFS-2485-improve-underreplicated.patch, > HDFS-2485-improve-underreplicated.patch > > Original Estimate: 0.5h > Time Spent: 1h > Remaining Estimate: 0h > > Before starting HDFS-2472 I want to clean up the code in > UnderReplicatedBlocks slightly > # use constants for all the string levels > # change the {{getUnderReplicatedBlockCount()}} method so that it works even > if the corrupted block list is not the last queue > # improve the javadocs > # add some more curly braces and spaces to follow the style guidelines better > This is a trivial change as behaviour will not change at all. If committed it > will go into trunk and 0.23 so that patches between the two versions are easy > to apply -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
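Two of the HDFS-2485 cleanup points — named constants for the priority levels, and a `getUnderReplicatedBlockCount()` that works even if the corrupt-block queue is not the last queue — can be sketched like this. The constant names and queue layout here are illustrative, not the real class:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: blocks are bucketed into priority queues; the under-replicated
// count skips the corrupt queue by its named index, not by assuming the
// corrupt queue comes last.
public class UnderReplicatedSketch {
    static final int QUEUE_HIGHEST_PRIORITY = 0;
    static final int QUEUE_VERY_UNDER_REPLICATED = 1;
    static final int QUEUE_UNDER_REPLICATED = 2;
    static final int QUEUE_WITH_CORRUPT_BLOCKS = 3;
    static final int LEVEL = 4;

    final List<Set<String>> queues = new ArrayList<>();

    UnderReplicatedSketch() {
        for (int i = 0; i < LEVEL; i++) {
            queues.add(new HashSet<>());
        }
    }

    void add(String block, int priority) { queues.get(priority).add(block); }

    /** Count under-replicated blocks, excluding the corrupt queue by name. */
    int getUnderReplicatedBlockCount() {
        int size = 0;
        for (int i = 0; i < LEVEL; i++) {
            if (i != QUEUE_WITH_CORRUPT_BLOCKS) {
                size += queues.get(i).size();
            }
        }
        return size;
    }

    static int demoCount() {
        UnderReplicatedSketch q = new UnderReplicatedSketch();
        q.add("blk_1", QUEUE_HIGHEST_PRIORITY);
        q.add("blk_2", QUEUE_UNDER_REPLICATED);
        q.add("blk_3", QUEUE_WITH_CORRUPT_BLOCKS); // excluded from the count
        return q.getUnderReplicatedBlockCount();
    }

    public static void main(String[] args) {
        System.out.println(demoCount()); // prints 2
    }
}
```

Skipping by named index rather than by position is what makes the count robust to reordering the queues later, which is the stated motivation for the cleanup.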
[jira] [Commented] (HDFS-2491) TestBalancer can fail when datanode utilization and avgUtilization is exactly same.
[ https://issues.apache.org/jira/browse/HDFS-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133635#comment-13133635 ] Hudson commented on HDFS-2491: -- Integrated in Hadoop-Mapreduce-trunk #869 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/869/]) HDFS-2491. TestBalancer can fail when datanode utilization and avgUtilization is exactly same. Contributed by Uma Maheswara Rao G. shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187837 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java > TestBalancer can fail when datanode utilization and avgUtilization is exactly > same. > --- > > Key: HDFS-2491 > URL: https://issues.apache.org/jira/browse/HDFS-2491 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.22.0, 0.23.0, 0.24.0 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > Fix For: 0.22.0, 0.24.0 > > Attachments: HDFS-2492-22Branch.patch, HDFS-2492.patch > > > Stack Trace: > junit.framework.AssertionFailedError: 127.0.0.1:60986is not an underUtilized > node: utilization=22.0 avgUtilization=22.0 threshold=10.0 > at > org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1014) > at > org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953) > at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1502) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.twoNodeTest(TestBalancer.java:312) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR2_4_39j3j5b10ou(TestBalancer.java:328) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324) -- This 
[jira] [Commented] (HDFS-2491) TestBalancer can fail when datanode utilization and avgUtilization is exactly same.
[ https://issues.apache.org/jira/browse/HDFS-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13133629#comment-13133629 ] Hudson commented on HDFS-2491: -- Integrated in Hadoop-Hdfs-22-branch #102 (See [https://builds.apache.org/job/Hadoop-Hdfs-22-branch/102/]) HDFS-2491. TestBalancer can fail when datanode utilization and avgUtilization is exactly same. Contributed by Uma Maheswara Rao G. shv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1187838 Files : * /hadoop/common/branches/branch-0.22/hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.22/hdfs/src/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java > TestBalancer can fail when datanode utilization and avgUtilization is exactly > same. > --- > > Key: HDFS-2491 > URL: https://issues.apache.org/jira/browse/HDFS-2491 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.22.0, 0.23.0, 0.24.0 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > Fix For: 0.22.0, 0.24.0 > > Attachments: HDFS-2492-22Branch.patch, HDFS-2492.patch > > > Stack Trace: > junit.framework.AssertionFailedError: 127.0.0.1:60986is not an underUtilized > node: utilization=22.0 avgUtilization=22.0 threshold=10.0 > at > org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1014) > at > org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953) > at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1502) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.twoNodeTest(TestBalancer.java:312) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR2_4_39j3j5b10ou(TestBalancer.java:328) > at > org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324) -- This message is automatically 
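The HDFS-2491 assertion fails precisely when `utilization == avgUtilization`: with strict inequalities, such a node is neither over- nor under-utilized, so a test that expects it in the under-utilized set is flaky at the boundary. A sketch of that boundary (the predicate shapes are illustrative, not the exact Balancer code):

```java
// Sketch of the balancer classification boundary: with utilization exactly
// equal to avgUtilization, strict comparisons put the node in neither set.
public class BalancerBoundarySketch {
    static boolean isUnderUtilized(double utilization, double avg, double threshold) {
        return avg - utilization > threshold;
    }

    static boolean isOverUtilized(double utilization, double avg, double threshold) {
        return utilization - avg > threshold;
    }

    public static void main(String[] args) {
        // The failing case from the stack trace:
        // utilization = 22.0, avgUtilization = 22.0, threshold = 10.0
        System.out.println(isUnderUtilized(22.0, 22.0, 10.0)); // prints false
        System.out.println(isOverUtilized(22.0, 22.0, 10.0));  // prints false
    }
}
```

A fix along the lines the issue suggests would make the boundary comparison inclusive (or adjust the test), so the equal-utilization case is classified deterministically.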