[jira] [Updated] (HDFS-12570) [SPS]: Refactor Co-ordinator datanode logic to track the block storage movements
[ https://issues.apache.org/jira/browse/HDFS-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-12570: Attachment: HDFS-12570-HDFS-10285-03.patch > [SPS]: Refactor Co-ordinator datanode logic to track the block storage > movements > > > Key: HDFS-12570 > URL: https://issues.apache.org/jira/browse/HDFS-12570 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-12570-HDFS-10285-00.patch, > HDFS-12570-HDFS-10285-01.patch, HDFS-12570-HDFS-10285-02.patch, > HDFS-12570-HDFS-10285-03.patch > > > This task is to refactor the C-DN block storage movements. Basically, the > idea is to move the scheduling and tracking logic to Namenode rather than at > the special C-DN. Please refer the discussion with [~andrew.wang] to > understand the [background and the necessity of > refactoring|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16141060=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16141060]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12642) Log block and datanode details in BlockRecoveryWorker
[ https://issues.apache.org/jira/browse/HDFS-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12642: - Attachment: HDFS-12642.01.patch Patch 1 to add the logs. The added overhead / line counts of logging should be ignorable because recovery does not happen as often as other operations on the DN. Here is an example of what it looks like from a unit test run: {noformat} 2017-10-11 14:18:00,803 INFO datanode.DataNode (BlockRecoveryWorker.java:logRecoverBlock(324)) - BlockRecoveryWorker:NameNode at localhost/127.0.0.1:54426 calls recoverBlock(BP-50205560-10.0.0.51-1507756664863:blk_1073741825_1007, targets=[DatanodeInfoWithStorage[127.0.0.1:54486,null,null], DatanodeInfoWithStorage[127.0.0.1:54442,null,null], DatanodeInfoWithStorage[127.0.0.1:54433,null,null]], newGenerationStamp=1008) 2017-10-11 14:18:01,149 INFO datanode.DataNode (BlockRecoveryWorker.java:syncBlock(184)) - BlockRecoveryWorker: block=BP-50205560-10.0.0.51-1507756664863:blk_1073741825_1007, (length=100), syncList=[block:blk_1073741825_1007[numBytes=100,originalReplicaState=RBW] node:DatanodeInfoWithStorage[127.0.0.1:54486,null,null], block:blk_1073741825_1007[numBytes=100,originalReplicaState=RBW] node:DatanodeInfoWithStorage[127.0.0.1:54442,null,null], block:blk_1073741825_1007[numBytes=100,originalReplicaState=RBW] node:DatanodeInfoWithStorage[127.0.0.1:54433,null,null]] 2017-10-11 14:18:01,150 INFO datanode.DataNode (BlockRecoveryWorker.java:syncBlock(271)) - BlockRecoveryWorker: block=BP-50205560-10.0.0.51-1507756664863:blk_1073741825_1007, (length=100), participatingList=[block:blk_1073741825_1007[numBytes=100,originalReplicaState=RBW] node:DatanodeInfoWithStorage[127.0.0.1:54486,null,null], block:blk_1073741825_1007[numBytes=100,originalReplicaState=RBW] node:DatanodeInfoWithStorage[127.0.0.1:54442,null,null], block:blk_1073741825_1007[numBytes=100,originalReplicaState=RBW] node:DatanodeInfoWithStorage[127.0.0.1:54433,null,null]] {noformat} > Log 
block and datanode details in BlockRecoveryWorker > - > > Key: HDFS-12642 > URL: https://issues.apache.org/jira/browse/HDFS-12642 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-12642.01.patch > > > In a recent investigation, we have seen a weird block recovery issue, which > is difficult to reach to a conclusion because of insufficient logs. > For the most critical part of the events, we see block recovery failed to > {{commitBlockSynchronization]} on the NN, due to the block not closed. This > leaves the file as open forever (for 1+ months). > The reason the block was not closed on NN, was because it is configured with > {{dfs.namenode.replication.min}} =2, and only 1 replica was with the latest > genstamp. > We were not able to tell why only 1 replica is on latest genstamp. > From the primary node of the recovery (ps2204), {{initReplicaRecoveryImpl}} > was called on each of the 7 DNs the block were ever placed. All DNs but > ps2204 and ps3765 failed because of genstamp comparison - that's expected. > ps2204 and ps3765 have gone past the comparison (since no exceptions from > their logs), but {{updateReplicaUnderRecovery}} only appeared to be called on > ps3765. > This jira is to propose we log more details when {{BlockRecoveryWorker}} is > about to call {{updateReplicaUnderRecovery}} on the DataNodes, so this could > be figured out in the future. 
> {noformat} > $ grep "updateReplica:" ps2204.dn.log > $ grep "updateReplica:" ps3765.dn.log > hadoop-hdfs-datanode-ps3765.log.2:{"@timestamp":"2017-09-13T00:56:20.933Z","source_host":"ps3765.example.com","file":"FsDatasetImpl.java","method":"updateReplicaUnderRecovery","level":"INFO","line_number":"2512","thread_name":"IPC > Server handler 6 on > 50020","@version":1,"logger_name":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","message":"updateReplica: > BP-550436645-17.142.147.13-1438988035284:blk_2172795728_1106150312, > recoveryId=1107074793, length=65024, replica=ReplicaUnderRecovery, > blk_2172795728_1106150312, RUR > $ grep "initReplicaRecovery:" ps2204.dn.log > hadoop-hdfs-datanode-ps2204.log.1:{"@timestamp":"2017-09-13T00:56:20.691Z","source_host":"ps2204.example.com","file":"FsDatasetImpl.java","method":"initReplicaRecoveryImpl","level":"INFO","line_number":"2441","thread_name":"org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@5ae3cb26","@version":1,"logger_name":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","message":"initReplicaRecovery: > blk_2172795728_1106150312, recoveryId=1107074793, > replica=ReplicaWaitingToBeRecovered, blk_2172795728_1106150312, RWR >
[jira] [Created] (HDFS-12642) Log block and datanode details in BlockRecoveryWorker
Xiao Chen created HDFS-12642: Summary: Log block and datanode details in BlockRecoveryWorker Key: HDFS-12642 URL: https://issues.apache.org/jira/browse/HDFS-12642 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Reporter: Xiao Chen Assignee: Xiao Chen In a recent investigation, we have seen a weird block recovery issue, which is difficult to reach to a conclusion because of insufficient logs. For the most critical part of the events, we see block recovery failed to {{commitBlockSynchronization]} on the NN, due to the block not closed. This leaves the file as open forever (for 1+ months). The reason the block was not closed on NN, was because it is configured with {{dfs.namenode.replication.min}} =2, and only 1 replica was with the latest genstamp. We were not able to tell why only 1 replica is on latest genstamp. >From the primary node of the recovery (ps2204), {{initReplicaRecoveryImpl}} >was called on each of the 7 DNs the block were ever placed. All DNs but ps2204 >and ps3765 failed because of genstamp comparison - that's expected. ps2204 and >ps3765 have gone past the comparison (since no exceptions from their logs), >but {{updateReplicaUnderRecovery}} only appeared to be called on ps3765. This jira is to propose we log more details when {{BlockRecoveryWorker}} is about to call {{updateReplicaUnderRecovery}} on the DataNodes, so this could be figured out in the future. 
{noformat} $ grep "updateReplica:" ps2204.dn.log $ grep "updateReplica:" ps3765.dn.log hadoop-hdfs-datanode-ps3765.log.2:{"@timestamp":"2017-09-13T00:56:20.933Z","source_host":"ps3765.example.com","file":"FsDatasetImpl.java","method":"updateReplicaUnderRecovery","level":"INFO","line_number":"2512","thread_name":"IPC Server handler 6 on 50020","@version":1,"logger_name":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","message":"updateReplica: BP-550436645-17.142.147.13-1438988035284:blk_2172795728_1106150312, recoveryId=1107074793, length=65024, replica=ReplicaUnderRecovery, blk_2172795728_1106150312, RUR $ grep "initReplicaRecovery:" ps2204.dn.log hadoop-hdfs-datanode-ps2204.log.1:{"@timestamp":"2017-09-13T00:56:20.691Z","source_host":"ps2204.example.com","file":"FsDatasetImpl.java","method":"initReplicaRecoveryImpl","level":"INFO","line_number":"2441","thread_name":"org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@5ae3cb26","@version":1,"logger_name":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","message":"initReplicaRecovery: blk_2172795728_1106150312, recoveryId=1107074793, replica=ReplicaWaitingToBeRecovered, blk_2172795728_1106150312, RWR hadoop-hdfs-datanode-ps2204.log.1:{"@timestamp":"2017-09-13T00:56:20.691Z","source_host":"ps2204.example.com","file":"FsDatasetImpl.java","method":"initReplicaRecoveryImpl","level":"INFO","line_number":"2497","thread_name":"org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@5ae3cb26","@version":1,"logger_name":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","message":"initReplicaRecovery: changing replica state for blk_2172795728_1106150312 from RWR to RUR","class":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","mdc":{}} $ grep "initReplicaRecovery:" ps3765.dn.log 
hadoop-hdfs-datanode-ps3765.log.2:{"@timestamp":"2017-09-13T00:56:20.457Z","source_host":"ps3765.example.com","file":"FsDatasetImpl.java","method":"initReplicaRecoveryImpl","level":"INFO","line_number":"2441","thread_name":"IPC Server handler 5 on 50020","@version":1,"logger_name":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","message":"initReplicaRecovery: blk_2172795728_1106150312, recoveryId=1107074793, replica=ReplicaBeingWritten, blk_2172795728_1106150312, RBW hadoop-hdfs-datanode-ps3765.log.2:{"@timestamp":"2017-09-13T00:56:20.457Z","source_host":"ps3765.example.com","file":"FsDatasetImpl.java","method":"initReplicaRecoveryImpl","level":"INFO","line_number":"2441","thread_name":"IPC Server handler 5 on 50020","@version":1,"logger_name":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","message":"initReplicaRecovery: blk_2172795728_1106150312, recoveryId=1107074793, replica=ReplicaBeingWritten, blk_2172795728_1106150312, RBW hadoop-hdfs-datanode-ps3765.log.2:{"@timestamp":"2017-09-13T00:56:20.457Z","source_host":"ps3765.example.com","file":"FsDatasetImpl.java","method":"initReplicaRecoveryImpl","level":"INFO","line_number":"2497","thread_name":"IPC Server handler 5 on 50020","@version":1,"logger_name":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","message":"initReplicaRecovery: changing replica state for blk_2172795728_1106150312 from RBW to RUR","class":"org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl","mdc":{}} {noformat} P.S. HDFS-11499
[jira] [Commented] (HDFS-12415) Ozone: TestXceiverClientManager and TestAllocateContainer occasionally fails
[ https://issues.apache.org/jira/browse/HDFS-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201426#comment-16201426 ] Hadoop QA commented on HDFS-12415: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 32s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 2s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}153m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | Timed out junit tests | org.apache.hadoop.ozone.tools.TestCorona | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12415 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891610/HDFS-12415-HDFS-7240.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e9c612fa5fb4 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 034f01a | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21662/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21662/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201413#comment-16201413 ] Jiandan Yang commented on HDFS-12638: -- We found missing blockId by metasave, and did fsck -blockId, NN also throw NPE, and the inode to which the block blongs was truncated after created. create log: {code:java} hadoop-hadoop-namenode-**.log.9:2017-10-09 19:19:16,370 INFO [IPC Server handler 902 on 8020] org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1084203820_11907141, replicas=11.251.153.26:50010, 11.251.153.29:50010, 11.227.70.75:50010 for /user/admin/xxx {code} because auditlog was overrided, we can not found operation about this file fsck -blockId log: {code:java} 2017-10-12 11:22:03,929 WARN [502920422@qtp-1473771722-3789] org.apache.hadoop.hdfs.server.namenode.NameNode: Error in looking up block java.lang.NullPointerException at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.blockIdCK(NamenodeFsck.java:259) at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.fsck(NamenodeFsck.java:323) at org.apache.hadoop.hdfs.server.namenode.FsckServlet$1.run(FsckServlet.java:69) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804) at org.apache.hadoop.hdfs.server.namenode.FsckServlet.doGet(FsckServlet.java:58) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221) at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1351) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) {code} > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang > > Active NamNode exit due to NPE, I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null, By view history I found > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] remove judging > whether BlockCollection is 
null. > NN logs are as following: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at >
[jira] [Comment Edited] (HDFS-12570) [SPS]: Refactor Co-ordinator datanode logic to track the block storage movements
[ https://issues.apache.org/jira/browse/HDFS-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198546#comment-16198546 ] Uma Maheswara Rao G edited comment on HDFS-12570 at 10/12/17 4:03 AM: -- [~rakeshr] Thank you for updating the patch. Patch mostly looks good to me, except the below comments * . {code} public static final String DFS_STORAGE_POLICY_SATISFIER_SHARE_EQUAL_REPLICA_MAX_STREAMS_KEY = + "dfs.storage.policy.satisfier.share.equal.replication.max-streams"; + public static final boolean DFS_STORAGE_POLICY_SATISFIER_SHARE_EQUAL_REPLICA_MAX_STREAMS_DEFAULT = + false; {code} I am thinking this config can be more meaningful. How about something like, "dfs.storage.policy.satisfier.low.max-streams.preference”. This can be true by default. Any other better names are most welcomed. * finshedBlksIter — finishedBlksIter * We are removing the state: status = BlocksMovingAnalysisStatus.FEW_BLOCKS_TARGETS_PAIRED; how are we covering the case when we can’t find any targets at all even though there are movement needed blocks? What is the status for this? * . {code} // 'BlockCollectionId' is used as the tracking ID. All the blocks under +// this +// blockCollectionID will be added to this datanode. +// TODO: assign to each target node {code} Please remove this comment or update. * . {code} +((DatanodeDescriptor) blkMovingInfo.getTarget()) +.addBlocksToMoveStorage(blkMovingInfo); {code} Could we also increment scheduled block count for this node? * BlockStorageMovementCommand: We removed tracked, but class java doc still representing old behavior. * Documentation needs update. was (Author: umamaheswararao): [~rakeshr] Thank you for updating the patch. Patch mostly looks good to me, except the below comments * . 
{code} public static final String DFS_STORAGE_POLICY_SATISFIER_SHARE_EQUAL_REPLICA_MAX_STREAMS_KEY = + "dfs.storage.policy.satisfier.share.equal.replication.max-streams"; + public static final boolean DFS_STORAGE_POLICY_SATISFIER_SHARE_EQUAL_REPLICA_MAX_STREAMS_DEFAULT = + false; {code} I am thinking this config can be more meaningful. How about something like, "dfs.storage.policy.satisfier.low.max-streams.preference”. This can be true by default. Any other better names are most welcomed. * finshedBlksIter — finihedBlksIter * We are removing the state: status = BlocksMovingAnalysisStatus.FEW_BLOCKS_TARGETS_PAIRED; how are we covering the case when we can’t find any targets at all even though there are movement needed blocks? What is the status for this? * . {code} // 'BlockCollectionId' is used as the tracking ID. All the blocks under +// this +// blockCollectionID will be added to this datanode. +// TODO: assign to each target node {code} Please remove this comment or update. * . {code} +((DatanodeDescriptor) blkMovingInfo.getTarget()) +.addBlocksToMoveStorage(blkMovingInfo); {code} Could we also increment scheduled block count for this node? * BlockStorageMovementCommand: We removed tracked, but class java doc still representing old behavior. > [SPS]: Refactor Co-ordinator datanode logic to track the block storage > movements > > > Key: HDFS-12570 > URL: https://issues.apache.org/jira/browse/HDFS-12570 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-12570-HDFS-10285-00.patch, > HDFS-12570-HDFS-10285-01.patch, HDFS-12570-HDFS-10285-02.patch > > > This task is to refactor the C-DN block storage movements. Basically, the > idea is to move the scheduling and tracking logic to Namenode rather than at > the special C-DN. 
Please refer the discussion with [~andrew.wang] to > understand the [background and the necessity of > refactoring|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16141060=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16141060]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12613) Native EC coder should implement release() as idempotent function.
[ https://issues.apache.org/jira/browse/HDFS-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201395#comment-16201395 ] SammiChen commented on HDFS-12613: -- Hi [~eddyxu], agree. Check NULL pointer in native code is a must-have. Check NULL pointer at JAVA level is a nice-to-have to avoid one JNI call. > Native EC coder should implement release() as idempotent function. > -- > > Key: HDFS-12613 > URL: https://issues.apache.org/jira/browse/HDFS-12613 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > Attachments: HDFS-12613.00.patch, HDFS-12613.01.patch, > HDFS-12613.02.patch > > > Recently, we found native EC coder crashes JVM because > {{NativeRSDecoder#release()}} being called multiple times (HDFS-12612 and > HDFS-12606). > We should strength the implement the native code to make {{release()}} > idempotent as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol
[ https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201359#comment-16201359 ] Hadoop QA commented on HDFS-12549: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 14 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 6s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 11s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 33s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 12s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}214m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 42s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}341m 14s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Return value of org.apache.hadoop.ozone.web.exceptions.ErrorTable.newError(OzoneException, UserArgs) ignored, but method has no side effect At Simple.java:ignored, but method has no side effect At Simple.java:[line 115] | | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | |
[jira] [Commented] (HDFS-12613) Native EC coder should implement release() as idempotent function.
[ https://issues.apache.org/jira/browse/HDFS-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201349#comment-16201349 ] Hadoop QA commented on HDFS-12613: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 45s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 45s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 45s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 25s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 0m 30s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 23s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}149m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.fs.TestHdfsNativeCodeLoader | | | hadoop.hdfs.server.namenode.TestFSImage | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:3d04c00 | | JIRA Issue | HDFS-12613 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891589/HDFS-12613.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux b15efd641364 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git
[jira] [Comment Edited] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201343#comment-16201343 ] Jiandan Yang edited comment on HDFS-12638 at 10/12/17 2:21 AM: [~daryn] There is no snapshot directory in the cluster, and we could not find the block info in the log, but there are logs of truncate commands in the audit log. The active NN WebUI shows 2000+ missing blocks, but the fsck result does not include missing replicas. Also, the crashed NN successfully became standby after a restart. was (Author: yangjiandan): [~daryn] There is no snapshot directory in cluster, and we could not find block info in the log, and there are logs of truncate cmd in auditlog. In active NN WebUI there are 2000+ missing blocks, but fsck result do not include missing replicas. > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang > > The active NameNode exits due to an NPE. I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null. Looking through the history, I found that > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check for > whether BlockCollection is null. > NN logs are as follows: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. 
> java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
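The NPE above comes from dereferencing a null BlockCollection inside chooseTargets. The sketch below is a hypothetical, simplified stand-in for that code path (the class and method names mirror ReplicationWork/BlockCollection but are not the actual Hadoop classes); it illustrates the defensive check that HDFS-9754 removed: skip scheduling work for a block whose owning file is gone instead of letting the exception kill the ReplicationMonitor thread.

```java
// Simplified, hypothetical stand-in for the ReplicationWork#chooseTargets path.
public class ReplicationWorkSketch {
    /** Stand-in for BlockCollection; may be null if the file was deleted or truncated. */
    static final class BlockCollection {}

    static boolean chooseTargets(BlockCollection bc) {
        if (bc == null) {
            // Block no longer belongs to a live file: skip it rather than throw an NPE.
            return false;
        }
        // ... the real code would ask the placement policy for target datanodes ...
        return true;
    }

    public static void main(String[] args) {
        System.out.println(chooseTargets(null));                  // false: skipped safely
        System.out.println(chooseTargets(new BlockCollection())); // true: scheduled
    }
}
```

Whether skipping silently is the right fix (versus fixing the race that produced the null) is exactly the open question in this thread.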
[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201343#comment-16201343 ] Jiandan Yang commented on HDFS-12638: -- [~daryn] There is no snapshot directory in the cluster, and we could not find the block info in the log, but there are logs of truncate commands in the audit log. The active NN WebUI shows 2000+ missing blocks, but the fsck result does not include missing replicas. > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang > > The active NameNode exits due to an NPE. I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null. Looking through the history, I found that > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check for > whether BlockCollection is null. > NN logs are as follows: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. 
> java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12637) Extend TestDistributedFileSystemWithECFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-12637: Status: Patch Available (was: Open) > Extend TestDistributedFileSystemWithECFile with a random EC policy > -- > > Key: HDFS-12637 > URL: https://issues.apache.org/jira/browse/HDFS-12637 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-12637.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12637) Extend TestDistributedFileSystemWithECFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-12637: Attachment: HDFS-12637.1.patch Uploaded the first patch. The new test class with a random EC policy extends {{TestDistributedFileSystemWithECFile}} with minimal changes. I verified that all EC policies pass the tests on my local machine. > Extend TestDistributedFileSystemWithECFile with a random EC policy > -- > > Key: HDFS-12637 > URL: https://issues.apache.org/jira/browse/HDFS-12637 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-12637.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
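The "random EC policy" idea described above can be sketched as below. The policy names and the picker class are illustrative assumptions, not the actual patch code; the real test presumably subclasses {{TestDistributedFileSystemWithECFile}} and only overrides which policy the mini-cluster starts with. Seeding the random choice keeps a failing run reproducible from the logged seed.

```java
// Hypothetical sketch of picking one EC policy at random for a test subclass.
import java.util.List;
import java.util.Random;

public class RandomEcPolicyPicker {
    static String pickPolicy(List<String> policies, long seed) {
        // A fixed seed makes the choice deterministic, so a failure can be replayed.
        return policies.get(new Random(seed).nextInt(policies.size()));
    }

    public static void main(String[] args) {
        // Illustrative policy names only.
        List<String> policies = List.of("RS-3-2-1024k", "RS-6-3-1024k", "XOR-2-1-1024k");
        String chosen = pickPolicy(policies, 42L);
        System.out.println("testing with policy: " + chosen);
    }
}
```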
[jira] [Updated] (HDFS-12415) Ozone: TestXceiverClientManager and TestAllocateContainer occasionally fails
[ https://issues.apache.org/jira/browse/HDFS-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12415: - Attachment: HDFS-12415-HDFS-7240.005.patch patch v5 fixes the checkstyle issue. > Ozone: TestXceiverClientManager and TestAllocateContainer occasionally fails > > > Key: HDFS-12415 > URL: https://issues.apache.org/jira/browse/HDFS-12415 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-12415-HDFS-7240.001.patch, > HDFS-12415-HDFS-7240.002.patch, HDFS-12415-HDFS-7240.003.patch, > HDFS-12415-HDFS-7240.004.patch, HDFS-12415-HDFS-7240.005.patch > > > TestXceiverClientManager seems to be occasionally failing in some jenkins > jobs, > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.ozone.scm.node.SCMNodeManager.getNodeStat(SCMNodeManager.java:828) > at > org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.hasEnoughSpace(SCMCommonPolicy.java:147) > at > org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.lambda$chooseDatanodes$0(SCMCommonPolicy.java:125) > {noformat} > see more from [this > report|https://builds.apache.org/job/PreCommit-HDFS-Build/21065/testReport/] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8538) Change the default volume choosing policy to AvailableSpaceVolumeChoosingPolicy
[ https://issues.apache.org/jira/browse/HDFS-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201326#comment-16201326 ] Hadoop QA commented on HDFS-8538: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 82 unchanged - 0 fixed = 83 total (was 82) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 10s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 37s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}148m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.server.namenode.TestSecurityTokenEditLog | | | hadoop.hdfs.server.datanode.TestIncrementalBlockReports | | | hadoop.fs.TestFcHdfsSetUMask | | | hadoop.hdfs.server.namenode.TestNamenodeRetryCache | | | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages | | | hadoop.fs.TestSymlinkHdfsFileContext | | | hadoop.hdfs.server.namenode.TestNameNodeRpcServerMethods | | | hadoop.hdfs.TestDatanodeDeath | | | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes | | | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | hadoop.hdfs.server.datanode.TestDataNodeLifeline | | | hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks | | | hadoop.hdfs.TestReadWhileWriting | | | hadoop.fs.TestFcHdfsPermission | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation | | |
[jira] [Commented] (HDFS-12620) Backporting HDFS-10467 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201322#comment-16201322 ] Hadoop QA commented on HDFS-12620: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 25 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 39s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 2 new + 372 unchanged - 0 fixed = 374 total (was 372) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 11 new + 624 unchanged - 0 fixed = 635 total (was 624) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}517m 31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 10m 39s{color} | {color:red} The patch generated 152 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}547m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestMetadataVersionOutput | | | hadoop.hdfs.server.namenode.TestAddBlock | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | Timed out junit tests | org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend | | | org.apache.hadoop.hdfs.TestLeaseRecovery2 | | | org.apache.hadoop.hdfs.server.namenode.TestStartup | | | org.apache.hadoop.hdfs.TestEncryptionZonesWithHA | | | org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead | | | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement | | | org.apache.hadoop.hdfs.server.namenode.TestAllowFormat | | | org.apache.hadoop.hdfs.TestHdfsAdmin | | | org.apache.hadoop.hdfs.TestFileCreationEmpty | | | org.apache.hadoop.hdfs.TestFileCreationClient | | | org.apache.hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade | | | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache
[ https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201240#comment-16201240 ] Hadoop QA commented on HDFS-11885: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-11885 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11885 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880136/HDFS-11885.004.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21661/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > createEncryptionZone should not block on initializing EDEK cache > > > Key: HDFS-11885 > URL: https://issues.apache.org/jira/browse/HDFS-11885 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, > HDFS-11885.003.patch, HDFS-11885.004.patch > > > When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which > calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, > which attempts to fill the key cache up to the low watermark. > If the KMS is down or slow, this can take a very long time, and cause the > createZone RPC to fail with a timeout. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
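The issue description above implies the fix direction: run the EDEK cache warm-up on a background thread so the createZone RPC returns even when the KMS is slow or down. The sketch below is a minimal illustration of that pattern only; the names are assumptions and this is not the actual KeyProvider or FSNamesystem API.

```java
// Minimal sketch: submit the cache warm-up asynchronously instead of blocking the RPC.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class EdekWarmUpSketch {
    static Future<?> warmUpAsync(ExecutorService pool, Runnable warmUp) {
        // createEncryptionZone would submit and return immediately; a slow KMS
        // delays only cache population, not the RPC itself.
        return pool.submit(warmUp);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        AtomicBoolean warmed = new AtomicBoolean(false);
        Future<?> f = warmUpAsync(pool, () -> warmed.set(true));
        f.get(5, TimeUnit.SECONDS); // waiting here is for the demo only
        System.out.println(warmed.get());
        pool.shutdown();
    }
}
```

One design consequence worth noting: if the warm-up fails in the background, the first generateEncryptedKey call will simply pay the KMS round-trip cost instead.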
[jira] [Comment Edited] (HDFS-12613) Native EC coder should implement release() as idempotent function.
[ https://issues.apache.org/jira/browse/HDFS-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201214#comment-16201214 ] Lei (Eddy) Xu edited comment on HDFS-12613 at 10/11/17 11:47 PM: - Thanks for the reviews, [~drankye] and [~Sammi] bq. note "NativeRSRawDecoder" should be "NativeRSRawEncoder". Done bq. Apart from add synchronized on release, performEncodeImpl and performDecodeImpl can also have the synchronized keyword Done. bq. If it's already null, we don't need to call the native code through JNI. Good point. If we go this route, we should {{throw IOException}} in Java to notify the client that the coder is closed. I prefer doing it in the JNI, since the logic of setting it to NULL lives in the JNI, but I am fine either way. What do you think? This patch also adds {{IOException}} to the {{encode}} / {{decode}} signatures. was (Author: eddyxu): Thanks for the reviews, [~drankye] and [~Sammi] bq. note "NativeRSRawDecoder" should be "NativeRSRawEncoder". Done bq. Apart from add synchronized on release, performEncodeImpl and performDecodeImpl can also have the synchronized keyword Done. bq. If it's already null, we don't need to call the native code through JNI. Good point. If we go this route, we should {{throw IOException}} in java, to notify the client that the coder is closed. I prefer does it in the JNI as the logic of setting it to NULL is in JNI. But I am fine either way. What do you think. > Native EC coder should implement release() as idempotent function. > -- > > Key: HDFS-12613 > URL: https://issues.apache.org/jira/browse/HDFS-12613 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > Attachments: HDFS-12613.00.patch, HDFS-12613.01.patch, > HDFS-12613.02.patch > > > Recently, we found that the native EC coder crashes the JVM when > {{NativeRSDecoder#release()}} is called multiple times (HDFS-12612 and > HDFS-12606). 
> We should strengthen the native code implementation to make {{release()}} > idempotent as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache
[ https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201216#comment-16201216 ] Subru Krishnan commented on HDFS-11885: --- Pushing it out from 2.9.0 based on [~shahrs87]'s comment. Feel free to revert if required. > createEncryptionZone should not block on initializing EDEK cache > > > Key: HDFS-11885 > URL: https://issues.apache.org/jira/browse/HDFS-11885 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, > HDFS-11885.003.patch, HDFS-11885.004.patch > > > When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which > calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, > which attempts to fill the key cache up to the low watermark. > If the KMS is down or slow, this can take a very long time, and cause the > createZone RPC to fail with a timeout. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache
[ https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-11885: -- Target Version/s: 2.8.3, 3.0.0, 2.9.1 (was: 2.9.0, 2.8.3, 3.0.0) > createEncryptionZone should not block on initializing EDEK cache > > > Key: HDFS-11885 > URL: https://issues.apache.org/jira/browse/HDFS-11885 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, > HDFS-11885.003.patch, HDFS-11885.004.patch > > > When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which > calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, > which attempts to fill the key cache up to the low watermark. > If the KMS is down or slow, this can take a very long time, and cause the > createZone RPC to fail with a timeout. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12613) Native EC coder should implement release() as idempotent function.
[ https://issues.apache.org/jira/browse/HDFS-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12613: - Attachment: HDFS-12613.02.patch Thanks for the reviews, [~drankye] and [~Sammi] bq. note "NativeRSRawDecoder" should be "NativeRSRawEncoder". Done bq. Apart from add synchronized on release, performEncodeImpl and performDecodeImpl can also have the synchronized keyword Done. bq. If it's already null, we don't need to call the native code through JNI. Good point. If we go this route, we should {{throw IOException}} in Java to notify the client that the coder is closed. I prefer doing it in the JNI, since the logic of setting it to NULL lives in the JNI, but I am fine either way. What do you think? > Native EC coder should implement release() as idempotent function. > -- > > Key: HDFS-12613 > URL: https://issues.apache.org/jira/browse/HDFS-12613 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > Attachments: HDFS-12613.00.patch, HDFS-12613.01.patch, > HDFS-12613.02.patch > > > Recently, we found that the native EC coder crashes the JVM when > {{NativeRSDecoder#release()}} is called multiple times (HDFS-12612 and > HDFS-12606). > We should strengthen the native code implementation to make {{release()}} > idempotent as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
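The contract being discussed above can be sketched in a few lines. This is a Java-only illustration, not the actual patch: the real fix guards the native context pointer on the JNI side, and the field and method names below are simplified stand-ins. It shows the two review points together: {{release()}} is synchronized and a no-op after the first call, and encode/decode throw {{IOException}} once the coder is released.

```java
// Hypothetical sketch of an idempotent release() for a native-backed coder.
import java.io.IOException;

public class NativeCoderSketch {
    private long nativeHandle = 0xCAFEL; // stand-in for the JNI context pointer; 0 = released
    int releaseCount = 0;                // counts how many times resources were actually freed

    public synchronized void release() {
        if (nativeHandle == 0) {
            return;                      // second and later calls are safe no-ops
        }
        // ... free native buffers via JNI here ...
        nativeHandle = 0;
        releaseCount++;
    }

    public synchronized void encode() throws IOException {
        if (nativeHandle == 0) {
            throw new IOException("coder has been released");
        }
        // ... perform the encode via JNI ...
    }

    public static void main(String[] args) throws IOException {
        NativeCoderSketch coder = new NativeCoderSketch();
        coder.encode();   // fine while the handle is live
        coder.release();
        coder.release();  // idempotent: no double free, no crash
        System.out.println(coder.releaseCount);
    }
}
```

Making release() and the encode/decode impls synchronized on the same monitor is what prevents a release racing with an in-flight encode, which is the JVM-crash scenario from HDFS-12612/HDFS-12606.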
[jira] [Commented] (HDFS-12555) HDFS federation should support configure secondary directory
[ https://issues.apache.org/jira/browse/HDFS-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201210#comment-16201210 ] Bharat Viswanadham commented on HDFS-12555: --- And one more question: in a federation setup with this kind of configuration, whatever directories were previously created (e.g. /user/hive) on, say, nn1 will not exist on nn2, right? After your fix, /user/hive will move to the new namenode (nn2), right? And on the new namenode the old files will not be there, right? So this should be a one-time setup step. Please let me know whether my understanding is correct. > HDFS federation should support configure secondary directory > - > > Key: HDFS-12555 > URL: https://issues.apache.org/jira/browse/HDFS-12555 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation > Environment: 2.6.0-cdh5.10.0 >Reporter: luoge123 > Fix For: 2.6.0 > > Attachments: HDFS-12555.001.patch > > > HDFS federation supports multiple namenodes to horizontally scale the file > system namespace. As the amount of data grows, a single group of > namenodes managing a single directory still hits performance > bottlenecks. To reduce the pressure on the namenode, we can split out > a secondary directory and manage it with a new namenode. This is > transparent to users. > For example, if nn1 only manages the /user directory, then when nn1 hits a > performance bottleneck we can split out the /user/hive directory and use nn2 > to manage it. > That means core-site.xml should support the following configuration. > >fs.viewfs.mounttable.nsX.link./user >hdfs://nn1:8020/user > > >fs.viewfs.mounttable.nsX.link./user/hive >hdfs://nn2:8020/user/hive > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
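The mount-table layout flattened in the issue description above would look roughly like the following core-site.xml fragment. This is a sketch of the proposal only: nn1/nn2 host names and the nsX mount-table name are placeholders from the description, not a tested configuration.

```xml
<!-- Hypothetical viewfs mount table: nn1 serves /user, nn2 serves /user/hive. -->
<property>
  <name>fs.viewfs.mounttable.nsX.link./user</name>
  <value>hdfs://nn1:8020/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.nsX.link./user/hive</name>
  <value>hdfs://nn2:8020/user/hive</value>
</property>
```

With viewfs, the longest matching mount point wins, so a path under /user/hive resolves to nn2 while everything else under /user stays on nn1; that resolution behavior is what makes the split transparent to clients.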
[jira] [Updated] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin
[ https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-12257: -- Target Version/s: 2.8.3, 3.0.0, 2.9.1 (was: 2.9.0, 2.8.3, 3.0.0) > Expose getSnapshottableDirListing as a public API in HdfsAdmin > -- > > Key: HDFS-12257 > URL: https://issues.apache.org/jira/browse/HDFS-12257 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, > HDFS-12257.003.patch > > > Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no > programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we > should expose listing there as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12555) HDFS federation should support configuring a secondary directory
[ https://issues.apache.org/jira/browse/HDFS-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201110#comment-16201110 ] Bharat Viswanadham edited comment on HDFS-12555 at 10/11/17 11:37 PM: -- Hi [~luoge123] In the property fs.viewfs.mounttable.nsX.link.<> the link is virtual, so we can configure as below and achieve the same thing currently, right? fs.viewfs.mounttable.nsX.link./virtual hdfs://nn2:8020/user/hive Please let me know if I am missing something. was (Author: bharatviswa): Hi [~luoge123] In the property s.viewfs.mounttable.nsX.link.<> the link is virtual, so we can configure as below and achieve the samething currently right? fs.viewfs.mounttable.nsX.link./virtual hdfs://nn2:8020/user/hive Please let me know if i am missing something? > HDFS federation should support configuring a secondary directory > - > > Key: HDFS-12555 > URL: https://issues.apache.org/jira/browse/HDFS-12555 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation > Environment: 2.6.0-cdh5.10.0 >Reporter: luoge123 > Fix For: 2.6.0 > > Attachments: HDFS-12555.001.patch > > > HDFS federation supports multiple namenodes, horizontally scaling the file > system namespace. As the amount of data grows, a single group of namenodes > managing a single directory still hits performance bottlenecks. To reduce > the pressure on a namenode, we can split out a secondary directory and > manage it with a new namenode. This is transparent to users. > For example, nn1 manages only the /user directory; when nn1 hits performance > bottlenecks, we can split out the /user/hive directory and use nn2 to > manage it. > That means core-site.xml should support the following configuration. 
> >fs.viewfs.mounttable.nsX.link./user >hdfs://nn1:8020/user > > >fs.viewfs.mounttable.nsX.link./user/hive >hdfs://nn2:8020/user/hive > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12200) Optimize CachedDNSToSwitchMapping to avoid 100% cpu utilization
[ https://issues.apache.org/jira/browse/HDFS-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201192#comment-16201192 ] Hadoop QA commented on HDFS-12200: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 27s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 13s{color} | {color:orange} root: The patch generated 1 new + 456 unchanged - 0 fixed = 457 total (was 456) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 8m 51s{color} | {color:red} patch has errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 41s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 53s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 45s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}188m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:3d04c00 | | JIRA Issue | HDFS-12200 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879096/HDFS-12200-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 3aed150cd705 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HDFS-12212) Options.Rename.To_TRASH is considered even when Options.Rename.NONE is specified
[ https://issues.apache.org/jira/browse/HDFS-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201191#comment-16201191 ] Hadoop QA commented on HDFS-12212: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 7s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}189m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | | | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | org.apache.hadoop.hdfs.TestDFSPermission | | | org.apache.hadoop.hdfs.TestRestartDFS | | | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication | | | org.apache.hadoop.cli.TestHDFSCLI | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:3d04c00 | | JIRA Issue | HDFS-12212 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879333/HDFS-12212-01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname
[jira] [Commented] (HDFS-10743) MiniDFSCluster test runtimes can be drastically reduced
[ https://issues.apache.org/jira/browse/HDFS-10743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201188#comment-16201188 ] Kuhu Shukla commented on HDFS-10743: Apologies for missing your comment [~asuresh]. Thanks [~subru], it is a little tricky since changing the interval values can surface other issues/races. Will get back to this before the next release. > MiniDFSCluster test runtimes can be drastically reduced > -- > > Key: HDFS-10743 > URL: https://issues.apache.org/jira/browse/HDFS-10743 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 2.0.0-alpha >Reporter: Daryn Sharp >Assignee: Kuhu Shukla > Attachments: HDFS-10743.001.patch, HDFS-10743.002.patch, > HDFS-10743.003.patch > > > {{MiniDFSCluster}} tests have excessive runtimes. The main problem appears > to be the heartbeat interval. The NN may have to wait up to 3s (the default > value) for all DNs to heartbeat, triggering registration, so the NN can go > active. Tests that repeatedly restart the NN are severely affected. > Example runtimes for varying heartbeat intervals in {{TestFSImageWithAcl}}: > * 3s = ~70s -- (disgusting, why I investigated) > * 1s = ~27s > * 500ms = ~17s -- (had to hack DNConf for millisecond precision) > That's a 4x improvement in runtime. > 17s is still excessively long for what the test does. Further areas to > explore when running tests: > * Reduce the numerous sleep intervals in the DN's {{BPServiceActor}}. > * Ensure heartbeats and the initial BR are sent immediately upon (re)registration. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
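As a sketch of the tuning the description measures: the datanode heartbeat interval can be lowered for a test cluster through the standard `dfs.heartbeat.interval` key, whose value is in seconds — so this covers the 1s case, while the 500ms case would need the DNConf millisecond-precision hack the reporter mentions. A minimal hdfs-site.xml fragment:

```xml
<!-- Lower the DN heartbeat from the 3s default to 1s so the NN sees all
     DNs register sooner after a (re)start. "dfs.heartbeat.interval" is the
     standard HDFS key; its unit is seconds, hence no sub-second values. -->
<property>
  <name>dfs.heartbeat.interval</name>
  <value>1</value>
</property>
```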
[jira] [Commented] (HDFS-7304) TestFileCreation#testOverwriteOpenForWrite hangs
[ https://issues.apache.org/jira/browse/HDFS-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201181#comment-16201181 ] Hadoop QA commented on HDFS-7304: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} HDFS-7304 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-7304 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12680976/HDFS-7304.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21658/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestFileCreation#testOverwriteOpenForWrite hangs > > > Key: HDFS-7304 > URL: https://issues.apache.org/jira/browse/HDFS-7304 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Akira Ajisaka > Attachments: HDFS-7304.patch, HDFS-7304.patch > > > The test case times out. It has been observed in multiple pre-commit builds. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11754) Make FsServerDefaults cache configurable.
[ https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201182#comment-16201182 ] Subru Krishnan commented on HDFS-11754: --- [~erofeev]/[~shahrs87], do you intend to get this in for 2.9.0, since a patch is available? > Make FsServerDefaults cache configurable. > - > > Key: HDFS-11754 > URL: https://issues.apache.org/jira/browse/HDFS-11754 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Mikhail Erofeev >Priority: Minor > Labels: newbie > Fix For: 2.9.0 > > Attachments: HDFS-11754.001.patch, HDFS-11754.002.patch, > HDFS-11754.003.patch, HDFS-11754.004.patch > > > DFSClient caches the result of FsServerDefaults for 60 minutes, > but that 60-minute lifetime is not configurable. > Continuing the discussion from HDFS-11702, it would be nice to make > this configurable while keeping the default at 60 minutes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201177#comment-16201177 ] Hadoop QA commented on HDFS-12553: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 281 unchanged - 16 fixed = 281 total (was 297) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 53s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}132m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestMaintenanceState | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:3d04c00 | | JIRA Issue | HDFS-12553 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891570/HDFS-12553.11.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 2036ed997326 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8acdf5c | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit |
[jira] [Commented] (HDFS-11214) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HDFS-11214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201176#comment-16201176 ] Subru Krishnan commented on HDFS-11214: --- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > Upgrade netty-all to 4.1.1.Final > > > Key: HDFS-11214 > URL: https://issues.apache.org/jira/browse/HDFS-11214 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Ted Yu > Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, > HDFS-11214.v7.patch > > > Upgrade Netty > this is a clone of HADOOP-13866, created to kick off yetus on HDFS, that > being where netty is used -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11214) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HDFS-11214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-11214: -- Target Version/s: 3.1.0 (was: 2.9.0) > Upgrade netty-all to 4.1.1.Final > > > Key: HDFS-11214 > URL: https://issues.apache.org/jira/browse/HDFS-11214 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Ted Yu > Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, > HDFS-11214.v7.patch > > > Upgrade Netty > this is a clone of HADOOP-13866, created to kick off yetus on HDFS, that > being where netty is used -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10743) MiniDFSCluster test runtimes can be drastically reduced
[ https://issues.apache.org/jira/browse/HDFS-10743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201168#comment-16201168 ] Subru Krishnan commented on HDFS-10743: --- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > MiniDFSCluster test runtimes can be drastically reduced > -- > > Key: HDFS-10743 > URL: https://issues.apache.org/jira/browse/HDFS-10743 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 2.0.0-alpha >Reporter: Daryn Sharp >Assignee: Kuhu Shukla > Attachments: HDFS-10743.001.patch, HDFS-10743.002.patch, > HDFS-10743.003.patch > > > {{MiniDFSCluster}} tests have excessive runtimes. The main problem appears > to be the heartbeat interval. The NN may have to wait up to 3s (the default > value) for all DNs to heartbeat, triggering registration, so the NN can go > active. Tests that repeatedly restart the NN are severely affected. > Example runtimes for varying heartbeat intervals in {{TestFSImageWithAcl}}: > * 3s = ~70s -- (disgusting, why I investigated) > * 1s = ~27s > * 500ms = ~17s -- (had to hack DNConf for millisecond precision) > That's a 4x improvement in runtime. > 17s is still excessively long for what the test does. Further areas to > explore when running tests: > * Reduce the numerous sleep intervals in the DN's {{BPServiceActor}}. > * Ensure heartbeats and the initial BR are sent immediately upon (re)registration. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10743) MiniDFSCluster test runtimes can be drastically reduced
[ https://issues.apache.org/jira/browse/HDFS-10743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-10743: -- Target Version/s: 3.1.0 (was: 2.9.0) > MiniDFSCluster test runtimes can be drastically reduced > -- > > Key: HDFS-10743 > URL: https://issues.apache.org/jira/browse/HDFS-10743 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 2.0.0-alpha >Reporter: Daryn Sharp >Assignee: Kuhu Shukla > Attachments: HDFS-10743.001.patch, HDFS-10743.002.patch, > HDFS-10743.003.patch > > > {{MiniDFSCluster}} tests have excessive runtimes. The main problem appears > to be the heartbeat interval. The NN may have to wait up to 3s (the default > value) for all DNs to heartbeat, triggering registration, so the NN can go > active. Tests that repeatedly restart the NN are severely affected. > Example runtimes for varying heartbeat intervals in {{TestFSImageWithAcl}}: > * 3s = ~70s -- (disgusting, why I investigated) > * 1s = ~27s > * 500ms = ~17s -- (had to hack DNConf for millisecond precision) > That's a 4x improvement in runtime. > 17s is still excessively long for what the test does. Further areas to > explore when running tests: > * Reduce the numerous sleep intervals in the DN's {{BPServiceActor}}. > * Ensure heartbeats and the initial BR are sent immediately upon (re)registration. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10274) Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode
[ https://issues.apache.org/jira/browse/HDFS-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201165#comment-16201165 ] Subru Krishnan commented on HDFS-10274: --- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode > - > > Key: HDFS-10274 > URL: https://issues.apache.org/jira/browse/HDFS-10274 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Attachments: HDFS-10274-01.patch > > > To reduce the number of methods in the Namesystem interface, and for a > cleaner refactor, it's better to move {{isInStartupSafeMode()}} to > BlockManager and BlockManagerSafeMode, as most of the callers are in > BlockManager. That removes one more piece of interface overhead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11755) Underconstruction blocks can be considered missing
[ https://issues.apache.org/jira/browse/HDFS-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201164#comment-16201164 ] Wei-Chiu Chuang commented on HDFS-11755: Filed HDFS-12641 to initiate the discussion. > Underconstruction blocks can be considered missing > -- > > Key: HDFS-11755 > URL: https://issues.apache.org/jira/browse/HDFS-11755 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha2, 2.8.1 >Reporter: Nathan Roberts >Assignee: Nathan Roberts > Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2 > > Attachments: HDFS-11755-branch-2.002.patch, > HDFS-11755-branch-2.8.002.patch, HDFS-11755.001.patch, HDFS-11755.002.patch > > > Following sequence of events can lead to a block underconstruction being > considered missing. > - pipeline of 3 DNs, DN1->DN2->DN3 > - DN3 has a failing disk so some updates take a long time > - Client writes entire block and is waiting for final ack > - DN1, DN2 and DN3 have all received the block > - DN1 is waiting for ACK from DN2 who is waiting for ACK from DN3 > - DN3 is having trouble finalizing the block due to the failing drive. It > does eventually succeed but it is VERY slow at doing so. > - DN2 times out waiting for DN3 and tears down its pieces of the pipeline, so > DN1 notices and does the same. Neither DN1 nor DN2 finalized the block. > - DN3 finally sends an IBR to the NN indicating the block has been received. > - Drive containing the block on DN3 fails enough that the DN takes it offline > and notifies NN of failed volume > - NN removes DN3's replica from the triplets and then declares the block > missing because there are no other replicas > Seems like we shouldn't consider uncompleted blocks for replication. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10274) Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode
[ https://issues.apache.org/jira/browse/HDFS-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-10274: -- Target Version/s: 3.1.0 (was: 2.9.0) > Move NameSystem#isInStartupSafeMode() to BlockManagerSafeMode > - > > Key: HDFS-10274 > URL: https://issues.apache.org/jira/browse/HDFS-10274 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Attachments: HDFS-10274-01.patch > > > To reduce the number of methods in the Namesystem interface, and for a > cleaner refactor, it's better to move {{isInStartupSafeMode()}} to > BlockManager and BlockManagerSafeMode, as most of the callers are in > BlockManager. That removes one more piece of interface overhead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10237) Support specifying checksum type in WebHDFS/HTTPFS writers
[ https://issues.apache.org/jira/browse/HDFS-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201160#comment-16201160 ] Subru Krishnan commented on HDFS-10237: --- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > Support specifying checksum type in WebHDFS/HTTPFS writers > -- > > Key: HDFS-10237 > URL: https://issues.apache.org/jira/browse/HDFS-10237 > Project: Hadoop HDFS > Issue Type: New Feature > Components: webhdfs >Affects Versions: 2.8.0 >Reporter: Harsh J >Assignee: Harsh J >Priority: Minor > Attachments: HDFS-10237.000.patch, HDFS-10237.001.patch, > HDFS-10237.002.patch, HDFS-10237.002.patch > > > Currently you cannot set a desired checksum type over a WebHDFS or HTTPFS > writer, as you can with the regular DFS writer (done via HADOOP-8240) > This JIRA covers the changes necessary to bring the same ability to WebHDFS > and HTTPFS. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10237) Support specifying checksum type in WebHDFS/HTTPFS writers
[ https://issues.apache.org/jira/browse/HDFS-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-10237: -- Target Version/s: 3.1.0 (was: 2.9.0) > Support specifying checksum type in WebHDFS/HTTPFS writers > -- > > Key: HDFS-10237 > URL: https://issues.apache.org/jira/browse/HDFS-10237 > Project: Hadoop HDFS > Issue Type: New Feature > Components: webhdfs >Affects Versions: 2.8.0 >Reporter: Harsh J >Assignee: Harsh J >Priority: Minor > Attachments: HDFS-10237.000.patch, HDFS-10237.001.patch, > HDFS-10237.002.patch, HDFS-10237.002.patch > > > Currently you cannot set a desired checksum type over a WebHDFS or HTTPFS > writer, as you can with the regular DFS writer (done via HADOOP-8240) > This JIRA covers the changes necessary to bring the same ability to WebHDFS > and HTTPFS. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
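Since the JIRA is still open, WebHDFS defines no checksum-type option; the sketch below only illustrates how such a choice could be carried as a CREATE query parameter, mirroring the checksum type the regular DFS writer accepts (HADOOP-8240). The `checksumtype` parameter name is hypothetical, not part of the WebHDFS REST API:

```java
// HYPOTHETICAL sketch: "checksumtype" is an invented parameter name used
// only to illustrate the feature request; it is not in the WebHDFS spec.
public class WebHdfsCreateUrl {
    static String createUrl(String host, int port, String path, String checksumType) {
        StringBuilder sb = new StringBuilder();
        sb.append("http://").append(host).append(':').append(port)
          .append("/webhdfs/v1").append(path).append("?op=CREATE");
        if (checksumType != null) {
            sb.append("&checksumtype=").append(checksumType); // hypothetical
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(createUrl("nn1", 9870, "/user/harsh/f.txt", "CRC32C"));
    }
}
```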
[jira] [Commented] (HDFS-8538) Change the default volume choosing policy to AvailableSpaceVolumeChoosingPolicy
[ https://issues.apache.org/jira/browse/HDFS-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201156#comment-16201156 ] Subru Krishnan commented on HDFS-8538: -- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > Change the default volume choosing policy to > AvailableSpaceVolumeChoosingPolicy > --- > > Key: HDFS-8538 > URL: https://issues.apache.org/jira/browse/HDFS-8538 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.7.0 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: hdfs-8538.001.patch > > > For datanodes with different sized disks, they almost always want the > available space policy. Users with homogenous disks are unaffected. > Since this code has baked for a while, let's change it to be the default. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-8538) Change the default volume choosing policy to AvailableSpaceVolumeChoosingPolicy
[ https://issues.apache.org/jira/browse/HDFS-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-8538: - Target Version/s: 3.0.0 (was: 2.9.0) > Change the default volume choosing policy to > AvailableSpaceVolumeChoosingPolicy > --- > > Key: HDFS-8538 > URL: https://issues.apache.org/jira/browse/HDFS-8538 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.7.0 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: hdfs-8538.001.patch > > > For datanodes with different sized disks, they almost always want the > available space policy. Users with homogenous disks are unaffected. > Since this code has baked for a while, let's change it to be the default. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
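Until the default flips as proposed above, operators can opt in explicitly in hdfs-site.xml (assuming the release's configuration keys match hdfs-default.xml):

```xml
<!-- hdfs-site.xml: choose the available-space policy explicitly -->
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
```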
[jira] [Commented] (HDFS-7550) Minor followon cleanups from HDFS-7543
[ https://issues.apache.org/jira/browse/HDFS-7550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201153#comment-16201153 ] Subru Krishnan commented on HDFS-7550: -- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > Minor followon cleanups from HDFS-7543 > -- > > Key: HDFS-7550 > URL: https://issues.apache.org/jira/browse/HDFS-7550 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.7.0 >Reporter: Charles Lamb >Priority: Minor > Attachments: HDFS-7550.001.patch > > > The commit of HDFS-7543 crossed paths with these comments: > FSDirMkdirOp.java > in #mkdirs, you removed the final String srcArg = src. This should be left > in. Many IDEs will whine about making assignments to formal args and that's > why it was put in in the first place. > FSDirRenameOp.java > #renameToInt, dstIIP (and resultingStat) could benefit from final's. > FSDirXAttrOp.java > I'm not sure why you've moved the call to getINodesInPath4Write and > checkXAttrChangeAccess inside the writeLock. > FSDirStatAndListing.java > The javadoc for the @param src needs to be changed to reflect that it's an > INodesInPath, not a String. Nit: it might be better to rename the > INodesInPath arg from src to iip. > #getFileInfo4DotSnapshot is now unused since you in-lined it into > #getFileInfo. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-7550) Minor followon cleanups from HDFS-7543
[ https://issues.apache.org/jira/browse/HDFS-7550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-7550: - Target Version/s: 3.1.0 (was: 2.9.0) > Minor followon cleanups from HDFS-7543 > -- > > Key: HDFS-7550 > URL: https://issues.apache.org/jira/browse/HDFS-7550 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.7.0 >Reporter: Charles Lamb >Priority: Minor > Attachments: HDFS-7550.001.patch > > > The commit of HDFS-7543 crossed paths with these comments: > FSDirMkdirOp.java > in #mkdirs, you removed the final String srcArg = src. This should be left > in. Many IDEs will whine about making assignments to formal args and that's > why it was put in in the first place. > FSDirRenameOp.java > #renameToInt, dstIIP (and resultingStat) could benefit from final's. > FSDirXAttrOp.java > I'm not sure why you've moved the call to getINodesInPath4Write and > checkXAttrChangeAccess inside the writeLock. > FSDirStatAndListing.java > The javadoc for the @param src needs to be changed to reflect that it's an > INodesInPath, not a String. Nit: it might be better to rename the > INodesInPath arg from src to iip. > #getFileInfo4DotSnapshot is now unused since you in-lined it into > #getFileInfo. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-7408) Add a counter in the log that shows the number of block reports processed
[ https://issues.apache.org/jira/browse/HDFS-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-7408: - Target Version/s: 3.1.0 (was: 2.9.0) > Add a counter in the log that shows the number of block reports processed > - > > Key: HDFS-7408 > URL: https://issues.apache.org/jira/browse/HDFS-7408 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Suresh Srinivas >Assignee: Surendra Singh Lilhore > Attachments: HDFS-7408.001.patch > > > It would be great to have in the info log corresponding to block report > processing, printing information on how many block reports have been > processed. This can be useful to debug when namenode is unresponsive > especially during startup time to understand if datanodes are sending block > reports multiple times. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7408) Add a counter in the log that shows the number of block reports processed
[ https://issues.apache.org/jira/browse/HDFS-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201152#comment-16201152 ] Subru Krishnan commented on HDFS-7408: -- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > Add a counter in the log that shows the number of block reports processed > - > > Key: HDFS-7408 > URL: https://issues.apache.org/jira/browse/HDFS-7408 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Suresh Srinivas >Assignee: Surendra Singh Lilhore > Attachments: HDFS-7408.001.patch > > > It would be great to have in the info log corresponding to block report > processing, printing information on how many block reports have been > processed. This can be useful to debug when namenode is unresponsive > especially during startup time to understand if datanodes are sending block > reports multiple times. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
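The request above amounts to keeping a monotonically increasing counter and printing it in each block-report log line, so repeated reports from the same datanode stand out during startup. A minimal sketch with illustrative names (not the actual BlockManager logging code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the requested counter; names do not match the
// real namenode code. AtomicLong keeps the count safe across handler threads.
public class BlockReportCounter {
    private final AtomicLong processed = new AtomicLong();

    /** Builds the log line for one processed block report. */
    String processReport(String datanode, int numBlocks) {
        long count = processed.incrementAndGet();
        return String.format("Processed block report #%d from %s (%d blocks)",
                count, datanode, numBlocks);
    }

    public static void main(String[] args) {
        BlockReportCounter c = new BlockReportCounter();
        System.out.println(c.processReport("dn1:9866", 1000));
        // A duplicate report from the same DN is visible as #2 in the log:
        System.out.println(c.processReport("dn1:9866", 1000));
    }
}
```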
[jira] [Commented] (HDFS-7368) Support HDFS specific 'shell' on command 'hdfs dfs' invocation
[ https://issues.apache.org/jira/browse/HDFS-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201151#comment-16201151 ] Subru Krishnan commented on HDFS-7368: -- Pushing it out from 2.9.0 due to [~aw]'s feedback and lack of recent activity. Feel free to revert if required. > Support HDFS specific 'shell' on command 'hdfs dfs' invocation > -- > > Key: HDFS-7368 > URL: https://issues.apache.org/jira/browse/HDFS-7368 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Attachments: HDFS-7368-001.patch > > > * *hadoop fs* is the generic implementation for all filesystem > implementations, but some of the operations are supported only in some > filesystems. Ex: snapshot commands, acl commands, xattr commands. > * *hdfs dfs* is recommended in all HDFS-related docs in current releases. > In the current code both *hdfs shell* and *hadoop fs* point to the hadoop common > implementation of FSShell. > It would be better to have an HDFS-specific extension of FSShell which includes > HDFS-only commands in the future. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-7368) Support HDFS specific 'shell' on command 'hdfs dfs' invocation
[ https://issues.apache.org/jira/browse/HDFS-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-7368: - Target Version/s: 3.1.0 (was: 2.9.0) > Support HDFS specific 'shell' on command 'hdfs dfs' invocation > -- > > Key: HDFS-7368 > URL: https://issues.apache.org/jira/browse/HDFS-7368 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Attachments: HDFS-7368-001.patch > > > * *hadoop fs* is the generic implementation for all filesystem > implementations, but some of the operations are supported only in some > filesystems. Ex: snapshot commands, acl commands, xattr commands. > * *hdfs dfs* is recommended in all HDFS-related docs in current releases. > In the current code both *hdfs shell* and *hadoop fs* point to the hadoop common > implementation of FSShell. > It would be better to have an HDFS-specific extension of FSShell which includes > HDFS-only commands in the future. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7304) TestFileCreation#testOverwriteOpenForWrite hangs
[ https://issues.apache.org/jira/browse/HDFS-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201148#comment-16201148 ] Subru Krishnan commented on HDFS-7304: -- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > TestFileCreation#testOverwriteOpenForWrite hangs > > > Key: HDFS-7304 > URL: https://issues.apache.org/jira/browse/HDFS-7304 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Akira Ajisaka > Attachments: HDFS-7304.patch, HDFS-7304.patch > > > The test case times out. It has been observed in multiple pre-commit builds. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-7304) TestFileCreation#testOverwriteOpenForWrite hangs
[ https://issues.apache.org/jira/browse/HDFS-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-7304: - Target Version/s: 3.1.0 (was: 2.9.0) > TestFileCreation#testOverwriteOpenForWrite hangs > > > Key: HDFS-7304 > URL: https://issues.apache.org/jira/browse/HDFS-7304 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Akira Ajisaka > Attachments: HDFS-7304.patch, HDFS-7304.patch > > > The test case times out. It has been observed in multiple pre-commit builds. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-3570) Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used space
[ https://issues.apache.org/jira/browse/HDFS-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201147#comment-16201147 ] Subru Krishnan commented on HDFS-3570: -- Pushing it out from 2.9.0 due to lack of recent activity. Feel free to revert if required. > Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used > space > > > Key: HDFS-3570 > URL: https://issues.apache.org/jira/browse/HDFS-3570 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.0.0-alpha >Reporter: Harsh J >Assignee: Akira Ajisaka >Priority: Minor > Attachments: HDFS-3570.003.patch, HDFS-3570.2.patch, > HDFS-3570.aash.1.patch > > > Report from a user here: > https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pIhNyDVxdVY/b7ENZmEvBjIJ, > post archived at http://pastebin.com/eVFkk0A0 > This user had a specific DN that had a large non-DFS usage among > dfs.data.dirs, and very little DFS usage (which is computed against total > possible capacity). > Balancer apparently only looks at the usage, and ignores to consider that > non-DFS usage may also be high on a DN/cluster. Hence, it thinks that if a > DFS Usage report from DN is 8% only, its got a lot of free space to write > more blocks, when that isn't true as shown by the case of this user. It went > on scheduling writes to the DN to balance it out, but the DN simply can't > accept any more blocks as a result of its disks' state. > I think it would be better if we _computed_ the actual utilization based on > {{(100-(actual remaining space))/(capacity)}}, as opposed to the current > {{(dfs used)/(capacity)}}. Thoughts? > This isn't very critical, however, cause it is very rare to see DN space > being used for non DN data, but it does expose a valid bug. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-3570) Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used space
[ https://issues.apache.org/jira/browse/HDFS-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HDFS-3570: - Target Version/s: 3.1.0 (was: 2.9.0) > Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used > space > > > Key: HDFS-3570 > URL: https://issues.apache.org/jira/browse/HDFS-3570 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.0.0-alpha >Reporter: Harsh J >Assignee: Akira Ajisaka >Priority: Minor > Attachments: HDFS-3570.003.patch, HDFS-3570.2.patch, > HDFS-3570.aash.1.patch > > > Report from a user here: > https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pIhNyDVxdVY/b7ENZmEvBjIJ, > post archived at http://pastebin.com/eVFkk0A0 > This user had a specific DN that had a large non-DFS usage among > dfs.data.dirs, and very little DFS usage (which is computed against total > possible capacity). > Balancer apparently only looks at the usage, and ignores to consider that > non-DFS usage may also be high on a DN/cluster. Hence, it thinks that if a > DFS Usage report from DN is 8% only, its got a lot of free space to write > more blocks, when that isn't true as shown by the case of this user. It went > on scheduling writes to the DN to balance it out, but the DN simply can't > accept any more blocks as a result of its disks' state. > I think it would be better if we _computed_ the actual utilization based on > {{(100-(actual remaining space))/(capacity)}}, as opposed to the current > {{(dfs used)/(capacity)}}. Thoughts? > This isn't very critical, however, cause it is very rare to see DN space > being used for non DN data, but it does expose a valid bug. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
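The two utilization formulas in the report above are easy to compare with concrete numbers (reading "100 - actual remaining space" as "capacity - remaining"). With a 100 GB volume holding 8 GB of DFS data and 80 GB of non-DFS data, the current metric reports 8% used even though only 12 GB is actually free:

```java
// Worked example of the two utilization formulas from the report above.
// dfsUsed/capacity ignores non-DFS usage; (capacity-remaining)/capacity
// reflects what is actually left on the disk.
public class BalancerUtilization {
    static double dfsUtilization(long dfsUsed, long capacity) {
        return (double) dfsUsed / capacity;
    }

    static double actualUtilization(long remaining, long capacity) {
        return (double) (capacity - remaining) / capacity;
    }

    public static void main(String[] args) {
        long capacityGb = 100, dfsUsedGb = 8, nonDfsUsedGb = 80;
        long remainingGb = capacityGb - dfsUsedGb - nonDfsUsedGb; // 12 GB free
        System.out.printf("dfs used:    %.0f%%%n",
                100 * dfsUtilization(dfsUsedGb, capacityGb));     // 8%
        System.out.printf("actual used: %.0f%%%n",
                100 * actualUtilization(remainingGb, capacityGb)); // 88%
    }
}
```

With the first formula the Balancer would keep scheduling writes to this node; with the second it would correctly see it as nearly full.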
[jira] [Commented] (HDFS-12219) Javadoc for FSNamesystem#getMaxObjects is incorrect
[ https://issues.apache.org/jira/browse/HDFS-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201130#comment-16201130 ] Hadoop QA commented on HDFS-12219: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 14s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestMissingBlocksAlert | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.web.TestWebHDFSXAttr | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:3d04c00 | | JIRA Issue | HDFS-12219 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12879410/HDFS-12219.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7435b309c5fd 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8acdf5c | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21651/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test
[jira] [Commented] (HDFS-12566) BlockReceiver can leak FileInputStream
[ https://issues.apache.org/jira/browse/HDFS-12566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201117#comment-16201117 ] Hanisha Koneru commented on HDFS-12566: --- Hi [~lukmajercak], did you see this exception anywhere other than in TestDataTransferProtocol#testDataTransferProtocol? As far as I checked, the {{requestedChecksum}} cannot be null in normal conditions. Is there any other object that can potentially be null in BlockReceiver? > BlockReceiver can leak FileInputStream > -- > > Key: HDFS-12566 > URL: https://issues.apache.org/jira/browse/HDFS-12566 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Minor > Attachments: HDFS-12566.001.patch > > > BlockReceiver's constructor can leak file handles if it encounters null > pointer exceptions. One example of this is > TestDataTransferProtocol.testDataTransferProtocol: > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:263) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1291) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) > at java.lang.Thread.run(Thread.java:748) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
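The general pattern for avoiding this class of leak — closing anything already opened if a later part of the constructor throws — can be sketched independently of BlockReceiver. The classes below are stand-ins, not the datanode code:

```java
import java.io.Closeable;

// Stand-in sketch of the leak pattern: a constructor that acquires a
// resource and then throws must release that resource before rethrowing,
// otherwise no caller ever gets a reference it could close.
public class ConstructorCleanup {
    static class TrackedStream implements Closeable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    final TrackedStream stream;

    ConstructorCleanup(TrackedStream s, boolean failLater) {
        this.stream = s; // resource acquired first...
        try {
            if (failLater) {
                throw new NullPointerException("simulated failure mid-constructor");
            }
        } catch (RuntimeException e) {
            s.close(); // ...so it must be released on any later failure
            throw e;
        }
    }

    public static void main(String[] args) {
        TrackedStream s = new TrackedStream();
        try {
            new ConstructorCleanup(s, true);
        } catch (RuntimeException expected) {
            System.out.println("stream closed: " + s.closed); // true, no leak
        }
    }
}
```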
[jira] [Commented] (HDFS-12504) Ozone: Improve SQLCLI performance
[ https://issues.apache.org/jira/browse/HDFS-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201118#comment-16201118 ] Chen Liang commented on HDFS-12504: --- Thanks [~yuanbo] for working on this! v001 patch looks pretty good to me. Just some minor comments: 1. {{void accept(T item) throws IOException;}}, rename accept to something like batchConsume? 2. "This class is used to batch operate kv" ==> "This class is used to batch kv operations" 3. Change the log "Insert to sql container db, for container" to something like "Insert to sql batch for container", and add some log to {{batchIterateStore}} such that we can see the progress from log. Also it would be ideal if we can have some simple benchmark results to see the performance improvement, I will be looking into this too. > Ozone: Improve SQLCLI performance > - > > Key: HDFS-12504 > URL: https://issues.apache.org/jira/browse/HDFS-12504 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Labels: performance > Attachments: HDFS-12504-HDFS-7240.001.patch > > > In my test, my {{ksm.db}} has *3017660* entries with total size of *128mb*, > SQLCLI tool runs over *2 hours* but still not finish exporting the DB. This > is because it iterates each entry and inserts that to another sqllite DB > file, which is not efficient. We need to improve this to be running more > efficiently on large DB files. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
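The batching idea behind the patch — accumulate entries and flush them as one unit instead of issuing one insert per key-value entry — can be sketched generically. `batchConsume` below is illustrative and not the patch's actual interface:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Generic sketch of the batching pattern discussed above: buffer entries
// and hand them to a flush callback in groups. For a SQLite export, the
// callback would run one transaction per batch rather than one per row,
// which is where the speedup comes from.
public class BatchingSketch {
    static <T> int batchConsume(Iterable<T> items, int batchSize,
                                Consumer<List<T>> flush) {
        List<T> buffer = new ArrayList<>(batchSize);
        int flushes = 0;
        for (T item : items) {
            buffer.add(item);
            if (buffer.size() == batchSize) {
                flush.accept(buffer);   // e.g. executeBatch() + commit in JDBC
                buffer = new ArrayList<>(batchSize);
                flushes++;
            }
        }
        if (!buffer.isEmpty()) {        // final partial batch
            flush.accept(buffer);
            flushes++;
        }
        return flushes;
    }

    public static void main(String[] args) {
        List<Integer> entries = new ArrayList<>();
        for (int i = 0; i < 10; i++) entries.add(i);
        int flushes = batchConsume(entries, 4, batch ->
                System.out.println("flushing " + batch.size() + " entries"));
        System.out.println("total flushes: " + flushes); // 3 (4 + 4 + 2)
    }
}
```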
[jira] [Commented] (HDFS-12555) HDFS federation should support configure secondary directory
[ https://issues.apache.org/jira/browse/HDFS-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201110#comment-16201110 ] Bharat Viswanadham commented on HDFS-12555: --- Hi [~luoge123] In the property fs.viewfs.mounttable.nsX.link.<> the link is virtual, so we can configure as below and achieve the same thing currently, right? fs.viewfs.mounttable.nsX.link./virtual hdfs://nn2:8020/user/hive Please let me know if I am missing something? > HDFS federation should support configure secondary directory > - > > Key: HDFS-12555 > URL: https://issues.apache.org/jira/browse/HDFS-12555 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation > Environment: 2.6.0-cdh5.10.0 >Reporter: luoge123 > Fix For: 2.6.0 > > Attachments: HDFS-12555.001.patch > > > HDFS federation supports multiple namenodes that horizontally scale the file > system namespace. As the amount of data grows, when a single group of > namenodes manages a single directory, a namenode can still hit performance > bottlenecks. In order to reduce the pressure on the namenode, we can split out > a secondary directory and have a new namenode manage it. This is > transparent for users. > For example, nn1 only manages the /user directory; when nn1 hits a > performance bottleneck, we can split out the /user/hive directory and use nn2 > to manage it. > That means core-site.xml should support the following configuration. > >fs.viewfs.mounttable.nsX.link./user >hdfs://nn1:8020/user > > >fs.viewfs.mounttable.nsX.link./user/hive >hdfs://nn2:8020/user/hive > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
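Restoring the XML tags lost from the quoted text, the core-site.xml layout the issue proposes would look as follows; `nsX`, `nn1`, and `nn2` are the issue's placeholder names, and whether a nested link like /user/hive can shadow /user this way is exactly what the issue is asking for:

```xml
<!-- Proposed core-site.xml mount table from the issue: the more specific
     /user/hive link would be served by nn2, the rest of /user by nn1. -->
<property>
  <name>fs.viewfs.mounttable.nsX.link./user</name>
  <value>hdfs://nn1:8020/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.nsX.link./user/hive</name>
  <value>hdfs://nn2:8020/user/hive</value>
</property>
```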
[jira] [Commented] (HDFS-12599) Remove Mockito dependency from DataNodeTestUtils
[ https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201068#comment-16201068 ] Sean Busbey commented on HDFS-12599: cherry-picked, ran through the altered tests, then pushed to branch-3.0. > Remove Mockito dependency from DataNodeTestUtils > > > Key: HDFS-12599 > URL: https://issues.apache.org/jira/browse/HDFS-12599 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-beta1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Fix For: 3.0.0, 3.1.0 > > Attachments: HDFS-12599.v1.patch, HDFS-12599.v1.patch, > HDFS-12599.v1.patch > > > HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which > brought a dependency on mockito back into DataNodeTestUtils. > Downstream, this resulted in: > {code} > java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12599) Remove Mockito dependency from DataNodeTestUtils
[ https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HDFS-12599: --- Fix Version/s: 3.0.0 > Remove Mockito dependency from DataNodeTestUtils > > > Key: HDFS-12599 > URL: https://issues.apache.org/jira/browse/HDFS-12599 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-beta1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Fix For: 3.0.0, 3.1.0 > > Attachments: HDFS-12599.v1.patch, HDFS-12599.v1.patch, > HDFS-12599.v1.patch > > > HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which > brought dependency on mockito back into DataNodeTestUtils > Downstream, this resulted in: > {code} > java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:769) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200998#comment-16200998 ] Hadoop QA commented on HDFS-7240: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 17s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 118 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools hadoop-client-modules/hadoop-client-minicluster hadoop-client-modules/hadoop-client-check-test-invariants . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 17s{color} | {color:red} root generated 160 new + 1115 unchanged - 156 fixed = 1275 total (was 1271) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 32s{color} | {color:orange} root: The patch generated 60 new + 872 unchanged - 16 fixed = 932 total (was 888) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 26s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s{color} | {color:red} The patch has 11 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s{color} | {color:red} The patch 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 18s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project . hadoop-client-modules/hadoop-client-check-test-invariants hadoop-client-modules/hadoop-client-minicluster hadoop-tools hadoop-tools/hadoop-tools-dist {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m
[jira] [Updated] (HDFS-12475) Ozone : add documentation about Datanode http address
[ https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12475: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Target Version/s: HDFS-7240 Tags: ozone Status: Resolved (was: Patch Available) [~ljain] Thanks for the contribution. I have committed this to the feature branch. > Ozone : add documentation about Datanode http address > - > > Key: HDFS-12475 > URL: https://issues.apache.org/jira/browse/HDFS-12475 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Lokesh Jain > Labels: ozoneDoc > Fix For: HDFS-7240 > > Attachments: HDFS-12475-HDFS-7240.001.patch > > > Currently Ozone's REST API uses the port 9864, all commands mentioned in > OzoneCommandShell.md use the address localhost:9864. > This port was used by Datanode http server, which is now shared by Ozone. > Changing this config means user should be using the value of this setting > rather than localhost:9864 as in doc. The value is controlled by the config > key {{dfs.datanode.http.address}}. We should document this information in > {{OzoneCommandShell.md}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12475) Ozone : add documentation about Datanode http address
[ https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12475: Summary: Ozone : add documentation about Datanode http address (was: Ozone : add document for using Datanode http address) > Ozone : add documentation about Datanode http address > - > > Key: HDFS-12475 > URL: https://issues.apache.org/jira/browse/HDFS-12475 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Lokesh Jain > Labels: ozoneDoc > Attachments: HDFS-12475-HDFS-7240.001.patch > > > Currently Ozone's REST API uses the port 9864, all commands mentioned in > OzoneCommandShell.md use the address localhost:9864. > This port was used by Datanode http server, which is now shared by Ozone. > Changing this config means user should be using the value of this setting > rather than localhost:9864 as in doc. The value is controlled by the config > key {{dfs.datanode.http.address}}. We should document this information in > {{OzoneCommandShell.md}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
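As the description notes, the REST examples should use the value of {{dfs.datanode.http.address}} rather than a hardcoded localhost:9864. A small illustrative helper (hypothetical, not part of the patch) showing how a client might derive the REST base URL from that setting:

```java
// Hypothetical helper: build the Ozone REST base URL from the configured
// dfs.datanode.http.address value instead of assuming localhost:9864.
public class OzoneRestAddressSketch {

  public static String restBaseUrl(String datanodeHttpAddress) {
    int sep = datanodeHttpAddress.lastIndexOf(':');
    String host = datanodeHttpAddress.substring(0, sep);
    String port = datanodeHttpAddress.substring(sep + 1);
    // A wildcard bind address is reachable locally via localhost.
    if (host.equals("0.0.0.0")) {
      host = "localhost";
    }
    return "http://" + host + ":" + port;
  }
}
```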
[jira] [Commented] (HDFS-12608) Ozone: Remove Warnings when building
[ https://issues.apache.org/jira/browse/HDFS-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200985#comment-16200985 ] Bharat Viswanadham commented on HDFS-12608: --- Thank You [~anu] for committing the changes. > Ozone: Remove Warnings when building > > > Key: HDFS-12608 > URL: https://issues.apache.org/jira/browse/HDFS-12608 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: HDFS-7240 > > Attachments: HDFS-12608-HDFS-7240.00.patch > > > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-ozone:jar:3.1.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.apache.maven.plugins:maven-project-info-reports-plugin is missing. @ > org.apache.hadoop:hadoop-ozone:[unknown-version], > /Users/aengineer/codereview/hadoop-tools/hadoop-ozone/pom.xml, line 36, > column 15 > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-dist:jar:3.1.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.apache.maven.plugins:maven-gpg-plugin is missing. @ line 133, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects. > [WARNING] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12558) Ozone: Clarify the meaning of rpc.metrics.percentiles.intervals on KSM/SCM web ui
[ https://issues.apache.org/jira/browse/HDFS-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200982#comment-16200982 ] Chen Liang commented on HDFS-12558: --- Thanks [~elek]! The patch LGTM, could you please just re-submit the patch to see if Jenkins runs? > Ozone: Clarify the meaning of rpc.metrics.percentiles.intervals on KSM/SCM > web ui > - > > Key: HDFS-12558 > URL: https://issues.apache.org/jira/browse/HDFS-12558 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12558-HDFS-7240.001.patch, after.png, before.png > > > In Ozone (SCM/KSM) web ui we have additional visualization if > rpc.metrics.percentiles.intervals are enabled. > But according to the feedbacks it's a little bit confusing what is it exactly. > I would like to improve it and clarify how does it work. > 1. I will to add a footnote about these are not rolling windows but just > display of the last fixed window. > 2. I would like to rearrange the layout. As the different windows are > independent, I would show them in different lines and group by the intervals > and not by RpcQueueTime/RpcProcessingTime. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200977#comment-16200977 ] Chen Liang commented on HDFS-12585: --- Thanks [~ajayydv] for working on this! Just one comment: In {{ConfServlet.java}}, seems {{propertyMap}} is only initialized in {{loadDescriptions()}} which is only called when {{loadDescriptionFromXml}} is true. So seems to me that if {{loadDescriptionFromXml}} is false, then {{propertyMap}} remains null, and this line {{propertyMap.get(key).setValue(config.get(key));}} will then fail, right? > Add description for config in Ozone config UI > - > > Key: HDFS-12585 > URL: https://issues.apache.org/jira/browse/HDFS-12585 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: HDFS-12585-HDFS-7240.01.patch, > HDFS-12585-HDFS-7240.02.patch, HDFS-12585-HDFS-7240.03.patch > > > Add description for each config in Ozone config UI -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
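The review comment above describes a potential NullPointerException. A compact sketch of the flow (field and method names are simplified stand-ins for the ones in the patch) and the null guard that would avoid it:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the ConfServlet flow discussed above: the
// description map is populated only when loading from XML is enabled.
public class ConfServletSketch {

  private Map<String, String> propertyMap; // stays null unless loaded

  private void loadDescriptions() {
    propertyMap = new HashMap<>();
    propertyMap.put("dfs.datanode.http.address", "DataNode HTTP endpoint");
  }

  public String description(String key, boolean loadDescriptionFromXml) {
    if (loadDescriptionFromXml) {
      loadDescriptions();
    }
    // Without this null check, propertyMap.get(key) throws a
    // NullPointerException whenever loadDescriptionFromXml is false.
    if (propertyMap == null) {
      return "";
    }
    String d = propertyMap.get(key);
    return d == null ? "" : d;
  }
}
```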
[jira] [Created] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11755
Wei-Chiu Chuang created HDFS-12641: -- Summary: Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11755 Key: HDFS-12641 URL: https://issues.apache.org/jira/browse/HDFS-12641 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.4 Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang Our internal testing caught a regression in HDFS-11445 when we cherry-picked the commit into CDH. Basically, it produces bogus missing file warnings. Further analysis revealed that the regression is actually fixed by HDFS-11755. Because of the order in which commits were merged in branch-2.8 ~ trunk (HDFS-11755 was committed before HDFS-11445), the regression never actually surfaced for Hadoop 2.8/3.0.0-(alpha/beta) users. Since branch-2.7 has HDFS-11445 but no HDFS-11755, I suspect the regression is more visible for Hadoop 2.7.4. I am filing this jira to raise awareness, rather than simply to backport HDFS-11755 into branch-2.7. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12608) Ozone: Remove Warnings when building
[ https://issues.apache.org/jira/browse/HDFS-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200968#comment-16200968 ] Anu Engineer commented on HDFS-12608: - [~bharatviswa] Thanks for the contribution. I have committed this to the feature branch. > Ozone: Remove Warnings when building > > > Key: HDFS-12608 > URL: https://issues.apache.org/jira/browse/HDFS-12608 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: HDFS-7240 > > Attachments: HDFS-12608-HDFS-7240.00.patch > > > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-ozone:jar:3.1.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.apache.maven.plugins:maven-project-info-reports-plugin is missing. @ > org.apache.hadoop:hadoop-ozone:[unknown-version], > /Users/aengineer/codereview/hadoop-tools/hadoop-ozone/pom.xml, line 36, > column 15 > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-dist:jar:3.1.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.apache.maven.plugins:maven-gpg-plugin is missing. @ line 133, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects. > [WARNING] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12608) Ozone: Remove Warnings when building
[ https://issues.apache.org/jira/browse/HDFS-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12608: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Tags: ozone Status: Resolved (was: Patch Available) > Ozone: Remove Warnings when building > > > Key: HDFS-12608 > URL: https://issues.apache.org/jira/browse/HDFS-12608 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: HDFS-7240 > > Attachments: HDFS-12608-HDFS-7240.00.patch > > > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-ozone:jar:3.1.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.apache.maven.plugins:maven-project-info-reports-plugin is missing. @ > org.apache.hadoop:hadoop-ozone:[unknown-version], > /Users/aengineer/codereview/hadoop-tools/hadoop-ozone/pom.xml, line 36, > column 15 > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-dist:jar:3.1.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.apache.maven.plugins:maven-gpg-plugin is missing. @ line 133, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects. > [WARNING] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445
[ https://issues.apache.org/jira/browse/HDFS-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-12641: --- Summary: Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445 (was: Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11755) > Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445 > - > > Key: HDFS-12641 > URL: https://issues.apache.org/jira/browse/HDFS-12641 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.4 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > > Our internal testing caught a regression in HDFS-11445 when we cherry-picked > the commit into CDH. Basically, it produces bogus missing file warnings. > Further analysis revealed that the regression is actually fixed by HDFS-11755. > Because of the order in which commits were merged in branch-2.8 ~ trunk (HDFS-11755 was > committed before HDFS-11445), the regression never actually surfaced for > Hadoop 2.8/3.0.0-(alpha/beta) users. Since branch-2.7 has HDFS-11445 but no > HDFS-11755, I suspect the regression is more visible for Hadoop 2.7.4. > I am filing this jira to raise awareness, rather than simply to backport > HDFS-11755 into branch-2.7. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12630) Rolling restart can create inconsistency between blockMap and corrupt replicas map
[ https://issues.apache.org/jira/browse/HDFS-12630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200965#comment-16200965 ] Andre Araujo commented on HDFS-12630: - Thanks, [~shahrs87]. HDFS-11797 and HDFS-11445 appear to be similar but it doesn't seem they are exactly the same case as this instance. The cluster where we found this issue is running a Cloudera CDH 5.12.1 (version hadoop-2.6.0-cdh5.12.1), which [already has the fix for HDFS-11445|http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.12.1.CHANGES.txt]. I tried to reproduce the issue by following the steps in HDFS-11445 but was not able to reproduce it. > Rolling restart can create inconsistency between blockMap and corrupt > replicas map > -- > > Key: HDFS-12630 > URL: https://issues.apache.org/jira/browse/HDFS-12630 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Andre Araujo > > After a NN rolling restart several HDFS files started showing block problems. > Running FSCK for one of the files or for the directory that contained it > would complete with a FAILED message but without any details of the failure. > The NameNode log showed the following: > {code} > 2017-10-10 16:58:32,147 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: > FSCK started by hdfs (auth:KERBEROS_SSL) from /10.92.128.4 for path > /user/prod/data/file_20171010092201.csv at Tue Oct 10 16:58:32 PDT 2017 > 2017-10-10 16:58:32,147 WARN > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent > number of corrupt replicas for blk_1941920008_1133195379 blockMap has 1 but > corrupt replicas map has 2 > 2017-10-10 16:58:32,147 WARN org.apache.hadoop.hdfs.server.namenode.NameNode: > Fsck on path '/user/prod/data/file_20171010092201.csv' FAILED > java.lang.ArrayIndexOutOfBoundsException > {code} > After triggering a full block report for all the DNs the problem went away. 
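The log line quoted in the description ("blockMap has 1 but corrupt replicas map has 2") shows two bookkeeping structures disagreeing; sizing an array from one count while filling it from the other is exactly how fsck can hit an ArrayIndexOutOfBoundsException. A hypothetical illustration of the mismatch and a defensive clamp (not the actual BlockManager code):

```java
// Illustrative only: shows how disagreeing corrupt-replica counts can
// overflow an array sized from the smaller count, and a defensive clamp.
public class CorruptCountSketch {

  public static String[] corruptNodes(int blockMapCount, String[] corruptMapNodes) {
    String[] out = new String[blockMapCount];
    // Clamp to the array size; blindly copying corruptMapNodes.length
    // entries would throw ArrayIndexOutOfBoundsException when the two
    // structures disagree, as in the quoted log message.
    int n = Math.min(blockMapCount, corruptMapNodes.length);
    for (int i = 0; i < n; i++) {
      out[i] = corruptMapNodes[i];
    }
    return out;
  }
}
```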

[jira] [Updated] (HDFS-12620) Backporting HDFS-10467 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12620: --- Attachment: HDFS-12620-branch-2.004.patch > Backporting HDFS-10467 to branch-2 > -- > > Key: HDFS-12620 > URL: https://issues.apache.org/jira/browse/HDFS-12620 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Attachments: HDFS-10467-branch-2.001.patch, > HDFS-10467-branch-2.002.patch, HDFS-10467-branch-2.003.patch, > HDFS-10467-branch-2.patch, HDFS-12620-branch-2.000.patch, > HDFS-12620-branch-2.004.patch > > > When backporting HDFS-10467, there are a few things that changed: > * {{bin\hdfs}} > * {{ClientProtocol}} > * Java 7 not supporting referencing functions > * {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is > {{org.mortbay.util.ajax.JSON}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11755) Underconstruction blocks can be considered missing
[ https://issues.apache.org/jira/browse/HDFS-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200946#comment-16200946 ] Wei-Chiu Chuang commented on HDFS-11755: As discussed in HDFS-11445, a regression caused by HDFS-11445 is fixed by HDFS-11755. I'd like to backport HDFS-11755 into branch-2.7 as a result. > Underconstruction blocks can be considered missing > -- > > Key: HDFS-11755 > URL: https://issues.apache.org/jira/browse/HDFS-11755 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha2, 2.8.1 >Reporter: Nathan Roberts >Assignee: Nathan Roberts > Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2 > > Attachments: HDFS-11755-branch-2.002.patch, > HDFS-11755-branch-2.8.002.patch, HDFS-11755.001.patch, HDFS-11755.002.patch > > > Following sequence of events can lead to a block underconstruction being > considered missing. > - pipeline of 3 DNs, DN1->DN2->DN3 > - DN3 has a failing disk so some updates take a long time > - Client writes entire block and is waiting for final ack > - DN1, DN2 and DN3 have all received the block > - DN1 is waiting for ACK from DN2 who is waiting for ACK from DN3 > - DN3 is having trouble finalizing the block due to the failing drive. It > does eventually succeed but it is VERY slow at doing so. > - DN2 times out waiting for DN3 and tears down its pieces of the pipeline, so > DN1 notices and does the same. Neither DN1 nor DN2 finalized the block. > - DN3 finally sends an IBR to the NN indicating the block has been received. > - Drive containing the block on DN3 fails enough that the DN takes it offline > and notifies NN of failed volume > - NN removes DN3's replica from the triplets and then declares the block > missing because there are no other replicas > Seems like we shouldn't consider uncompleted blocks for replication. 
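The closing observation — that uncompleted blocks shouldn't be considered for replication — can be sketched as a guard on block state (the enum and method names here are illustrative, not the committed change):

```java
// Illustrative guard: a block with no live replicas is only reported as
// missing once it has actually reached the COMPLETE state.
public class MissingBlockSketch {

  public enum BlockState { UNDER_CONSTRUCTION, COMMITTED, COMPLETE }

  public static boolean isMissing(BlockState state, int liveReplicas) {
    if (state != BlockState.COMPLETE) {
      return false; // still being written: not eligible for replication checks
    }
    return liveReplicas == 0;
  }
}
```

Under this guard, the scenario in the description (DN3's replica removed while the block was never finalized) would not be declared missing.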
[jira] [Updated] (HDFS-12608) Ozone: Remove Warnings when building
[ https://issues.apache.org/jira/browse/HDFS-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12608: Summary: Ozone: Remove Warnings when building (was: Remove Warnings when building ozone) > Ozone: Remove Warnings when building > > > Key: HDFS-12608 > URL: https://issues.apache.org/jira/browse/HDFS-12608 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12608-HDFS-7240.00.patch > > > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-ozone:jar:3.1.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.apache.maven.plugins:maven-project-info-reports-plugin is missing. @ > org.apache.hadoop:hadoop-ozone:[unknown-version], > /Users/aengineer/codereview/hadoop-tools/hadoop-ozone/pom.xml, line 36, > column 15 > [WARNING] > [WARNING] Some problems were encountered while building the effective model > for org.apache.hadoop:hadoop-dist:jar:3.1.0-SNAPSHOT > [WARNING] 'build.plugins.plugin.version' for > org.apache.maven.plugins:maven-gpg-plugin is missing. @ line 133, column 15 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects. > [WARNING] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200943#comment-16200943 ] Bharat Viswanadham commented on HDFS-12553: --- Addressed review comments and checkstyle issues. > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch, HDFS-12553.11.patch > > > Add nameServiceId to QJournalProtocol. > This is used during a federated + HA setup to find the JournalNodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12553: -- Attachment: HDFS-12553.11.patch > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch, HDFS-12553.11.patch > > > Add nameServiceId to QJournalProtocol. > This is used during a federated + HA setup to find the JournalNodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol
[ https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200918#comment-16200918 ] Nandakumar commented on HDFS-12549: --- Uploaded patch on top of latest changes. > Ozone: OzoneClient: Support for REST protocol > - > > Key: HDFS-12549 > URL: https://issues.apache.org/jira/browse/HDFS-12549 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Attachments: HDFS-12549-HDFS-7240.000.patch, > HDFS-12549-HDFS-7240.001.patch > > > Support for REST protocol in OzoneClient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol
[ https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12549: -- Attachment: HDFS-12549-HDFS-7240.001.patch > Ozone: OzoneClient: Support for REST protocol > - > > Key: HDFS-12549 > URL: https://issues.apache.org/jira/browse/HDFS-12549 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Attachments: HDFS-12549-HDFS-7240.000.patch, > HDFS-12549-HDFS-7240.001.patch > > > Support for REST protocol in OzoneClient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200897#comment-16200897 ] Hudson commented on HDFS-12542: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13074 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13074/]) HDFS-12542. Update javadoc and documentation for listStatus. Contributed (arp: rev 8acdf5c2742c081f3e0e96e13eb940a39964a58f) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java > Update javadoc and documentation for listStatus > > > Key: HDFS-12542 > URL: https://issues.apache.org/jira/browse/HDFS-12542 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12542.01.patch, HDFS-12542.02.patch, > HDFS-12542.03.patch > > > Follow up jira to update javadoc and documentation for listStatus. > [HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12627) Fix typo in DFSAdmin command output
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200896#comment-16200896 ] Hudson commented on HDFS-12627: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13074 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13074/]) HDFS-12627. Fix typo in DFSAdmin command output. Contributed by Ajay (arp: rev bb0a742aac1dc03d08beff3cb4b7b04b8036fdcc) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSnapshotCommands.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml > Fix typo in DFSAdmin command output > --- > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Fix For: 3.1.0 > > Attachments: HDFS-12627.01.patch, HDFS-12627.02.patch, > HDFS-12627.03.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200885#comment-16200885 ] Anu Engineer commented on HDFS-12519: - [~nandakumar131] Thank you for the update. +1 on the v003 patch, pending Jenkins. > Ozone: Add a Lease Manager to SCM > - > > Key: HDFS-12519 > URL: https://issues.apache.org/jira/browse/HDFS-12519 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12519-HDFS-7240.000.patch, > HDFS-12519-HDFS-7240.001.patch, HDFS-12519-HDFS-7240.002.patch, > HDFS-12519-HDFS-7240.003.patch > > > Many objects, including Containers and pipelines, can time out during the > creation process. We need a way to track these timeouts. This Lease Manager > allows SCM to hold a lease on these objects and lets SCM time out while > waiting for the creation of these objects. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200875#comment-16200875 ] Nandakumar commented on HDFS-12519: --- Thanks [~vagarychen] & [~anu] for the review. Comments are addressed in patch v003. > Ozone: Add a Lease Manager to SCM > - > > Key: HDFS-12519 > URL: https://issues.apache.org/jira/browse/HDFS-12519 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12519-HDFS-7240.000.patch, > HDFS-12519-HDFS-7240.001.patch, HDFS-12519-HDFS-7240.002.patch, > HDFS-12519-HDFS-7240.003.patch > > > Many objects, including Containers and pipelines, can time out during the > creation process. We need a way to track these timeouts. This Lease Manager > allows SCM to hold a lease on these objects and lets SCM time out while > waiting for the creation of these objects. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12519: -- Attachment: HDFS-12519-HDFS-7240.003.patch > Ozone: Add a Lease Manager to SCM > - > > Key: HDFS-12519 > URL: https://issues.apache.org/jira/browse/HDFS-12519 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12519-HDFS-7240.000.patch, > HDFS-12519-HDFS-7240.001.patch, HDFS-12519-HDFS-7240.002.patch, > HDFS-12519-HDFS-7240.003.patch > > > Many objects, including Containers and pipelines, can time out during the > creation process. We need a way to track these timeouts. This Lease Manager > allows SCM to hold a lease on these objects and lets SCM time out while > waiting for the creation of these objects. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
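The timeout-tracking idea described in the HDFS-12519 summary above can be sketched as a minimal lease manager. This is an illustrative simplification, not the actual SCM implementation from the patches; the class and method names here are invented for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a lease manager: each tracked object gets a lease
// with a deadline; an expired lease signals that creation timed out.
class LeaseManagerSketch<T> {
    private final long leaseTimeoutMs;
    private final Map<T, Long> deadlines = new ConcurrentHashMap<>();

    LeaseManagerSketch(long leaseTimeoutMs) {
        this.leaseTimeoutMs = leaseTimeoutMs;
    }

    // Acquire a lease on an object, recording when it will expire.
    void acquire(T resource, long nowMs) {
        deadlines.put(resource, nowMs + leaseTimeoutMs);
    }

    // Release the lease once the object is fully created.
    void release(T resource) {
        deadlines.remove(resource);
    }

    // True if the object still holds a lease that has passed its deadline.
    boolean hasExpired(T resource, long nowMs) {
        Long deadline = deadlines.get(resource);
        return deadline != null && nowMs > deadline;
    }
}
```

In the real system a background thread would periodically scan for expired leases and fire timeout callbacks; the sketch takes the current time as a parameter so the expiry logic is easy to follow and test.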
[jira] [Commented] (HDFS-12219) Javadoc for FSNamesystem#getMaxObjects is incorrect
[ https://issues.apache.org/jira/browse/HDFS-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200868#comment-16200868 ] Hanisha Koneru commented on HDFS-12219: --- Thanks for the fix [~xkrogen]. LGTM. +1 (non-binding). > Javadoc for FSNamesystem#getMaxObjects is incorrect > --- > > Key: HDFS-12219 > URL: https://issues.apache.org/jira/browse/HDFS-12219 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Trivial > Attachments: HDFS-12219.000.patch > > > The Javadoc states that this represents the total number of objects in the > system, but it really represents the maximum allowed number of objects (as > correctly stated on the Javadoc for {{FSNamesystemMBean#getMaxObjects()}}). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
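The distinction the HDFS-12219 report draws matters because the two quantities differ in code: the getter returns a configured cap, not a live count. A simplified sketch of the corrected contract (a hypothetical class for illustration; the real method lives in {{FSNamesystem}}):

```java
// Simplified sketch contrasting the two quantities the Javadoc conflated.
class NamespaceLimitsSketch {
    private final long maxObjects;   // configured cap on objects
    private long currentObjects;     // live count of objects in the namespace

    NamespaceLimitsSketch(long maxObjects) {
        this.maxObjects = maxObjects;
    }

    /** @return the maximum allowed number of objects, not the current total. */
    long getMaxObjects() {
        return maxObjects;
    }

    /** @return the current total number of objects in the namespace. */
    long getTotalObjects() {
        return currentObjects;
    }

    void addObject() {
        currentObjects++;
    }
}
```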
[jira] [Commented] (HDFS-12212) Options.Rename.TO_TRASH is considered even when Options.Rename.NONE is specified
[ https://issues.apache.org/jira/browse/HDFS-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200859#comment-16200859 ] Hanisha Koneru commented on HDFS-12212: --- Thanks for the fix [~vinayrpet]. LGTM. +1 (non-binding). > Options.Rename.TO_TRASH is considered even when Options.Rename.NONE is > specified > > > Key: HDFS-12212 > URL: https://issues.apache.org/jira/browse/HDFS-12212 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Attachments: HDFS-12212-01.patch > > > HDFS-8312 introduced {{Options.Rename.TO_TRASH}} to differentiate a > move to trash from other renames for permission checks. > Even when Options.Rename.NONE is passed, TO_TRASH is considered for the rename, > and the wrong permissions are checked. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
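The intended behavior can be illustrated with a simplified model (the enum and helper below are invented for this sketch; the real flags are {{Options.Rename}} in hadoop-common): TO_TRASH semantics should apply only when that flag is explicitly among the passed options, so passing NONE must not trigger trash-related permission checks.

```java
import java.util.Arrays;

// Simplified model of rename option flags: TO_TRASH handling should be
// applied only when TO_TRASH is explicitly among the passed options.
enum RenameFlag { NONE, OVERWRITE, TO_TRASH }

class RenameOptionsSketch {
    // Correct check: scan the options actually passed instead of
    // assuming TO_TRASH whenever a trash path is involved.
    static boolean isMoveToTrash(RenameFlag... options) {
        return Arrays.asList(options).contains(RenameFlag.TO_TRASH);
    }
}
```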
[jira] [Created] (HDFS-12640) libhdfs++: automatic CI tests are getting stuck in test_libhdfs_mini_stress_hdfspp_test_shim_static
James Clampffer created HDFS-12640: -- Summary: libhdfs++: automatic CI tests are getting stuck in test_libhdfs_mini_stress_hdfspp_test_shim_static Key: HDFS-12640 URL: https://issues.apache.org/jira/browse/HDFS-12640 Project: Hadoop HDFS Issue Type: Sub-task Reporter: James Clampffer Assignee: James Clampffer All of the automated tests seem to get stuck, or at least stop generating useful output, in test_libhdfs_mini_stress_hdfspp_test_shim_static. Not able to reproduce the issue locally in docker. Right now this is blocking a few patches, and not having those patches committed is slowing down work on other parts of the library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12200) Optimize CachedDNSToSwitchMapping to avoid 100% cpu utilization
[ https://issues.apache.org/jira/browse/HDFS-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200851#comment-16200851 ] Hanisha Koneru edited comment on HDFS-12200 at 10/11/17 7:57 PM: - Hi [~yangjiandan] If I understand correctly, you want to add an option to disable rack lookup for uncached hosts, right? This patch would optimize rack lookup for an unusual architecture. Generally, HDFS and YARN are deployed on the same machines. As such, I think the use case might be very limited. Also, can you please update the Jira title to reflect that this adds an option to disable rack lookup for uncached hosts? Currently, when a DN registers, the DNS to Switch mapping is resolved during the registration process itself. With this change and with resolve-non-cached-host set to false, the rack resolution for the new DN will be skipped during registration. This might cause the new DN's rack resolution to be incorrectly cached as the default in the following 2 cases: - a new DN is added, the rack mapping script is updated, and the DN registers before refreshNodes is called: The rack will be resolved to the default during DN registration. And since it has already been resolved, it would not be updated during refreshNodes. - a new DN is added, refreshNodes is called, and then the rack mapping script is updated: In this case too, the mapping for the new DN would be updated with the default instead of the correct mapping. was (Author: hanishakoneru): Hi [~yangjiandan]] If I understand correctly, you want to add an option to disable rack lookup for uncached hosts, right? This patch would optimize rack lookup for an unusual architecture. Generally HDFS and YARN are deployed on the same machines. As such, I think the use case might be very limited. Also, can you please update the Jira title to reflect that this is for adding an option to disable rack lookup for uncached hosts. 
Currently, when a DN registers, the DNS to Switch mapping is resolved during the registration process itself. With this change and with resolve-non-cached-host set to false, the rack resolution for new DN will be skipped during registration. This might cause the rack new DN's rack resolution to be incorrectly cached as default in the following 2 cases: - a new DN is added, rack mapping script is updated, and the DN registers before refreshNodes is called: The rack will be resolved to default during DN registration. And since it has already been resolved, it would not be updated during refreshNodes. - a new DN is added, refreshNodes is called and then the rack mapping script is updated: In this case too, the mapping for the new DN would be updated with default instead of the correct mapping. > Optimize CachedDNSToSwitchMapping to avoid 100% cpu utilization > --- > > Key: HDFS-12200 > URL: https://issues.apache.org/jira/browse/HDFS-12200 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Jiandan Yang >Assignee: Jiandan Yang > Attachments: HDFS-12200-001.patch, HDFS-12200-002.patch, > HDFS-12200-003.patch, cpu_ utilization.png, nn_thread_num.png > > > 1. Background: > Our Hadoop cluster uses disaggregated storage and compute: HDFS is deployed on > 600+ machines, and YARN is deployed on a separate machine pool. > We found that NameNode CPU utilization sometimes reaches 90% or even 100%. > Most seriously, a sustained period of 100% CPU utilization causes JournalNode write > timeouts, eventually causing the NameNode to hang. The > cause is that offline tasks running on a few hundred servers access HDFS at the > same time; the NameNode resolves the rack of each client machine, spawning several hundred > to two thousand sub-processes.
> {code:java} > "process reaper"#10864 daemon prio=10 os_prio=0 tid=0x7fe270a31800 > nid=0x38d93 runnable [0x7fcdc36fc000] >java.lang.Thread.State: RUNNABLE > at java.lang.UNIXProcess.waitForProcessExit(Native Method) > at java.lang.UNIXProcess.lambda$initStreams$4(UNIXProcess.java:301) > at java.lang.UNIXProcess$$Lambda$7/1447689627.run(Unknown Source) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:834) > {code} > Our configuration is as follows: > {code:java} > net.topology.node.switch.mapping.impl = ScriptBasedMapping, > net.topology.script.file.name = 'a python script' > {code} > 2. Optimization > To solve these two problems, we optimized > CachedDNSToSwitchMapping: > (1) Added the DataNode IP list to the file configured by dfs.hosts. When the > NameNode starts it preloads
[jira] [Commented] (HDFS-12200) Optimize CachedDNSToSwitchMapping to avoid 100% cpu utilization
[ https://issues.apache.org/jira/browse/HDFS-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200851#comment-16200851 ] Hanisha Koneru commented on HDFS-12200: --- Hi [~yangjiandan] If I understand correctly, you want to add an option to disable rack lookup for uncached hosts, right? This patch would optimize rack lookup for an unusual architecture. Generally, HDFS and YARN are deployed on the same machines. As such, I think the use case might be very limited. Also, can you please update the Jira title to reflect that this adds an option to disable rack lookup for uncached hosts? Currently, when a DN registers, the DNS to Switch mapping is resolved during the registration process itself. With this change and with resolve-non-cached-host set to false, the rack resolution for the new DN will be skipped during registration. This might cause the new DN's rack resolution to be incorrectly cached as the default in the following 2 cases: - a new DN is added, the rack mapping script is updated, and the DN registers before refreshNodes is called: The rack will be resolved to the default during DN registration. And since it has already been resolved, it would not be updated during refreshNodes. - a new DN is added, refreshNodes is called, and then the rack mapping script is updated: In this case too, the mapping for the new DN would be updated with the default instead of the correct mapping. > Optimize CachedDNSToSwitchMapping to avoid 100% cpu utilization > --- > > Key: HDFS-12200 > URL: https://issues.apache.org/jira/browse/HDFS-12200 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Jiandan Yang >Assignee: Jiandan Yang > Attachments: HDFS-12200-001.patch, HDFS-12200-002.patch, > HDFS-12200-003.patch, cpu_ utilization.png, nn_thread_num.png > > > 1. Background: > Our Hadoop cluster uses disaggregated storage and compute: HDFS is deployed on > 600+ machines, and YARN is deployed on a separate machine pool. 
> We found that NameNode CPU utilization sometimes reaches 90% or even 100%. > Most seriously, a sustained period of 100% CPU utilization causes JournalNode write > timeouts, eventually causing the NameNode to hang. The > cause is that offline tasks running on a few hundred servers access HDFS at the > same time; the NameNode resolves the rack of each client machine, spawning several hundred > to two thousand sub-processes. > {code:java} > "process reaper"#10864 daemon prio=10 os_prio=0 tid=0x7fe270a31800 > nid=0x38d93 runnable [0x7fcdc36fc000] >java.lang.Thread.State: RUNNABLE > at java.lang.UNIXProcess.waitForProcessExit(Native Method) > at java.lang.UNIXProcess.lambda$initStreams$4(UNIXProcess.java:301) > at java.lang.UNIXProcess$$Lambda$7/1447689627.run(Unknown Source) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:834) > {code} > Our configuration is as follows: > {code:java} > net.topology.node.switch.mapping.impl = ScriptBasedMapping, > net.topology.script.file.name = 'a python script' > {code} > 2. Optimization > To solve these two problems, we optimized > CachedDNSToSwitchMapping: > (1) Added the DataNode IP list to the file configured by dfs.hosts. 
When the > NameNode starts, it preloads DataNode rack information into the cache, fetching a batch of > host racks per script invocation (controlled by net.topology.script.number, > default value 100). > (2) Step (1) ensures the cache holds every DataNode's rack, so on a cache miss the > host must be a client machine, and /default-rack is returned directly. > (3) Each time new DataNodes are added, their IP > addresses must be added to the file specified by dfs.hosts, followed by running bin/hdfs > dfsadmin -refreshNodes, which puts the newly added DataNodes' racks into the cache. > (4) Added a new configuration item, dfs.namenode.topology.resolve-non-cache-host: > setting it to false enables the behavior above, while the default value of true > disables it, preserving compatibility. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
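The optimization steps (1) through (4) above amount to a rack cache that is preloaded with DataNode racks at startup and falls back to /default-rack on a miss instead of forking the resolution script. A minimal sketch, with invented names (the real class is {{CachedDNSToSwitchMapping}} in hadoop-common, and the script here is modeled as a plain function):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch: rack cache preloaded at startup; on a miss, either call the
// (expensive) topology script or, when resolveNonCachedHost is false,
// return /default-rack immediately, since the host must be a client machine.
class RackCacheSketch {
    static final String DEFAULT_RACK = "/default-rack";
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> script; // models the topology script
    private final boolean resolveNonCachedHost;

    RackCacheSketch(Function<String, String> script, boolean resolveNonCachedHost) {
        this.script = script;
        this.resolveNonCachedHost = resolveNonCachedHost;
    }

    // Steps (1)/(3): preload or refresh racks for the known DataNodes.
    void preload(List<String> datanodeHosts) {
        for (String host : datanodeHosts) {
            cache.put(host, script.apply(host));
        }
    }

    // Step (2): a cache miss means a client host; skip the script if configured.
    String resolve(String host) {
        String rack = cache.get(host);
        if (rack != null) {
            return rack;
        }
        return resolveNonCachedHost ? script.apply(host) : DEFAULT_RACK;
    }
}
```

With `resolveNonCachedHost` set to false (the reporter's new mode), client lookups never fork a sub-process; with it set to true, the sketch behaves like the existing resolve-on-miss cache.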
[jira] [Commented] (HDFS-12639) BPOfferService lock may stall all service actors
[ https://issues.apache.org/jira/browse/HDFS-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200832#comment-16200832 ] Hanisha Koneru commented on HDFS-12639: --- Hi [~daryn], are you working on this Jira? If not, I would like to take it up. > BPOfferService lock may stall all service actors > > > Key: HDFS-12639 > URL: https://issues.apache.org/jira/browse/HDFS-12639 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.8.0 >Reporter: Daryn Sharp > > {{BPOfferService}} manages {{BPServiceActor}} instances for the active and > standby. It uses a RW lock primarily to protect registration information > while determining the active/standby from heartbeats. > Unfortunately, the write lock is held during command processing. If an actor > is experiencing high latency processing commands, the other actor will > neither be able to register (blocked in createRegistration, setNamespaceInfo, > verifyAndSetNamespaceInfo) nor process heartbeats (blocked in > updateActorStatesFromHeartbeat). > The worst-case scenario for processing commands while holding the lock is > re-registration. The actor will loop, catching and logging exceptions, > leaving the other actor blocked for a non-deterministic (possibly infinite) > amount of time. > The lock must not be held during command processing. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12639) BPOfferService lock may stall all service actors
[ https://issues.apache.org/jira/browse/HDFS-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reassigned HDFS-12639: - Assignee: (was: Hanisha Koneru) > BPOfferService lock may stall all service actors > > > Key: HDFS-12639 > URL: https://issues.apache.org/jira/browse/HDFS-12639 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.8.0 >Reporter: Daryn Sharp > > {{BPOfferService}} manages {{BPServiceActor}} instances for the active and > standby. It uses a RW lock primarily to protect registration information > while determining the active/standby from heartbeats. > Unfortunately, the write lock is held during command processing. If an actor > is experiencing high latency processing commands, the other actor will > neither be able to register (blocked in createRegistration, setNamespaceInfo, > verifyAndSetNamespaceInfo) nor process heartbeats (blocked in > updateActorStatesFromHeartbeat). > The worst-case scenario for processing commands while holding the lock is > re-registration. The actor will loop, catching and logging exceptions, > leaving the other actor blocked for a non-deterministic (possibly infinite) > amount of time. > The lock must not be held during command processing. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12639) BPOfferService lock may stall all service actors
[ https://issues.apache.org/jira/browse/HDFS-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reassigned HDFS-12639: - Assignee: Hanisha Koneru > BPOfferService lock may stall all service actors > > > Key: HDFS-12639 > URL: https://issues.apache.org/jira/browse/HDFS-12639 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.8.0 >Reporter: Daryn Sharp >Assignee: Hanisha Koneru > > {{BPOfferService}} manages {{BPServiceActor}} instances for the active and > standby. It uses a RW lock primarily to protect registration information > while determining the active/standby from heartbeats. > Unfortunately, the write lock is held during command processing. If an actor > is experiencing high latency processing commands, the other actor will > neither be able to register (blocked in createRegistration, setNamespaceInfo, > verifyAndSetNamespaceInfo) nor process heartbeats (blocked in > updateActorStatesFromHeartbeat). > The worst-case scenario for processing commands while holding the lock is > re-registration. The actor will loop, catching and logging exceptions, > leaving the other actor blocked for a non-deterministic (possibly infinite) > amount of time. > The lock must not be held during command processing. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
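The remedy the HDFS-12639 report points to, not holding the lock during command processing, can be sketched generically: take the lock only to update shared state and to snapshot the pending commands, then run the (possibly slow) commands after releasing it. The class and method names below are invented for illustration and are not the actual BPOfferService code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: keep slow command processing outside the shared lock so one
// actor's latency cannot block the other actor's registration/heartbeats.
class OfferServiceSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<Runnable> pendingCommands = new ArrayList<>();

    // Enqueueing touches shared state, so it briefly takes the write lock.
    void enqueue(Runnable command) {
        lock.writeLock().lock();
        try {
            pendingCommands.add(command);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Snapshot the queue under the lock, then process after releasing it.
    int processCommands() {
        List<Runnable> snapshot;
        lock.writeLock().lock();
        try {
            snapshot = new ArrayList<>(pendingCommands);
            pendingCommands.clear();
        } finally {
            lock.writeLock().unlock();
        }
        int processed = 0;
        for (Runnable cmd : snapshot) {
            cmd.run();            // slow work happens without the lock held
            processed++;
        }
        return processed;
    }
}
```

The key design choice is that the critical section only copies references; even a command that loops on retries (the re-registration case above) then stalls only its own actor, not everyone waiting on the lock.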
[jira] [Commented] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200830#comment-16200830 ] Ajay Kumar commented on HDFS-12542: --- [~arpitagarwal] thanks for review and commit. > Update javadoc and documentation for listStatus > > > Key: HDFS-12542 > URL: https://issues.apache.org/jira/browse/HDFS-12542 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12542.01.patch, HDFS-12542.02.patch, > HDFS-12542.03.patch > > > Follow up jira to update javadoc and documentation for listStatus. > [HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12627) Fix typo in DFSAdmin command output
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200829#comment-16200829 ] Ajay Kumar commented on HDFS-12627: --- [~arpitagarwal] thanks for review and commit. > Fix typo in DFSAdmin command output > --- > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Fix For: 3.1.0 > > Attachments: HDFS-12627.01.patch, HDFS-12627.02.patch, > HDFS-12627.03.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12599) Remove Mockito dependency from DataNodeTestUtils
[ https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200826#comment-16200826 ] Arpit Agarwal commented on HDFS-12599: -- No objections from me. > Remove Mockito dependency from DataNodeTestUtils > > > Key: HDFS-12599 > URL: https://issues.apache.org/jira/browse/HDFS-12599 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-beta1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Fix For: 3.1.0 > > Attachments: HDFS-12599.v1.patch, HDFS-12599.v1.patch, > HDFS-12599.v1.patch > > > HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}} which > brought dependency on mockito back into DataNodeTestUtils > Downstream, this resulted in: > {code} > java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer > at > org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564) > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607) > at > org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874) > at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:769) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12542: - Component/s: documentation > Update javadoc and documentation for listStatus > > > Key: HDFS-12542 > URL: https://issues.apache.org/jira/browse/HDFS-12542 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12542.01.patch, HDFS-12542.02.patch, > HDFS-12542.03.patch > > > Follow up jira to update javadoc and documentation for listStatus. > [HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12542: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) I've committed this to trunk. Thanks for the contribution [~ajayydv]. The UT failures are unrelated. > Update javadoc and documentation for listStatus > > > Key: HDFS-12542 > URL: https://issues.apache.org/jira/browse/HDFS-12542 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12542.01.patch, HDFS-12542.02.patch, > HDFS-12542.03.patch > > > Follow up jira to update javadoc and documentation for listStatus. > [HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12627) Fix typo in DFSAdmin command output
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12627: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) I've committed this. Thanks for the contribution [~ajayydv]. The remaining UT failures are unrelated. > Fix typo in DFSAdmin command output > --- > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Fix For: 3.1.0 > > Attachments: HDFS-12627.01.patch, HDFS-12627.02.patch, > HDFS-12627.03.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200813#comment-16200813 ] Hadoop QA commented on HDFS-12542: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 30s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} hadoop-hdfs in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs-httpfs generated 0 new + 51 unchanged - 1 fixed = 51 total (was 52) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 41s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}165m 58s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker |
[jira] [Commented] (HDFS-12599) Remove Mockito dependency from DataNodeTestUtils
[ https://issues.apache.org/jira/browse/HDFS-12599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200815#comment-16200815 ] Sean Busbey commented on HDFS-12599:
--

any objections to including this in branch-3.0?

> Remove Mockito dependency from DataNodeTestUtils
>
> Key: HDFS-12599
> URL: https://issues.apache.org/jira/browse/HDFS-12599
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0-beta1
> Reporter: Ted Yu
> Assignee: Ted Yu
> Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-12599.v1.patch, HDFS-12599.v1.patch, HDFS-12599.v1.patch
>
> HDFS-11164 introduced {{DataNodeTestUtils.mockDatanodeBlkPinning}}, which brought a dependency on mockito back into DataNodeTestUtils.
> Downstream, this resulted in:
> {code}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
> at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
> at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:769)
> at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:661)
> at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1075)
> at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:953)
> {code}

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200811#comment-16200811 ] Arpit Agarwal commented on HDFS-12627:
--

+1, I will commit this shortly.

> Typo in DFSAdmin
>
> Key: HDFS-12627
> URL: https://issues.apache.org/jira/browse/HDFS-12627
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Priority: Trivial
> Attachments: HDFS-12627.01.patch, HDFS-12627.02.patch, HDFS-12627.03.patch
>
> Typo in DFSAdmin:
> System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded");
[jira] [Updated] (HDFS-12627) Fix typo in DFSAdmin command output
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12627:
--

Summary: Fix typo in DFSAdmin command output (was: Typo in DFSAdmin)

> Fix typo in DFSAdmin command output
> ---
>
> Key: HDFS-12627
> URL: https://issues.apache.org/jira/browse/HDFS-12627
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Priority: Trivial
> Attachments: HDFS-12627.01.patch, HDFS-12627.02.patch, HDFS-12627.03.patch
>
> Typo in DFSAdmin:
> System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded");
[jira] [Created] (HDFS-12639) BPOfferService lock may stall all service actors
Daryn Sharp created HDFS-12639:
--

Summary: BPOfferService lock may stall all service actors
Key: HDFS-12639
URL: https://issues.apache.org/jira/browse/HDFS-12639
Project: Hadoop HDFS
Issue Type: Bug
Components: datanode
Affects Versions: 2.8.0
Reporter: Daryn Sharp

{{BPOfferService}} manages the {{BPServiceActor}} instances for the active and standby namenodes. It uses a RW lock primarily to protect registration information while determining the active/standby from heartbeats. Unfortunately, the write lock is held during command processing. If one actor is experiencing high latency while processing commands, the other actor can neither register (blocked in createRegistration, setNamespaceInfo, verifyAndSetNamespaceInfo) nor process heartbeats (blocked in updateActorStatesFromHeartbeat).

The worst-case scenario for processing commands while holding the lock is re-registration. The actor will loop, catching and logging exceptions, leaving the other actor blocked for a non-deterministic (possibly infinite) amount of time. The lock must not be held during command processing.
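The fix the report calls for can be sketched as follows. This is a hypothetical illustration, not the actual BPOfferService code: the class and method names are invented, and the "command" is just a string. It shows the general pattern of holding the write lock only long enough to snapshot shared state, then doing the potentially slow command processing after releasing it, so the other actor's registration and heartbeat paths are never blocked behind it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of "do not hold the lock during command processing".
public class CommandProcessingSketch {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    final List<String> processed = new ArrayList<>();

    void processCommands(List<String> commands) {
        List<String> toProcess;
        lock.writeLock().lock();
        try {
            // Hold the lock only long enough to snapshot the shared state.
            toProcess = new ArrayList<>(commands);
        } finally {
            lock.writeLock().unlock();
        }
        // Slow, possibly looping work (e.g. re-registration retries) runs
        // outside the lock, so it cannot stall threads that need the lock
        // for registration or heartbeat processing.
        for (String cmd : toProcess) {
            processed.add(cmd);
        }
    }
}
```

The key design point is the two-phase shape: a short critical section that copies what the slow phase needs, then the slow phase itself with no lock held.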
[jira] [Commented] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200804#comment-16200804 ] Hadoop QA commented on HDFS-12627:
--

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 198 unchanged - 4 fixed = 198 total (was 202) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 2s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:3d04c00 |
| JIRA Issue | HDFS-12627 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891523/HDFS-12627.03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 66b64eab98d7 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ebb34c7 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21649/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results |