[ https://issues.apache.org/jira/browse/HDDS-1753?focusedWorklogId=300167&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-300167 ]
ASF GitHub Bot logged work on HDDS-1753:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Aug/19 08:57
            Start Date: 23/Aug/19 08:57
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on issue #1318: HDDS-1753. Datanode unable to find chunk while replication data using ratis.
URL: https://github.com/apache/hadoop/pull/1318#issuecomment-524234061

   :broken_heart: **-1 overall**

   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 359 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 850 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 606 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | -1 | mvninstall | 292 | hadoop-ozone in the patch failed. |
   | -1 | compile | 231 | hadoop-ozone in the patch failed. |
   | -1 | javac | 231 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 33 | hadoop-hdds: The patch generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) |
   | -0 | checkstyle | 35 | hadoop-ozone: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 652 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | -1 | findbugs | 356 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 290 | hadoop-hdds in the patch passed. |
   | -1 | unit | 324 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | | 5676 | |

   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1318 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0afc6ecbfc32 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bd7baea |
   | Default Java | 1.8.0_212 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/artifact/out/patch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/artifact/out/patch-compile-hadoop-ozone.txt |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/artifact/out/patch-compile-hadoop-ozone.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/artifact/out/diff-checkstyle-hadoop-hdds.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/artifact/out/diff-checkstyle-hadoop-ozone.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/artifact/out/patch-findbugs-hadoop-ozone.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/testReport/ |
   | Max. process+thread count | 1345 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service hadoop-ozone/integration-test U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1318/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

   This message was automatically generated.

Issue Time Tracking
-------------------

    Worklog Id:     (was: 300167)
    Time Spent: 3h 40m  (was: 3.5h)

> Datanode unable to find chunk while replication data using ratis.
> -----------------------------------------------------------------
>
>                 Key: HDDS-1753
>                 URL: https://issues.apache.org/jira/browse/HDDS-1753
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode
>    Affects Versions: 0.4.0
>            Reporter: Mukul Kumar Singh
>            Assignee: Shashikant Banerjee
>            Priority: Major
>              Labels: MiniOzoneChaosCluster, pull-request-available
>         Attachments: HDDS-1753.000.patch
>
>          Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> The leader datanode is unable to read a chunk while replicating data from the leader to a follower.
> Note that key deletion is happening concurrently with the replication.
> {code}
> 2019-07-02 19:39:22,604 INFO impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries.
> Reply:76a3eb0f-d7cd-477b-8973-db1014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#70:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 ERROR impl.ChunkManagerImpl (ChunkUtils.java:readData(161)) - Unable to find the chunk file. chunk info : ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048}
> 2019-07-02 19:39:22,605 INFO impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot (9770) already has the append entries (first index: 1)
> 2019-07-02 19:39:22,605 INFO impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. Reply:76a3eb0f-d7cd-477b-8973-db1014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#71:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 INFO keyvalue.KeyValueHandler (ContainerUtils.java:logAndReturnError(146)) - Operation: ReadChunk : Trace ID: 4216d461a4679e17:4216d461a4679e17:0:0 : Message: Unable to find the chunk file. chunk info ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048} : Result: UNABLE_TO_FIND_CHUNK
> 2019-07-02 19:39:22,605 INFO impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot (9770) already has the append entries (first index: 2)
> 2019-07-02 19:39:22,606 INFO impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. Reply:76a3eb0f-d7cd-477b-8973-db1014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#72:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 19:39:22.606 [pool-195-thread-19] ERROR DNAudit - user=null | ip=null | op=READ_CHUNK {blockData=conID: 3 locID: 102372189549953034 bcsId: 0} | ret=FAILURE
> java.lang.Exception: Unable to find the chunk file. chunk info ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048}
>         at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:320) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:346) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:476) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$getCachedStateMachineData$2(ContainerStateMachine.java:495) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache.get(LocalCache.java:3965) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764) ~[guava-11.0.2.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.getCachedStateMachineData(ContainerStateMachine.java:494) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$4(ContainerStateMachine.java:542) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) [?:1.8.0_171]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
> {code}
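The stack trace shows where the failure surfaces: the leader's ContainerStateMachine.readStateMachineData() fetches chunk bytes through a Guava LoadingCache (getCachedStateMachineData), whose loader dispatches a ReadChunk through HddsDispatcher down to ChunkUtils.readData(). If the concurrent key deletion unlinks the chunk file before the cache loader ever runs, the load fails with UNABLE_TO_FIND_CHUNK. Below is a minimal, self-contained Java sketch of that read/delete race. It is an illustration only, not the Ozone code; the class name, file layout, and cache configuration here are hypothetical.

{code}
// Sketch of the race implied by the stack trace above: a cache loader that
// reads chunk bytes from disk races with a deletion path that unlinks the file.
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChunkReadRaceSketch {

  // Cache of log index -> chunk bytes, analogous to getCachedStateMachineData().
  private final LoadingCache<Long, byte[]> stateMachineDataCache;
  private final Path chunkDir;

  ChunkReadRaceSketch(Path chunkDir) {
    this.chunkDir = chunkDir;
    this.stateMachineDataCache = CacheBuilder.newBuilder()
        .maximumSize(1024)
        .build(new CacheLoader<Long, byte[]>() {
          @Override
          public byte[] load(Long logIndex) throws IOException {
            // The loader reads the chunk file from disk. If the delete path has
            // already unlinked it, this is the "Unable to find the chunk file" case.
            Path chunkFile = chunkDir.resolve("chunk_" + logIndex);
            if (!Files.exists(chunkFile)) {
              throw new IOException("Unable to find the chunk file: " + chunkFile);
            }
            return Files.readAllBytes(chunkFile);
          }
        });
  }

  // Leader-side read used while shipping log entries to a follower,
  // analogous to readStateMachineData().
  byte[] readStateMachineData(long logIndex) throws Exception {
    return stateMachineDataCache.get(logIndex);
  }

  public static void main(String[] args) throws Exception {
    Path dir = Files.createTempDirectory("chunks");
    Path chunk = dir.resolve("chunk_9771");
    Files.write(chunk, new byte[2048]);

    ChunkReadRaceSketch sm = new ChunkReadRaceSketch(dir);

    // Key deletion runs while replication is still in flight.
    Thread deleter = new Thread(() -> {
      try {
        Files.deleteIfExists(chunk);
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    });
    deleter.start();
    deleter.join();

    // The file is gone and its bytes were never cached, so the read fails the
    // same way the DNAudit entry above does (here: ExecutionException(IOException)).
    sm.readStateMachineData(9771);
  }
}
{code}

One plausible direction for a fix (not necessarily what the attached patch does) is to populate the cache on the write path and retain entries until the corresponding log entries no longer need to be served to followers, so readStateMachineData() never has to re-read a file the deletion path may already have removed.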