[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs
[ https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=758345&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-758345 ]

ASF GitHub Bot logged work on HDFS-16538:
-
Author: ASF GitHub Bot
Created on: 19/Apr/22 08:32
Start Date: 19/Apr/22 08:32
Worklog Time Spent: 10m
Work Description: liubingxing commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1102290130

@tasanuma Thanks for the review and merge. I found another bug related to EC decoding in [HDFS-16538](https://issues.apache.org/jira/browse/HDFS-16538); please take a look. Thank you very much.

Issue Time Tracking
---

Worklog Id: (was: 758345)
Time Spent: 1h 20m (was: 1h 10m)

> EC decoding failed due to not enough valid inputs
> --
>
> Key: HDFS-16538
> URL: https://issues.apache.org/jira/browse/HDFS-16538
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: erasure-coding
> Reporter: qinyuren
> Assignee: qinyuren
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.4
>
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> We found this error when #StripeReader.readStripe() has more than one failed block read.
> We use the EC policy ec(6+3) in our cluster.
> {code:java}
> Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid inputs are provided, not recoverable
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
> 	at org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
> 	at org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
> 	at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
> 	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> 	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> {code}
>
> {code:java}
> while (!futures.isEmpty()) {
>   try {
>     StripingChunkReadResult r = StripedBlockUtil
>         .getNextCompletedStripedRead(service, futures, 0);
>     dfsStripedInputStream.updateReadStats(r.getReadStats());
>     DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
>         r, alignedStripe);
>     StripingChunk returnedChunk = alignedStripe.chunks[r.index];
>     Preconditions.checkNotNull(returnedChunk);
>     Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);
>     if (r.state == StripingChunkReadResult.SUCCESSFUL) {
>       returnedChunk.state = StripingChunk.FETCHED;
>       alignedStripe.fetchedChunksNum++;
>       updateState4SuccessRead(r);
>       if (alignedStripe.fetchedChunksNum == dataBlkNum) {
>         clearFutures();
>         break;
>       }
>     } else {
>       returnedChunk.state = StripingChunk.MISSING;
>       // close the corresponding reader
>       dfsStripedInputStream.closeReader(readerInfos[r.index]);
>       final int missing = alignedStripe.missingChunksNum;
>       alignedStripe.missingChunksNum++;
>       checkMissingBlocks();
>       readDataForDecoding();
>       readParityChunks(alignedStripe.missingChunksNum - missing);
>     }
> {code}
> This error can be triggered by #StatefulStripeReader.decode.
> The reason is that:
> # If more than one *data block* read fails, #readDataForDecoding will be called multiple times;
> # the *decodeInputs* array will then be initialized repeatedly;
> # and the *parity data* in the *decodeInputs* array that was previously filled by #readParityChunks will be set to null.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
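The three-step failure sequence quoted above boils down to an unconditional re-allocation wiping buffers that were already populated. As a minimal, self-contained sketch of that lifecycle — all names below are illustrative stand-ins, not the actual Hadoop classes and not the patch that was merged for this issue — the difference between the buggy and the guarded initialization looks like this:

{code:java}
import java.nio.ByteBuffer;

// Minimal model of the decodeInputs lifecycle described above.
// Hypothetical names; not the Hadoop source or the merged fix.
public class DecodeInputsSketch {
  static final int DATA_BLK_NUM = 6;    // ec(6+3): six data units
  static final int PARITY_BLK_NUM = 3;  // and three parity units
  private ByteBuffer[] decodeInputs;

  // Buggy shape: every failed data-block read re-allocates the array,
  // discarding parity buffers that an earlier readParityChunks-style
  // step had already filled in.
  void prepareDecodeInputsBuggy() {
    decodeInputs = new ByteBuffer[DATA_BLK_NUM + PARITY_BLK_NUM];
  }

  // Guarded shape: allocate once per stripe so previously fetched
  // parity buffers survive a second or third read failure.
  void prepareDecodeInputsGuarded() {
    if (decodeInputs == null) {
      decodeInputs = new ByteBuffer[DATA_BLK_NUM + PARITY_BLK_NUM];
    }
  }

  // Parity chunk i is stored in slot DATA_BLK_NUM + i.
  void fillParityChunk(int i, ByteBuffer buf) {
    decodeInputs[DATA_BLK_NUM + i] = buf;
  }

  public static void main(String[] args) {
    DecodeInputsSketch s = new DecodeInputsSketch();
    s.prepareDecodeInputsGuarded();                 // first failure
    s.fillParityChunk(0, ByteBuffer.allocate(64));  // parity fetched
    s.prepareDecodeInputsGuarded();                 // second failure
    System.out.println("parity preserved: "
        + (s.decodeInputs[DATA_BLK_NUM] != null));  // true with the guard
  }
}
{code}

Swap prepareDecodeInputsBuggy() into the second call and the parity slot is null again, which is exactly the state that makes the subsequent decode unrecoverable.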
[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs
[ https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=758274&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-758274 ]

ASF GitHub Bot logged work on HDFS-16538:
-
Author: ASF GitHub Bot
Created on: 19/Apr/22 04:37
Start Date: 19/Apr/22 04:37
Worklog Time Spent: 10m
Work Description: tasanuma merged PR #4167:
URL: https://github.com/apache/hadoop/pull/4167

Issue Time Tracking
---

Worklog Id: (was: 758274)
Time Spent: 1h 10m (was: 1h)

> EC decoding failed due to not enough valid inputs
> --
>
> Key: HDFS-16538
> URL: https://issues.apache.org/jira/browse/HDFS-16538
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: erasure-coding
> Reporter: qinyuren
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> We found this error when #StripeReader.readStripe() has more than one failed block read.
> We use the EC policy ec(6+3) in our cluster.
> {code:java}
> Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid inputs are provided, not recoverable
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
> 	at org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
> 	at org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
> 	at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
> 	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> 	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> {code}
>
> {code:java}
> while (!futures.isEmpty()) {
>   try {
>     StripingChunkReadResult r = StripedBlockUtil
>         .getNextCompletedStripedRead(service, futures, 0);
>     dfsStripedInputStream.updateReadStats(r.getReadStats());
>     DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
>         r, alignedStripe);
>     StripingChunk returnedChunk = alignedStripe.chunks[r.index];
>     Preconditions.checkNotNull(returnedChunk);
>     Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);
>     if (r.state == StripingChunkReadResult.SUCCESSFUL) {
>       returnedChunk.state = StripingChunk.FETCHED;
>       alignedStripe.fetchedChunksNum++;
>       updateState4SuccessRead(r);
>       if (alignedStripe.fetchedChunksNum == dataBlkNum) {
>         clearFutures();
>         break;
>       }
>     } else {
>       returnedChunk.state = StripingChunk.MISSING;
>       // close the corresponding reader
>       dfsStripedInputStream.closeReader(readerInfos[r.index]);
>       final int missing = alignedStripe.missingChunksNum;
>       alignedStripe.missingChunksNum++;
>       checkMissingBlocks();
>       readDataForDecoding();
>       readParityChunks(alignedStripe.missingChunksNum - missing);
>     }
> {code}
> This error can be triggered by #StatefulStripeReader.decode.
> The reason is that:
> # If more than one *data block* read fails, #readDataForDecoding will be called multiple times;
> # the *decodeInputs* array will then be initialized repeatedly;
> # and the *parity data* in the *decodeInputs* array that was previously filled by #readParityChunks will be set to null.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
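For context on why the decoder rejects that state: an RS(6,3) decode needs at least six non-null input buffers out of the nine slots. The standalone snippet below uses only the public rawcoder APIs that appear in the stack trace above; the cell size and the pattern of lost slots are invented for illustration. It fails with the same exception, because only four valid inputs remain once the parity slots have been nulled out:

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.CodecUtil;
import org.apache.hadoop.io.erasurecode.ErasureCodeConstants;
import org.apache.hadoop.io.erasurecode.ErasureCoderOptions;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;

public class NotEnoughValidInputsRepro {
  public static void main(String[] args) throws Exception {
    final int data = 6, parity = 3, cell = 1024;
    RawErasureDecoder decoder = CodecUtil.createRawDecoder(
        new Configuration(), ErasureCodeConstants.RS_CODEC_NAME,
        new ErasureCoderOptions(data, parity));

    // Nine input slots; null marks an unavailable block. Only four data
    // blocks are present, and the parity slots were wiped by the
    // re-initialization bug, so fewer than six valid inputs remain.
    ByteBuffer[] inputs = new ByteBuffer[data + parity];
    for (int i = 0; i < 4; i++) {
      inputs[i] = ByteBuffer.allocate(cell);
    }

    int[] erasedIndexes = {4, 5};
    ByteBuffer[] outputs = {
        ByteBuffer.allocate(cell), ByteBuffer.allocate(cell)};

    // Throws HadoopIllegalArgumentException:
    //   "No enough valid inputs are provided, not recoverable"
    decoder.decode(inputs, erasedIndexes, outputs);
  }
}
{code}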
[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs
[ https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=757118&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757118 ]

ASF GitHub Bot logged work on HDFS-16538:
-
Author: ASF GitHub Bot
Created on: 14/Apr/22 17:28
Start Date: 14/Apr/22 17:28
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1099446513

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 0s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 15m 57s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 27m 48s | | trunk passed |
| +1 :green_heart: | compile | 6m 37s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 6m 16s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 25s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 0s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 17s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 6m 40s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 54s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ ||
| +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 15s | | the patch passed |
| +1 :green_heart: | compile | 6m 46s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 6m 46s | | the patch passed |
| +1 :green_heart: | compile | 6m 9s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 6m 9s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 10s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 18s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 7s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 6m 26s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 7s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ ||
| +1 :green_heart: | unit | 2m 27s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 234m 43s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. |
| | | | 389m 33s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4167 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux a7b6b3da85bb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 22359a90c8e8cd1dce2291ba8b69ca0a25161872 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/3/testReport/ |
| Max. process+thread count | 3058 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/had
[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs
[ https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=756948&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756948 ]

ASF GitHub Bot logged work on HDFS-16538:
-
Author: ASF GitHub Bot
Created on: 14/Apr/22 12:32
Start Date: 14/Apr/22 12:32
Worklog Time Spent: 10m
Work Description: liubingxing commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1099137583

@tasanuma Please take a look at this.

Issue Time Tracking
---

Worklog Id: (was: 756948)
Time Spent: 50m (was: 40m)

> EC decoding failed due to not enough valid inputs
> --
>
> Key: HDFS-16538
> URL: https://issues.apache.org/jira/browse/HDFS-16538
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: qinyuren
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> We found this error when #StripeReader.readStripe() has more than one failed block read.
> We use the EC policy ec(6+3) in our cluster.
> {code:java}
> Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid inputs are provided, not recoverable
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
> 	at org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
> 	at org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
> 	at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
> 	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> 	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> {code}
>
> {code:java}
> while (!futures.isEmpty()) {
>   try {
>     StripingChunkReadResult r = StripedBlockUtil
>         .getNextCompletedStripedRead(service, futures, 0);
>     dfsStripedInputStream.updateReadStats(r.getReadStats());
>     DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
>         r, alignedStripe);
>     StripingChunk returnedChunk = alignedStripe.chunks[r.index];
>     Preconditions.checkNotNull(returnedChunk);
>     Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);
>     if (r.state == StripingChunkReadResult.SUCCESSFUL) {
>       returnedChunk.state = StripingChunk.FETCHED;
>       alignedStripe.fetchedChunksNum++;
>       updateState4SuccessRead(r);
>       if (alignedStripe.fetchedChunksNum == dataBlkNum) {
>         clearFutures();
>         break;
>       }
>     } else {
>       returnedChunk.state = StripingChunk.MISSING;
>       // close the corresponding reader
>       dfsStripedInputStream.closeReader(readerInfos[r.index]);
>       final int missing = alignedStripe.missingChunksNum;
>       alignedStripe.missingChunksNum++;
>       checkMissingBlocks();
>       readDataForDecoding();
>       readParityChunks(alignedStripe.missingChunksNum - missing);
>     }
> {code}
> This error can be triggered by #StatefulStripeReader.decode.
> The reason is that:
> # If more than one *data block* read fails, #readDataForDecoding will be called multiple times;
> # the *decodeInputs* array will then be initialized repeatedly;
> # and the *parity data* in the *decodeInputs* array that was previously filled by #readParityChunks will be set to null.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs
[ https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=756826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756826 ]

ASF GitHub Bot logged work on HDFS-16538:
-
Author: ASF GitHub Bot
Created on: 14/Apr/22 03:40
Start Date: 14/Apr/22 03:40
Worklog Time Spent: 10m
Work Description: liubingxing commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1098678251

I will add a UT later

Issue Time Tracking
---

Worklog Id: (was: 756826)
Time Spent: 40m (was: 0.5h)

> EC decoding failed due to not enough valid inputs
> --
>
> Key: HDFS-16538
> URL: https://issues.apache.org/jira/browse/HDFS-16538
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: qinyuren
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> We found this error when #StripeReader.readStripe() has more than one failed block read.
> We use the EC policy ec(6+3) in our cluster.
> {code:java}
> Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid inputs are provided, not recoverable
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
> 	at org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
> 	at org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
> 	at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
> 	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> 	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> {code}
>
> {code:java}
> while (!futures.isEmpty()) {
>   try {
>     StripingChunkReadResult r = StripedBlockUtil
>         .getNextCompletedStripedRead(service, futures, 0);
>     dfsStripedInputStream.updateReadStats(r.getReadStats());
>     DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
>         r, alignedStripe);
>     StripingChunk returnedChunk = alignedStripe.chunks[r.index];
>     Preconditions.checkNotNull(returnedChunk);
>     Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);
>     if (r.state == StripingChunkReadResult.SUCCESSFUL) {
>       returnedChunk.state = StripingChunk.FETCHED;
>       alignedStripe.fetchedChunksNum++;
>       updateState4SuccessRead(r);
>       if (alignedStripe.fetchedChunksNum == dataBlkNum) {
>         clearFutures();
>         break;
>       }
>     } else {
>       returnedChunk.state = StripingChunk.MISSING;
>       // close the corresponding reader
>       dfsStripedInputStream.closeReader(readerInfos[r.index]);
>       final int missing = alignedStripe.missingChunksNum;
>       alignedStripe.missingChunksNum++;
>       checkMissingBlocks();
>       readDataForDecoding();
>       readParityChunks(alignedStripe.missingChunksNum - missing);
>     }
> {code}
> If two block reads fail, #readDataForDecoding() will be called twice; the *decodeInputs* array will be initialized twice, and the *parity data* in the *decodeInputs* array that was filled by #readParityChunks will be set to null.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
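While the promised unit test is not in the thread yet, one plausible shape for it — a sketch only, with an invented test name, exercising the raw RS(6,3) coder directly rather than DFSStripedInputStream — is to encode a stripe, drop two data cells, and assert that decoding succeeds as long as the parity inputs are preserved, i.e. the exact property the double initialization violates:

{code:java}
import static org.junit.Assert.assertArrayEquals;

import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.CodecUtil;
import org.apache.hadoop.io.erasurecode.ErasureCodeConstants;
import org.apache.hadoop.io.erasurecode.ErasureCoderOptions;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;
import org.junit.Test;

// Hypothetical test sketch; not the UT that was later added to the PR.
public class TestDecodeWithPreservedParity {

  private static final int DATA = 6, PARITY = 3, CELL = 1024;

  @Test
  public void testDecodeTwoLostDataBlocks() throws Exception {
    Configuration conf = new Configuration();
    ErasureCoderOptions opts = new ErasureCoderOptions(DATA, PARITY);
    RawErasureEncoder encoder = CodecUtil.createRawEncoder(
        conf, ErasureCodeConstants.RS_CODEC_NAME, opts);
    RawErasureDecoder decoder = CodecUtil.createRawDecoder(
        conf, ErasureCodeConstants.RS_CODEC_NAME, opts);

    // Encode six data cells into three parity cells.
    byte[][] original = new byte[DATA][CELL];
    ByteBuffer[] dataBufs = new ByteBuffer[DATA];
    for (int i = 0; i < DATA; i++) {
      for (int j = 0; j < CELL; j++) {
        original[i][j] = (byte) (i + j);
      }
      dataBufs[i] = ByteBuffer.wrap(original[i].clone());
    }
    ByteBuffer[] parityBufs = new ByteBuffer[PARITY];
    for (int i = 0; i < PARITY; i++) {
      parityBufs[i] = ByteBuffer.allocate(CELL);
    }
    encoder.encode(dataBufs, parityBufs);

    // Simulate two failed data-block reads: slots 4 and 5 stay null.
    // Because the parity inputs are NOT wiped here, seven valid inputs
    // (four data + three parity) remain and decoding must succeed.
    ByteBuffer[] inputs = new ByteBuffer[DATA + PARITY];
    for (int i = 0; i < DATA - 2; i++) {
      inputs[i] = ByteBuffer.wrap(original[i].clone());
    }
    for (int i = 0; i < PARITY; i++) {
      parityBufs[i].flip();  // rewind after encode() advanced positions
      inputs[DATA + i] = parityBufs[i];
    }

    int[] erased = {4, 5};
    ByteBuffer[] outputs = {
        ByteBuffer.allocate(CELL), ByteBuffer.allocate(CELL)};
    decoder.decode(inputs, erased, outputs);

    assertArrayEquals(original[4], outputs[0].array());
    assertArrayEquals(original[5], outputs[1].array());
  }
}
{code}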
[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs
[ https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=756362&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756362 ]

ASF GitHub Bot logged work on HDFS-16538:
-
Author: ASF GitHub Bot
Created on: 13/Apr/22 12:52
Start Date: 13/Apr/22 12:52
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1098014110

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 45s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ ||
| +1 :green_heart: | mvninstall | 39m 1s | | trunk passed |
| +1 :green_heart: | compile | 1m 2s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 55s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 35s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 1s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 50s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 2m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 45s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ ||
| +1 :green_heart: | mvninstall | 0m 50s | | the patch passed |
| +1 :green_heart: | compile | 0m 53s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 53s | | the patch passed |
| +1 :green_heart: | compile | 0m 45s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 45s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 48s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 34s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 2m 30s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 49s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ ||
| +1 :green_heart: | unit | 2m 23s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. |
| | | | 101m 35s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4167 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 6a5ad419e32a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 0507620c617b7868361a484773d3f74f0a1dd8dc |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/2/testReport/ |
| Max. process+thread count | 548 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/2/console |
[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs
[ https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=756317&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756317 ]

ASF GitHub Bot logged work on HDFS-16538:
-
Author: ASF GitHub Bot
Created on: 13/Apr/22 10:55
Start Date: 13/Apr/22 10:55
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1097910585

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ ||
| +1 :green_heart: | mvninstall | 38m 25s | | trunk passed |
| +1 :green_heart: | compile | 1m 2s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 54s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 34s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 0s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 50s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 2m 46s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 25s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ ||
| +1 :green_heart: | mvninstall | 0m 49s | | the patch passed |
| +1 :green_heart: | compile | 0m 54s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 54s | | the patch passed |
| +1 :green_heart: | compile | 0m 47s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 47s | | the patch passed |
| +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 50s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 34s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 2m 28s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 47s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ ||
| +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. |
| | | | 100m 29s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4167 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux f5521b8832b5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 8846746dd03b9b54a7db1d7d79f2835eb1c6adb6 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/1/testReport/ |
| Max. process+thread count | 543 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/1/console |
[jira] [Work logged] (HDFS-16538) EC decoding failed due to not enough valid inputs
[ https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=756267&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756267 ]

ASF GitHub Bot logged work on HDFS-16538:
-
Author: ASF GitHub Bot
Created on: 13/Apr/22 09:13
Start Date: 13/Apr/22 09:13
Worklog Time Spent: 10m
Work Description: liubingxing opened a new pull request, #4167:
URL: https://github.com/apache/hadoop/pull/4167

We found this error when #StripeReader.readStripe() has more than one failed block read.

```
Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid inputs are provided, not recoverable
	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
	at org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
	at org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
	at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
	at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
	at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
	at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
```

```java
while (!futures.isEmpty()) {
  try {
    StripingChunkReadResult r = StripedBlockUtil
        .getNextCompletedStripedRead(service, futures, 0);
    dfsStripedInputStream.updateReadStats(r.getReadStats());
    DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
        r, alignedStripe);
    StripingChunk returnedChunk = alignedStripe.chunks[r.index];
    Preconditions.checkNotNull(returnedChunk);
    Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);
    if (r.state == StripingChunkReadResult.SUCCESSFUL) {
      returnedChunk.state = StripingChunk.FETCHED;
      alignedStripe.fetchedChunksNum++;
      updateState4SuccessRead(r);
      if (alignedStripe.fetchedChunksNum == dataBlkNum) {
        clearFutures();
        break;
      }
    } else {
      returnedChunk.state = StripingChunk.MISSING;
      // close the corresponding reader
      dfsStripedInputStream.closeReader(readerInfos[r.index]);
      final int missing = alignedStripe.missingChunksNum;
      alignedStripe.missingChunksNum++;
      checkMissingBlocks();
      readDataForDecoding();
      readParityChunks(alignedStripe.missingChunksNum - missing);
    }
```

If two block reads fail, #readDataForDecoding() will be called twice; the **decodeInputs array** will be initialized twice, and the **parity data** in the decodeInputs array that was filled by #readParityChunks will be set to null.

Issue Time Tracking
---

Worklog Id: (was: 756267)
Remaining Estimate: 0h
Time Spent: 10m

> EC decoding failed due to not enough valid inputs
> --
>
> Key: HDFS-16538
> URL: https://issues.apache.org/jira/browse/HDFS-16538
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: qinyuren
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> We found this error when #StripeReader.readStripe() has more than one failed block read.
>
> {code:java}
> Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid inputs are provided, not recoverable
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
> 	at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
> 	at org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
> 	at org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
> 	at org.apache.hadoop.hdfs.StripeReader.readStripe(Strip