[ https://issues.apache.org/jira/browse/HDFS-16520?focusedWorklogId=755156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-755156 ]
ASF GitHub Bot logged work on HDFS-16520:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 11/Apr/22 09:54
Start Date: 11/Apr/22 09:54
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4104:
URL: https://github.com/apache/hadoop/pull/4104#issuecomment-1094839139

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 16m 13s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 18s | | trunk passed |
| +1 :green_heart: | compile | 6m 1s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 5m 45s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 17s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 52s | | trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 20s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 54s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 1s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 7s | | the patch passed |
| +1 :green_heart: | compile | 5m 54s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 5m 54s | | the patch passed |
| +1 :green_heart: | compile | 5m 33s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 5m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 6s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 1 new + 29 unchanged - 0 fixed = 30 total (was 29) |
| +1 :green_heart: | mvnsite | 2m 11s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 57s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 45s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 26s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 229m 20s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
| | | 370m 31s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4104 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 9f7e91d244d9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / d7c5bd02a8c6b3bec5a5a950e68cb09d0aa71ed5 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/testReport/ |
| Max. process+thread count | 3121 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
-------------------
    Worklog Id: (was: 755156)
    Time Spent: 1.5h (was: 1h 20m)

> Improve EC pread: avoid potential reading whole block
> -----------------------------------------------------
>
>                 Key: HDFS-16520
>                 URL: https://issues.apache.org/jira/browse/HDFS-16520
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: dfsclient, ec
>    Affects Versions: 3.3.1, 3.3.2
>            Reporter: daimin
>            Assignee: daimin
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> An HDFS client 'pread' is a positional read: it needs only a range of data rather than the whole file or block. Via BlockReaderFactory#setLength, the client tells the datanode how many bytes of the block to read from disk and send back.
> For EC files this length is not set properly: both pread and sread default to 'block.getBlockSize() - offsetInBlock'. The datanode therefore reads and sends far more data than the client requested, and aborts only when the client closes the connection, which wastes a lot of resources.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
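As a rough illustration of the length computation described in the quoted issue above (this is not the actual HDFS client code; the class and method names below are invented for the example), the sketch contrasts the default "read to the end of the block" length with a length bounded by the byte range the pread actually requested:

```java
// Hypothetical sketch only -- not HDFS source. It compares the default length
// ("block.getBlockSize() - offsetInBlock", i.e. read to the end of the block)
// with a length bounded by the range the positional read actually asked for.
public class PreadLengthSketch {

    /** Default behaviour described in the issue: everything from the offset to the block end. */
    static long defaultReadLength(long blockSize, long offsetInBlock) {
        return blockSize - offsetInBlock;
    }

    /** Range-bounded length: only the bytes the pread requested, capped at the block end. */
    static long boundedReadLength(long blockSize, long offsetInBlock, long requestedBytes) {
        return Math.min(requestedBytes, blockSize - offsetInBlock);
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;     // 128 MiB block
        long offsetInBlock = 4L * 1024 * 1024;   // pread starts 4 MiB into the block
        long requestedBytes = 1L * 1024 * 1024;  // caller asked for 1 MiB

        // Default: ~124 MiB would be read and shipped; bounded: only 1 MiB.
        System.out.println("default length = " + defaultReadLength(blockSize, offsetInBlock));
        System.out.println("bounded length = " + boundedReadLength(blockSize, offsetInBlock, requestedBytes));
    }
}
```

For an EC (striped) file the same bound would presumably be applied per internal block of the requested stripe range rather than per replicated block; the exact computation in the patch may differ.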