[ https://issues.apache.org/jira/browse/HDFS-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844510#comment-17844510 ]
ASF GitHub Bot commented on HDFS-17510:
---------------------------------------

hadoop-yetus commented on PR #6798:
URL: https://github.com/apache/hadoop/pull/6798#issuecomment-2099678537

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 00s | | No case conflicting files found. |
| +0 :ok: | spotbugs | 0m 01s | | spotbugs executables are not available. |
| +0 :ok: | codespell | 0m 01s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 01s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 00s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 00s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 91m 16s | | trunk passed |
| +1 :green_heart: | compile | 39m 15s | | trunk passed |
| +1 :green_heart: | checkstyle | 4m 25s | | trunk passed |
| -1 :x: | mvnsite | 4m 05s | [/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6798/1/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. |
| +1 :green_heart: | javadoc | 4m 28s | | trunk passed |
| +1 :green_heart: | shadedclient | 140m 29s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 01s | | the patch passed |
| +1 :green_heart: | compile | 36m 12s | | the patch passed |
| +1 :green_heart: | javac | 36m 12s | | the patch passed |
| +1 :green_heart: | blanks | 0m 00s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 4m 21s | | the patch passed |
| -1 :x: | mvnsite | 4m 05s | [/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6798/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. |
| +1 :green_heart: | javadoc | 4m 27s | | the patch passed |
| +1 :green_heart: | shadedclient | 150m 55s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 5m 13s | | The patch does not generate ASF License warnings. |
| | | 476m 38s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/6798 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | MINGW64_NT-10.0-17763 3e6272437ff7 3.4.10-87d57229.x86_64 2024-02-14 20:17 UTC x86_64 Msys |
| Build tool | maven |
| Personality | /c/hadoop/dev-support/bin/hadoop.sh |
| git revision | trunk / 88a3f9441aa4aa3cbea41b01be90b4a95881cef2 |
| Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6798/1/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6798/1/console |
| versions | git=2.44.0.windows.1 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
> Change of Codec configuration does not work
> -------------------------------------------
>
>                  Key: HDFS-17510
>                  URL: https://issues.apache.org/jira/browse/HDFS-17510
>              Project: Hadoop HDFS
>           Issue Type: Bug
>           Components: compress
>             Reporter: Zhikai Hu
>             Priority: Minor
>               Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust the compression level for different files. However, I found that in most cases the new compression level does not take effect as expected; the old compression level continues to be used. Here is the relevant code snippet:
>
>     ZStandardCodec zStandardCodec = new ZStandardCodec();
>     zStandardCodec.setConf(conf);
>     conf.set("io.compression.codec.zstd.level", "5"); // level may change dynamically
>     conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
>     writer = SequenceFile.createWriter(conf,
>         SequenceFile.Writer.file(sequenceFilePath),
>         SequenceFile.Writer.keyClass(LongWritable.class),
>         SequenceFile.Writer.valueClass(BytesWritable.class),
>         SequenceFile.Writer.compression(CompressionType.BLOCK));
>
> The reason is that the SequenceFile.Writer.init() method calls CodecPool.getCompressor(codec, null) to get a compressor. If the compressor is a reused instance, the conf is not applied because it is passed as null:
>
>     public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
>       Compressor compressor = borrow(compressorPool, codec.getCompressorType());
>       if (compressor == null) {
>         compressor = codec.createCompressor();
>         LOG.info("Got brand-new compressor ["+codec.getDefaultExtension()+"]");
>       } else {
>         compressor.reinit(conf); // conf is null here
>         ......
>
> Please also refer to my unit test to reproduce the bug.
> To address this bug, I modified the code to ensure that the configuration is read back from the codec when a compressor is reused.
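For illustration, here is a minimal, self-contained sketch of the failure mode the reporter describes. It assumes native zstd support is available to Hadoop; the class name CompressorReuseRepro and the level values 3 and 5 are invented for the example, and the comments reflect the pooling behavior described in the issue rather than the exact patch:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.ZStandardCodec;

public class CompressorReuseRepro {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("io.compression.codec.zstd.level", "3");
    ZStandardCodec codec = new ZStandardCodec();
    codec.setConf(conf);

    // First borrow: the pool is empty, so a brand-new compressor is created
    // and picks up level 3 from the codec's configuration.
    Compressor first = CodecPool.getCompressor(codec);
    CodecPool.returnCompressor(first); // goes back into the pool

    // Raise the level and borrow again. getCompressor(codec) delegates to
    // getCompressor(codec, null), so the pooled instance is reused and
    // reinit(null) leaves its settings untouched: level 3 is still in effect.
    conf.set("io.compression.codec.zstd.level", "5");
    codec.setConf(conf);
    Compressor second = CodecPool.getCompressor(codec); // still level 3
    CodecPool.returnCompressor(second);
  }
}
```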
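The last paragraph describes the fix as reading the configuration back from the codec when a compressor is reused. The helper below is a hypothetical caller-side sketch of that idea, not the change actually made in PR #6798: it recovers the Configuration from a Configurable codec and passes it explicitly, so CodecPool can reinit() a reused compressor with the current settings instead of null.

```java
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;

public final class CodecPoolUtil {
  private CodecPoolUtil() {}

  /**
   * Hypothetical helper illustrating the fix direction from the issue:
   * always hand CodecPool the codec's own Configuration so that a reused
   * compressor is reinitialized with the current settings rather than
   * being reinit()'d with null.
   */
  public static Compressor getFreshCompressor(CompressionCodec codec) {
    Configuration conf = (codec instanceof Configurable)
        ? ((Configurable) codec).getConf() // read the conf back from the codec
        : null;
    return CodecPool.getCompressor(codec, conf);
  }
}
```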