[ https://issues.apache.org/jira/browse/BEAM-11282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17548950#comment-17548950 ]

Danny McCormick commented on BEAM-11282:
----------------------------------------

This issue has been migrated to https://github.com/apache/beam/issues/20560

> Cannot set compression level when writing compressed files
> ----------------------------------------------------------
>
>                 Key: BEAM-11282
>                 URL: https://issues.apache.org/jira/browse/BEAM-11282
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-py-core
>    Affects Versions: 2.25.0
>            Reporter: Jack Whelpton
>            Priority: P3
>
> CompressedFile._initialize_compressor hardcodes the compression level used 
> when writing:
>  
> self._compressor = zlib.compressobj(
>     zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, self._gzip_mask)
>  
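> The level argument is what drives the output size. A minimal, 
> self-contained illustration with plain zlib (the MAX_WBITS | 16 mask 
> selects gzip framing, which I believe is what _gzip_mask is):
>  
> import zlib
>
> data = b"some fairly repetitive record\t1234\n" * 100000
>
> for level in (zlib.Z_DEFAULT_COMPRESSION, zlib.Z_BEST_COMPRESSION):
>     comp = zlib.compressobj(level, zlib.DEFLATED, zlib.MAX_WBITS | 16)
>     out = comp.compress(data) + comp.flush()
>     print("level=%d: %d bytes" % (level, len(out)))
>  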
> It would be good to be able to control this: I have a large set of 
> GZIP-compressed files, and writing the same data back produces output 
> roughly 10x larger than the input.
>  
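> Ideally this would be exposed directly on the write transform. As a 
> sketch of what I mean (the compression_level argument below is 
> hypothetical and does not exist today):
>  
> import zlib
>
> from apache_beam.io import WriteToText
>
> WriteToText(
>     "gs://bucket/output",  # illustrative path
>     file_name_suffix=".tsv.gz",
>     compression_type="gzip",
>     compression_level=zlib.Z_BEST_COMPRESSION,  # hypothetical argument
> )
>  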
> I've tried various monkeypatching approaches: these seem to work with the 
> local runner, but fail when using the DataflowRunner. For example:
>  
> import zlib
>
> import apache_beam as beam
> from apache_beam.io import WriteToText
> from apache_beam.io.filesystem import CompressedFile
>
>
> class WriteData(beam.PTransform):
>     def __init__(self, dst):
>         self._dst = dst
>
>         # Patch CompressedFile so writes use the best compression level.
>         # This runs at pipeline-construction time, which is presumably
>         # why the patch never takes effect on the Dataflow workers.
>         def _initialize_compressor(self):
>             self._compressor = zlib.compressobj(
>                 zlib.Z_BEST_COMPRESSION, zlib.DEFLATED, self._gzip_mask
>             )
>
>         CompressedFile._initialize_compressor = _initialize_compressor
>
>     def expand(self, p):
>         return p | WriteToText(
>             file_path_prefix=self._dst,
>             file_name_suffix=".tsv.gz",
>             compression_type="gzip",
>         )
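>  
> A workaround that avoids patching Beam internals altogether might be to 
> write through a custom fileio sink that owns its own gzip stream. An 
> untested sketch (it assumes the fileio.WriteToFiles / FileSink API of 
> recent SDKs; GzipTextSink is my own name):
>  
> import gzip
>
> from apache_beam.io import fileio
>
>
> class GzipTextSink(fileio.FileSink):
>     """Writes text records through gzip at an explicit level."""
>
>     def open(self, fh):
>         # compresslevel=9 corresponds to zlib.Z_BEST_COMPRESSION.
>         self._fh = gzip.GzipFile(fileobj=fh, mode="wb", compresslevel=9)
>
>     def write(self, record):
>         self._fh.write(record.encode("utf-8") + b"\n")
>
>     def flush(self):
>         # Closing the GzipFile writes the gzip trailer; it does not
>         # close the underlying handle, which Beam manages itself.
>         self._fh.close()
>
>
> # Usage: lines | fileio.WriteToFiles(
> #     path=dst,
> #     file_naming=fileio.default_file_naming("part", ".tsv.gz"),
> #     sink=lambda dest: GzipTextSink(),
> # )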



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
