mengna-lin opened a new issue, #3459:
URL: https://github.com/apache/parquet-java/issues/3459
### Describe the enhancement requested
Summary
- Add support for configuring the compression codec and compression level on a per-column basis when writing Parquet files, rather than applying a single codec and level uniformly across all columns.
Proposed API

Programmatic (ParquetWriter / ParquetProperties):

```java
ParquetWriter.builder(...)
    .withCompressionCodec(CompressionCodecName.SNAPPY)        // global default
    .withCompressionCodec("col_a", CompressionCodecName.ZSTD) // per-column override
    .withCompressionLevel("col_a", 9)
    .build();
```
MapReduce (ParquetOutputFormat / Hadoop Configuration):

```properties
parquet.compression=SNAPPY
parquet.compression#col_a=ZSTD
parquet.compression.level#col_a=9
```
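The `#`-suffixed keys above could be split into per-column overrides along these lines. This is a hypothetical sketch, not existing parquet-java code: the class and method names are illustrative, and a plain `Map` stands in for a Hadoop `Configuration`.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of parsing the proposed "#"-suffixed configuration keys.
// A plain Map stands in for a Hadoop Configuration; none of these names are
// existing parquet-java API.
public class CompressionConfSketch {
    static final String CODEC_PREFIX = "parquet.compression#";
    static final String LEVEL_PREFIX = "parquet.compression.level#";

    /** Per-column codec overrides, keyed by column path. */
    static Map<String, String> codecOverrides(Map<String, String> conf) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(CODEC_PREFIX)) {
                out.put(e.getKey().substring(CODEC_PREFIX.length()), e.getValue());
            }
        }
        return out;
    }

    /** Per-column compression-level overrides, keyed by column path. */
    static Map<String, Integer> levelOverrides(Map<String, String> conf) {
        Map<String, Integer> out = new HashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(LEVEL_PREFIX)) {
                out.put(e.getKey().substring(LEVEL_PREFIX.length()),
                        Integer.parseInt(e.getValue()));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("parquet.compression", "SNAPPY");
        conf.put("parquet.compression#col_a", "ZSTD");
        conf.put("parquet.compression.level#col_a", "9");

        System.out.println(codecOverrides(conf)); // {col_a=ZSTD}
        System.out.println(levelOverrides(conf)); // {col_a=9}
    }
}
```

Note that `parquet.compression.level#col_a` does not match the codec prefix `parquet.compression#` (the character after `parquet.compression` is `.`, not `#`), so the two namespaces stay disjoint.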
Behavior
- Columns without an override inherit the global codec and level.
- A compression level set for a column whose codec does not support levels (e.g. SNAPPY) is ignored, with a warning logged.
- A compression level set without a per-column codec override applies the level to the inherited default codec, with a warning logged.
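The resolution rules above can be sketched as follows. This is an illustrative model only: the class name, the `LEVEL_AWARE` set, and the use of `null` to mean "no level" are assumptions for the sketch, not the proposed implementation, which would presumably live inside ParquetProperties and log warnings instead of silently returning.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative model of the per-column resolution rules described above.
// All names here are hypothetical, not parquet-java API.
public class CodecResolutionSketch {
    // Assumed set of codecs that accept a compression level.
    static final Set<String> LEVEL_AWARE = Set.of("ZSTD", "GZIP", "BROTLI");

    final String defaultCodec;
    final Map<String, String> codecOverrides = new HashMap<>();
    final Map<String, Integer> levelOverrides = new HashMap<>();

    CodecResolutionSketch(String defaultCodec) {
        this.defaultCodec = defaultCodec;
    }

    /** Columns without an override inherit the global codec. */
    String codecFor(String column) {
        return codecOverrides.getOrDefault(column, defaultCodec);
    }

    /**
     * A level on a codec that does not support levels is dropped (the
     * proposal would log a warning here). A level set without a codec
     * override still applies, to the inherited default codec.
     */
    Integer levelFor(String column) {
        Integer level = levelOverrides.get(column);
        if (level != null && !LEVEL_AWARE.contains(codecFor(column))) {
            return null; // ignored; the proposal logs a warning instead
        }
        return level;
    }

    public static void main(String[] args) {
        CodecResolutionSketch props = new CodecResolutionSketch("SNAPPY");
        props.codecOverrides.put("col_a", "ZSTD");
        props.levelOverrides.put("col_a", 9);
        props.levelOverrides.put("col_b", 5); // col_b stays SNAPPY: level dropped

        System.out.println(props.codecFor("col_a") + " " + props.levelFor("col_a")); // ZSTD 9
        System.out.println(props.codecFor("col_b") + " " + props.levelFor("col_b")); // SNAPPY null
    }
}
```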
### Component(s)
_No response_
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]