alamb commented on issue #8378:
URL: https://github.com/apache/arrow-rs/issues/8378#issuecomment-3320282315

   > I’d like to share a simple observation related to our discussion on **the score** for compression decisions. In my recent experiments, I noticed that once the Parquet file is sorted by a pruning column, compression becomes significantly less effective—so much so that my rewrite logic often decides to skip compression entirely.
   > 
   > By "pruning column," I’m referring to the column used for predicate filtering—typically something like a date field, rather than a key. A simple example: when the data is sorted by date, my rewrite detects almost no size reduction across most columns, making compression unhelpful in this case.
   
   One thing I would be very interested to see (maybe a paper topic 🎣) is a systematic study of the tuning knobs that exist in Parquet and their effects on different types of data.
   
   For example, knobs such as (a configuration sketch follows this list):
   * Position in the sort order
   * Encoding used (dictionary, byte stream split, etc.)
   * Compression used
   * Dictionary page size limit
   * Data page size limit
   * ...
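   
   Purely as an illustration of how several of these knobs surface through `WriterProperties` in this crate, here is a minimal sketch; the column names (`date`, `value`), the ZSTD level, and the page size limits are made up for the example:
   
   ```rust
   use parquet::basic::{Compression, Encoding, ZstdLevel};
   use parquet::errors::ParquetError;
   use parquet::file::properties::WriterProperties;
   use parquet::format::SortingColumn;
   use parquet::schema::types::ColumnPath;
   
   fn example_props() -> Result<WriterProperties, ParquetError> {
       Ok(WriterProperties::builder()
           // Compression codec and level (ZSTD level 3 here)
           .set_compression(Compression::ZSTD(ZstdLevel::try_new(3)?))
           // Per-column encoding, e.g. BYTE_STREAM_SPLIT for a float column;
           // dictionary encoding is disabled for it so this encoding is actually used
           .set_column_encoding(ColumnPath::from("value"), Encoding::BYTE_STREAM_SPLIT)
           .set_column_dictionary_enabled(ColumnPath::from("value"), false)
           // Dictionary page size limit (bytes)
           .set_dictionary_page_size_limit(1024 * 1024)
           // Data page size limit (bytes, best effort)
           .set_data_page_size_limit(1024 * 1024)
           // Record the sort order in the metadata; the rows themselves must
           // already be sorted by the pruning column (`date`, column index 0 here)
           .set_sorting_columns(Some(vec![SortingColumn {
               column_idx: 0,
               descending: false,
               nulls_first: false,
           }]))
           .build())
   }
   ```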
   
   Data properties to vary (a measurement sketch follows):
   * Data type (string vs. float)
   * Cardinality (number of distinct values)
   * Skew (are there a few values with a large number of samples? e.g. `NULL`, `-1`, or `""`)
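   
   For the last two properties, a rough sketch of how one might measure them on an Arrow column (the use of `StringArray` is just for illustration):
   
   ```rust
   use std::collections::HashMap;
   
   use arrow::array::{Array, StringArray};
   
   /// Cardinality (number of distinct values, counting NULL as a value) and a
   /// crude skew measure: the fraction of rows taken by the most common value.
   fn cardinality_and_skew(col: &StringArray) -> (usize, f64) {
       let mut counts: HashMap<Option<&str>, usize> = HashMap::new();
       for v in col.iter() {
           *counts.entry(v).or_insert(0) += 1;
       }
       let cardinality = counts.len();
       let max_count = counts.values().copied().max().unwrap_or(0);
       let skew = max_count as f64 / col.len().max(1) as f64;
       (cardinality, skew)
   }
   ```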
   

