mbutrovich commented on issue #7220:
URL: https://github.com/apache/arrow-rs/issues/7220#issuecomment-2710613301

   > That being said I would suggest we split this issue into two parts:
   > 
   > * Support influencing the precision used, similar to arrow-cpp
   > * Support legacy rebase modes for timestamps before 1900 written by Spark versions before 3.x - see [here](https://kontext.tech/article/1062/spark-2x-to-3x-date-timestamp-and-int96-rebase-modes)

   I'm good with this approach. As I mentioned, I wasn't sure how much Spark-specific logic we wanted to bring down to the Parquet reader level, but I can work with this. I might ask some follow-up questions about how to expose options that deep into the Parquet reader, since most of the API seems to be encoded with `Schema`. My guess is something in `ArrowReaderOptions`, but I'll need to see how far through the call stack that actually makes it.
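
   For illustration, here is a minimal, self-contained sketch of how such a knob could be threaded through a builder-style options struct. The names `ReaderOptions`, `Int96RebaseMode`, and `with_int96_rebase_mode` are hypothetical; they only mirror the `with_*` builder convention of the real `ArrowReaderOptions` and are not part of the parquet crate's actual API.

   ```rust
   // Hypothetical sketch only: these types do NOT exist in parquet-rs.
   // They illustrate threading a rebase-mode option through a builder,
   // in the same style as `ArrowReaderOptions::with_*` methods.

   #[derive(Debug, Clone, Copy, PartialEq)]
   enum Int96RebaseMode {
       /// Read timestamps as-is (proleptic Gregorian calendar).
       Corrected,
       /// Rebase pre-1900 timestamps from the hybrid Julian/Gregorian
       /// calendar, matching files written by Spark < 3.0 in legacy mode.
       Legacy,
   }

   #[derive(Debug, Clone)]
   struct ReaderOptions {
       int96_rebase_mode: Int96RebaseMode,
   }

   impl ReaderOptions {
       fn new() -> Self {
           // Default to the modern behavior; legacy rebasing is opt-in.
           Self { int96_rebase_mode: Int96RebaseMode::Corrected }
       }

       /// Builder-style setter, consistent with the `with_*` convention.
       fn with_int96_rebase_mode(mut self, mode: Int96RebaseMode) -> Self {
           self.int96_rebase_mode = mode;
           self
       }
   }

   fn main() {
       let opts = ReaderOptions::new()
           .with_int96_rebase_mode(Int96RebaseMode::Legacy);
       assert_eq!(opts.int96_rebase_mode, Int96RebaseMode::Legacy);
       println!("{:?}", opts.int96_rebase_mode);
   }
   ```

   The open question is less the builder itself and more how far such an option would have to be plumbed through the decoding call stack before it reaches the INT96 conversion logic.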


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
