Github user HyukjinKwon commented on the pull request: https://github.com/apache/spark/pull/10805#issuecomment-173016289 Oh yes, it does. Actually, I am reading compressed files in the test I added [here](https://github.com/HyukjinKwon/spark/blob/SPARK-12420/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala#L361-L373). As you know, it recognises the compression codec by the file extension, so if you meant manually setting a compression codec for reading, it does not support that.
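To illustrate the point above, here is a minimal sketch of reading a compressed CSV with the Spark CSV datasource, where the codec is inferred from the file extension rather than from an option (the `sqlContext` and the path `data.csv.gz` are assumptions for illustration, not taken from the PR):

```scala
import org.apache.spark.sql.SQLContext

// Hypothetical setup: sqlContext is an existing SQLContext.
// The ".gz" extension is what tells the reader to decompress;
// no compression option is consulted on the read path.
val df = sqlContext.read
  .format("csv")
  .option("header", "true")
  .load("data.csv.gz") // hypothetical path; codec detected from extension
```

By contrast, a compression codec option only takes effect on the write path, which is why manually setting one for reading has no effect.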