Github user fjh100456 commented on the issue:

    https://github.com/apache/spark/pull/19218
  
    @gatorsmile 
    I tested manually. When table-level compression is not configured, Spark 
always takes the session-level compression and ignores the compression of the 
existing files. That looks like a bug; however, table files with multiple 
compressions do not break reading or writing. 
    Is it OK to add a test that checks reading and writing when the existing 
table files use multiple compressions?
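
    For reference, a minimal sketch of what such a test could look like, 
assuming a SparkSession `spark` is in scope and a Parquet table; the table 
name `t_mixed` is illustrative, not from the PR:

        // Write twice under different session-level codecs so the table's
        // files end up with mixed compressions.
        Seq("snappy", "gzip").foreach { codec =>
          spark.conf.set("spark.sql.parquet.compression.codec", codec)
          spark.range(10).write.mode("append").format("parquet").saveAsTable("t_mixed")
        }
        // Reading the mixed-compression table should still succeed.
        assert(spark.table("t_mixed").count() == 20)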

