Github user fjh100456 commented on the issue:

    https://github.com/apache/spark/pull/19218
  
    Thanks for your review. @gatorsmile 
    Regarding the first question: I mean that `parquet.compression` can be found in 
the `table: TableDesc` (which may be similar to `CatalogTable`), and can also be 
found in `sparkSession.sessionState.conf` (set by the user through the command `set 
parquet.compression=xxx`). Which one should take precedence?
    
    This issue was originally only about writing to Hive tables, but after 
fixing the priority, I found that non-partitioned tables still did not take the 
right precedence, and writes to non-partitioned tables do not go through 
`InsertIntoHiveTable`. `InsertIntoHadoopFsRelationCommand.scala` is really not 
a proper place for this; is there any place that can handle both partitioned 
and non-partitioned tables?
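
    To make the precedence question concrete, here is a minimal sketch (in Java for illustration; the actual Spark code is Scala, and the class, method, and default-codec names here are hypothetical, not Spark's API). It assumes the table-level `parquet.compression` property wins over the session conf, which wins over a built-in default:

    ```java
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical sketch of one possible precedence rule for the
    // `parquet.compression` setting discussed above: table properties
    // (TableDesc/CatalogTable) first, then the session conf (set via
    // `set parquet.compression=xxx`), then a default codec.
    public class CompressionPrecedence {
        static final String KEY = "parquet.compression";
        static final String DEFAULT_CODEC = "snappy"; // assumed default

        static String resolveCodec(Map<String, String> tableProps,
                                   Map<String, String> sessionConf) {
            return Optional.ofNullable(tableProps.get(KEY))
                    .or(() -> Optional.ofNullable(sessionConf.get(KEY)))
                    .orElse(DEFAULT_CODEC);
        }
    }
    ```

    Under this rule, a table created with `parquet.compression=gzip` would keep gzip even if the user later runs `set parquet.compression=lzo`; whether that is the desired behavior is exactly the open question.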

