Github user grantnicholas commented on the issue:

    https://github.com/apache/spark/pull/15726
  
    @viirya @yuananf a quick check of recent Spark releases shows this fix has 
not been merged yet. Are there any suggested workarounds in the meantime for 
dynamic partition insert overwrites?
    
    It sounds like if the user deletes the necessary partitions before running 
the dynamic insert overwrite query, then Hive will go down the "happy", 
performant path. This requires calculating the dynamic partitions before 
running the insert query, but if you can do that, this workaround should 
work, right? Something along the lines of the sketch below is what I have 
in mind.
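
    Rough, untested sketch of that workaround (`src`, `target`, and the 
partition column `dt` are just placeholder names, and `dt` is assumed to be 
a string column):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Compute which partitions the insert would touch by collecting the
// distinct partition-column values from the source data.
val partitionValues = spark.table("src")
  .select("dt")
  .distinct()
  .collect()
  .map(_.getString(0))

// Drop those partitions up front so the subsequent dynamic INSERT OVERWRITE
// only has to write fresh partitions.
partitionValues.foreach { dt =>
  spark.sql(s"ALTER TABLE target DROP IF EXISTS PARTITION (dt='$dt')")
}

// Then run the dynamic partition insert overwrite as usual
// (col1/col2 stand in for whatever the target schema actually is).
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("INSERT OVERWRITE TABLE target PARTITION (dt) SELECT col1, col2, dt FROM src")
```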

