[ https://issues.apache.org/jira/browse/SPARK-37217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-37217:
------------------------------------

    Assignee:     (was: Apache Spark)

> Dynamic partition writes to external tables should fail fast to prevent 
> data deletion
> -----------------------------------------------------------------------------------------------
>
>                 Key: SPARK-37217
>                 URL: https://issues.apache.org/jira/browse/SPARK-37217
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.2.0
>            Reporter: dzcxzl
>            Priority: Trivial
>
> [SPARK-29295|https://issues.apache.org/jira/browse/SPARK-29295] introduced a 
> mechanism whereby a dynamic-partition write to an external table first 
> deletes the data in the target partitions.
> Suppose 1001 partitions are written: the data of all 1001 partitions is 
> deleted first, but because hive.exec.max.dynamic.partitions defaults to 
> 1000, loadDynamicPartitions then fails, and by that point the data of the 
> 1001 partitions has already been deleted.
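> As an illustration only, a minimal fail-fast sketch (the helper 
> checkDynamicPartitionCount and its wiring are hypothetical, not Spark's 
> actual code): count the distinct dynamic partitions about to be written 
> and compare against hive.exec.max.dynamic.partitions before any target 
> data is deleted.
> {code:scala}
> import org.apache.spark.sql.{DataFrame, SparkSession}
>
> // Hypothetical fail-fast guard; not Spark's actual implementation.
> def checkDynamicPartitionCount(
>     spark: SparkSession,
>     df: DataFrame,
>     partitionCols: Seq[String]): Unit = {
>   // hive.exec.max.dynamic.partitions defaults to 1000 in Hive.
>   val max = spark.conf.get("hive.exec.max.dynamic.partitions", "1000").toInt
>   // Count distinct partition-value combinations before any data is deleted.
>   val count = df.select(partitionCols.map(df.col): _*).distinct().count()
>   if (count > max) {
>     throw new IllegalStateException(
>       s"Writing $count dynamic partitions exceeds " +
>       s"hive.exec.max.dynamic.partitions=$max; failing before any " +
>       "target partition data is deleted.")
>   }
> }
> {code}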



