Github user danielvdende commented on a diff in the pull request: https://github.com/apache/spark/pull/20057#discussion_r159183234

--- Diff: docs/sql-programming-guide.md ---
@@ -1339,6 +1339,13 @@ the following case-insensitive options:
       This is a JDBC writer related option. When <code>SaveMode.Overwrite</code> is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient, and prevents the table metadata (e.g., indices) from being removed. However, it will not work in some cases, such as when the new data has a different schema. It defaults to <code>false</code>. This option applies only to writing.
     </td>
   </tr>
+
+  <tr>
+    <td><code>cascadeTruncate</code></td>
+    <td>
+      This is a JDBC writer related option. If enabled and supported by the JDBC database (PostgreSQL and Oracle at the moment), this option allows execution of a <code>TRUNCATE TABLE t CASCADE</code>. This will affect other tables, and thus should be used with caution. This option applies only to writing.
--- End diff --

I think it raises the question of how complete or incomplete the Spark JDBC API should be, and what the use cases are that it should serve. For the simplest cases, in which no key constraints are set between tables, you won't need this option. However, as soon as foreign key constraints are introduced, it becomes very important. I agree that not every piece of functionality from SQL (and its dialects) should be included, but I personally feel this is quite fundamental functionality. Moreover, as it's a configuration option, users who don't want it simply don't have to use it. I think we also discussed this functionality in a previous PR with @dongjoon-hyun here: [SPARK-22729](https://github.com/apache/spark/pull/19911)
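To make the behavior under discussion concrete, here is a minimal sketch in plain Python (not Spark code; the function name, signature, and dialect list are hypothetical illustrations) of the kind of dialect-gated statement generation the `cascadeTruncate` option implies: `CASCADE` is appended only when the option is enabled and the target database is assumed to support it.

```python
def get_truncate_query(table, cascade=False, dialect="postgresql"):
    """Build a TRUNCATE statement for the given table.

    CASCADE is appended only when requested AND the dialect is one
    assumed (per the docs text above) to support cascading truncates.
    This mirrors the idea of the cascadeTruncate option, not Spark's
    actual JdbcDialect implementation.
    """
    # Hypothetical set of dialects supporting TRUNCATE ... CASCADE,
    # taken from the quoted docs text (PostgreSQL and Oracle).
    supports_cascade = dialect in ("postgresql", "oracle")
    if cascade and supports_cascade:
        return f"TRUNCATE TABLE {table} CASCADE"
    return f"TRUNCATE TABLE {table}"
```

For example, `get_truncate_query("t", cascade=True)` yields `TRUNCATE TABLE t CASCADE`, while the same call with `dialect="mysql"` falls back to a plain `TRUNCATE TABLE t`, which is the safe default when cascading is unsupported.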