cloud-fan commented on a change in pull request #29324:
URL: https://github.com/apache/spark/pull/29324#discussion_r464257963



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -900,10 +889,41 @@ object JdbcUtils extends Logging {
       newTable: String,
       options: JDBCOptions): Unit = {
     val dialect = JdbcDialects.get(options.url)
+    executeStatement(conn, options, dialect.renameTable(oldTable, newTable))
+  }
+
+  /**
+   * Update a table from the JDBC database.
+   */
+  def alterTable(
+      conn: Connection,
+      tableName: String,
+      changes: Seq[TableChange],
+      options: JDBCOptions): Unit = {
+    val dialect = JdbcDialects.get(options.url)
+    conn.setAutoCommit(false)
     val statement = conn.createStatement
     try {
       statement.setQueryTimeout(options.queryTimeout)
-      statement.executeUpdate(dialect.renameTable(oldTable, newTable))
+      for (sql <- dialect.alterTable(tableName, changes)) {
+        statement.executeUpdate(sql)
+      }
+      conn.commit()
+    } catch {
+      case e: SQLException =>
+        if (conn != null) conn.rollback()

Review comment:
       I'm not sure all JDBC servers support transactions (disabling auto-commit, then commit/rollback). At least the Spark Thrift Server doesn't. How about we limit the scope and only support ALTER TABLE when `changes.length == 1`? Then each call issues exactly one SQL statement and we don't need to worry about atomicity issues.
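A minimal, self-contained sketch of this suggestion: reject multi-change ALTER TABLE up front so each call maps to a single SQL statement and no transactional DDL is needed. The `TableChange` subclasses and SQL strings below are simplified stand-ins for Spark's actual `TableChange` hierarchy and dialect output, not the PR's code:

```scala
// Simplified stand-ins for Spark's TableChange hierarchy (illustrative only).
sealed trait TableChange
case class AddColumn(name: String, dataType: String) extends TableChange
case class DeleteColumn(name: String) extends TableChange

// Enforce the single-change restriction: with exactly one change there is
// exactly one statement, so plain auto-commit semantics suffice and no
// commit/rollback handling is required.
def alterTableSql(tableName: String, changes: Seq[TableChange]): String = {
  require(changes.length == 1,
    "Only one TableChange per ALTER TABLE is supported, to avoid relying " +
      "on transactional DDL that not all JDBC servers provide")
  changes.head match {
    case AddColumn(name, dt) => s"ALTER TABLE $tableName ADD COLUMN $name $dt"
    case DeleteColumn(name)  => s"ALTER TABLE $tableName DROP COLUMN $name"
  }
}
```

The caller would then run the returned statement through a plain `Statement.executeUpdate` without touching `setAutoCommit`, `commit`, or `rollback`.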




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
