huaxingao commented on a change in pull request #30154:
URL: https://github.com/apache/spark/pull/30154#discussion_r516384753



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCTableCatalog.scala
##########
@@ -117,14 +117,33 @@ class JDBCTableCatalog extends TableCatalog with Logging {
     if (partitions.nonEmpty) {
      throw new UnsupportedOperationException("Cannot create JDBC table with partition")
     }
-    // TODO (SPARK-32405): Apply table options while creating tables in JDBC Table Catalog
+
+    var tableOptions = options.parameters + (JDBCOptions.JDBC_TABLE_NAME -> getTableName(ident))
+    var tableComment: String = ""
+    var tableProperties: String = ""
     if (!properties.isEmpty) {
-      logWarning("Cannot create JDBC table with properties, these properties will be " +
-        "ignored: " + properties.asScala.map { case (k, v) => s"$k=$v" }.mkString("[", ", ", "]"))
+      properties.asScala.map {
+        case (k, v) => k match {
+          case "comment" => tableComment = v
+          case "provider" | "owner" | "location" => // ignore provider, owner, and location
+          case _ => tableProperties = tableProperties + " " + s"$k=$v"
+        }
+      }
     }
 
-    val writeOptions = new JdbcOptionsInWrite(
-      options.parameters + (JDBCOptions.JDBC_TABLE_NAME -> getTableName(ident)))
+    if (tableComment != "") {
+      tableOptions = tableOptions + (JDBCOptions.JDBC_TABLE_COMMENT -> tableComment)
+    }
+    if (tableProperties != "") {
+      // table property is set in JDBC_CREATE_TABLE_OPTIONS, which will be appended
+      // to CREATE TABLE statement.
+      // E.g., "CREATE TABLE t (name string) ENGINE=InnoDB DEFAULT CHARSET=utf8"
+      // Spark doesn't check if these table properties are supported by
+      // databases. If a table property is invalid, the database will fail the
+      // table creation.

Review comment:
       Sorry, I need a bit of clarification on this :)
   When you say "let the dialect decide", do you mean letting the database decide whether to fail the CREATE TABLE (my current implementation)? Or do you mean that each of the XXXDialect classes should check whether the table options are valid, and fail the CREATE TABLE on the Spark side if they are invalid? I prefer the first option.
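For reference, here is a rough sketch of what the second option (validating options inside each XXXDialect class) could look like. Everything below is hypothetical: the `MySQLDialectLike` object, the `validateCreateTableOptions` method, and the `supportedOptions` set are illustrative names, not part of this PR or of Spark's actual dialect API:

```scala
// Hypothetical sketch of dialect-side option validation (not in this PR).
// The dialect rejects unknown CREATE TABLE options on the Spark side,
// so an invalid statement never reaches the database.
object MySQLDialectLike {
  // Options this dialect claims to understand (illustrative list only).
  private val supportedOptions = Set("ENGINE", "DEFAULT CHARSET", "COMMENT")

  def validateCreateTableOptions(options: Map[String, String]): Unit = {
    val unsupported = options.keys.filterNot(supportedOptions.contains)
    if (unsupported.nonEmpty) {
      throw new UnsupportedOperationException(
        s"Unsupported CREATE TABLE options: ${unsupported.mkString(", ")}")
    }
  }
}
```

The trade-off is that every dialect must keep its supported-option list in sync with the database, whereas the first option simply forwards the options and lets the database report the error.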

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -872,6 +873,15 @@ object JdbcUtils extends Logging {
     // E.g., "CREATE TABLE t (name string) ENGINE=InnoDB DEFAULT CHARSET=utf8"
     val sql = s"CREATE TABLE $tableName ($strSchema) $createTableOptions"
     executeStatement(conn, options, sql)
+    if (options.tableComment.nonEmpty) {
+      try {
+        executeStatement(
+          conn, options, dialect.getTableCommentQuery(tableName, options.tableComment))

Review comment:
       Yes, we can have a dialect API to create tables. The benefit is that I can append the table comment inside CREATE TABLE for MySQL, but I think I would need to make ```logWarning``` public so I can log a warning from the dialects if the table comment is not supported.
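As a hedged illustration of that idea, a dialect-level create-table hook might look like the following. The `DialectLike` trait, `createTableSql` method, and `MySqlLike` object are made-up names for this sketch; the only real syntax assumed is MySQL's inline `COMMENT='...'` table option:

```scala
// Sketch of a dialect hook that builds the CREATE TABLE statement itself,
// so MySQL can append the comment inline while other dialects ignore it.
trait DialectLike {
  def createTableSql(table: String, schemaSql: String, comment: Option[String]): String =
    s"CREATE TABLE $table ($schemaSql)" // default: comment not supported, silently dropped

}

object MySqlLike extends DialectLike {
  override def createTableSql(
      table: String, schemaSql: String, comment: Option[String]): String = {
    // MySQL supports COMMENT='...' directly in the CREATE TABLE options.
    val commentClause = comment.map(c => s" COMMENT='$c'").getOrElse("")
    s"CREATE TABLE $table ($schemaSql)$commentClause"
  }
}
```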

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -872,6 +873,15 @@ object JdbcUtils extends Logging {
     // E.g., "CREATE TABLE t (name string) ENGINE=InnoDB DEFAULT CHARSET=utf8"
     val sql = s"CREATE TABLE $tableName ($strSchema) $createTableOptions"
     executeStatement(conn, options, sql)
+    if (options.tableComment.nonEmpty) {
+      try {
+        executeStatement(
+          conn, options, dialect.getTableCommentQuery(tableName, options.tableComment))

Review comment:
       Updated the code to have a dialect API to create tables. Take a look and see which way you prefer.
   I thought I could use a single SQL statement for MySQL, since it supports appending the comment to CREATE TABLE, but on second thought I should still use a separate SQL statement for the comment, because I want to log a warning if it fails.
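A minimal sketch of that trade-off, with `runStatement` and `getTableCommentQuery` standing in for the real helpers (`JdbcUtils.executeStatement` and the dialect method), and a plain print standing in for Spark's `logWarning`:

```scala
// Sketch of the chosen approach: create the table first, then set the
// comment in a separate statement, so a comment failure only warns
// instead of failing the whole CREATE TABLE.
def createTableWithComment(
    runStatement: String => Unit,                     // stands in for executeStatement
    getTableCommentQuery: (String, String) => String, // stands in for the dialect method
    table: String,
    schemaSql: String,
    comment: String): Unit = {
  runStatement(s"CREATE TABLE $table ($schemaSql)")
  if (comment.nonEmpty) {
    try {
      runStatement(getTableCommentQuery(table, comment))
    } catch {
      case e: Exception =>
        // The database may not support table comments; keep the table
        // and log a warning rather than failing the creation.
        println(s"WARN: cannot set comment on $table: ${e.getMessage}")
    }
  }
}
```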




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


