Repository: spark
Updated Branches:
  refs/heads/branch-1.6 4dd8712c1 -> 865dd8bcc


[SPARK-12010][SQL] Spark JDBC requires support for column-name-free INSERT syntax

Previously, Spark's JDBC write only worked with technologies that support the
following INSERT statement syntax (JdbcUtils.scala: insertStatement()):

INSERT INTO $table VALUES ( ?, ?, ..., ? )

But some technologies require a list of column names:

INSERT INTO $table ( $colNameList ) VALUES ( ?, ?, ..., ? )

This blocked the use of, for example, the Progress JDBC Driver for Cassandra.
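
For example, for a hypothetical table people(id, name), this instantiates to:

INSERT INTO people ( id, name ) VALUES ( ?, ? )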

Another limitation is that syntax 1 relies on the dataframe field ordering
matching that of the target table. This works fine as long as the target
table has been created by writer.jdbc().

If the target table contains more columns (i.e. it was not created by
writer.jdbc()), the insert fails due to a mismatch in the number of columns
or their data types. For instance, with a hypothetical three-column table
t(a, b, c) and a dataframe with fields (a, b), syntax 1 issues
INSERT INTO t VALUES (?, ?), which fails, while syntax 2 issues
INSERT INTO t (a, b) VALUES (?, ?), which succeeds.

This PR switches to the second, recommended INSERT syntax. Column names are
taken from the dataframe field names.
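
As a minimal sketch of the new statement generation (the table name and
schema here are hypothetical; the column/placeholder logic mirrors the
updated insertStatement() in the diff below):

import org.apache.spark.sql.types._

val rddSchema = StructType(Seq(
  StructField("id", IntegerType),
  StructField("name", StringType)))

// Both lists are derived from the dataframe schema, so they always match
val columns = rddSchema.fields.map(_.name).mkString(",")
val placeholders = rddSchema.fields.map(_ => "?").mkString(",")

// Yields: INSERT INTO people (id,name) VALUES (?,?)
val sql = s"INSERT INTO people ($columns) VALUES ($placeholders)"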

Author: CK50 <christian.k...@oracle.com>

Closes #10380 from CK50/master-SPARK-12010-2.

(cherry picked from commit 502476e45c314a1229b3bce1c61f5cb94a9fc04b)
Signed-off-by: Sean Owen <so...@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/865dd8bc
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/865dd8bc
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/865dd8bc

Branch: refs/heads/branch-1.6
Commit: 865dd8bccfc994310ad6664151d469043706ef3b
Parents: 4dd8712
Author: CK50 <christian.k...@oracle.com>
Authored: Thu Dec 24 13:39:11 2015 +0000
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Dec 24 13:41:35 2015 +0000

----------------------------------------------------------------------
 .../sql/execution/datasources/jdbc/JdbcUtils.scala      | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/865dd8bc/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
index 252f1cf..28cd688 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
@@ -63,14 +63,10 @@ object JdbcUtils extends Logging {
    * Returns a PreparedStatement that inserts a row into table via conn.
    */
   def insertStatement(conn: Connection, table: String, rddSchema: StructType): PreparedStatement = {
-    val sql = new StringBuilder(s"INSERT INTO $table VALUES (")
-    var fieldsLeft = rddSchema.fields.length
-    while (fieldsLeft > 0) {
-      sql.append("?")
-      if (fieldsLeft > 1) sql.append(", ") else sql.append(")")
-      fieldsLeft = fieldsLeft - 1
-    }
-    conn.prepareStatement(sql.toString())
+    val columns = rddSchema.fields.map(_.name).mkString(",")
+    val placeholders = rddSchema.fields.map(_ => "?").mkString(",")
+    val sql = s"INSERT INTO $table ($columns) VALUES ($placeholders)"
+    conn.prepareStatement(sql)
   }
 
   /**
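
As a hypothetical usage sketch (the H2 in-memory URL and the data are
illustrative only; rddSchema is reused from the sketch above), the
PreparedStatement returned by insertStatement() is then filled positionally,
one parameter per dataframe field:

import java.sql.DriverManager
import org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils

val conn = DriverManager.getConnection("jdbc:h2:mem:test")
val stmt = JdbcUtils.insertStatement(conn, "people", rddSchema)
stmt.setInt(1, 1)           // value for column "id"
stmt.setString(2, "Alice")  // value for column "name"
stmt.executeUpdate()
stmt.close()
conn.close()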

