[jira] [Commented] (SPARK-12437) Reserved words (like table) throws error when writing a data frame to JDBC
[ https://issues.apache.org/jira/browse/SPARK-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385148#comment-15385148 ]

Sean Zhong commented on SPARK-12437:
------------------------------------

This issue is fixed by SPARK-16387: column names are now quoted automatically. A table name that is a reserved word can still be quoted manually by the end user, like this:

spark.table("good2").write.jdbc(url, "`table`")

The following example runs successfully:

{code}
// Here the DataFrame has a column with the reserved name "table"
Seq((1, 2), (2, 3)).toDF("table", "b").createOrReplaceTempView("good2")

val connectionProperties = new java.util.Properties()
val url = "jdbc:mysql://127.0.0.1:3306/test?password=xx&user=root&useSSL=false"

// Here the target table is also named "table", quoted with backticks
spark.table("good2").write.jdbc(url, "`table`", connectionProperties)
{code}

> Reserved words (like table) throws error when writing a data frame to JDBC
> --------------------------------------------------------------------------
>
>          Key: SPARK-12437
>          URL: https://issues.apache.org/jira/browse/SPARK-12437
>      Project: Spark
>   Issue Type: Bug
>   Components: SQL
>     Reporter: Reynold Xin
>     Assignee: Apache Spark
>       Labels: starter
>
> From: A Spark user
> If you have a DataFrame column name that contains a SQL reserved word, it will not write to a JDBC source.
> This is somewhat similar to an error found in the redshift adapter:
> https://github.com/databricks/spark-redshift/issues/80
> I have produced this on a MySQL (AWS Aurora) database.
> Steps to reproduce:
> {code}
> val connectionProperties = new java.util.Properties()
> sqlContext.table("diamonds").write.jdbc(jdbcUrl, "diamonds", connectionProperties)
> {code}
> Exception:
> {code}
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'table DOUBLE PRECISION , price INTEGER , x DOUBLE PRECISION , y DOUBLE PRECISION' at line 1
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
>   at com.mysql.jdbc.Util.getInstance(Util.java:386)
>   at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
>   at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4237)
>   at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169)
>   at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2617)
>   at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
>   at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2825)
>   at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2156)
>   at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2459)
>   at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2376)
>   at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2360)
>   at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:275)
> {code}
> You can work around this by renaming the column on the dataframe before writing, but ideally we should be able to do something like encapsulate the name in quotes, which is allowed. Example:
> {code}
> CREATE TABLE `test_table_column` (
>   `id` int(11) DEFAULT NULL,
>   `table` varchar(100) DEFAULT NULL
> )
> {code}

-- 
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
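The backtick workaround shown in the report above maps directly onto MySQL's identifier-quoting rules. As a rough illustration (plain Java rather than Spark/Scala; the class and method names here are invented for this sketch and are not Spark's API), quoting the table name and every column name while generating the CREATE TABLE statement might look like:

```java
// Hypothetical helper illustrating MySQL identifier quoting; not part of
// Spark. Class and method names are invented for this sketch.
public class IdentifierQuoter {

    // Wrap an identifier in backticks, doubling any embedded backtick
    // (MySQL's escape rule inside a quoted identifier).
    public static String quote(String identifier) {
        return "`" + identifier.replace("`", "``") + "`";
    }

    // Build a CREATE TABLE statement with the table name and every column
    // name quoted, so reserved words such as "table" are accepted.
    public static String createTableDdl(String table, String[][] columns) {
        StringBuilder ddl = new StringBuilder("CREATE TABLE ")
                .append(quote(table)).append(" (");
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) ddl.append(", ");
            ddl.append(quote(columns[i][0])).append(' ').append(columns[i][1]);
        }
        return ddl.append(")").toString();
    }
}
```

Doubling embedded backticks matters because an identifier that itself contains a backtick would otherwise break out of the quoting.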
[jira] [Commented] (SPARK-12437) Reserved words (like table) throws error when writing a data frame to JDBC
[ https://issues.apache.org/jira/browse/SPARK-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15065295#comment-15065295 ]

Thomas Sebastian commented on SPARK-12437:
------------------------------------------

I looked at the similar problem in the redshift adapter, and I assume a possible solution could be to put every column in quotes before writing. Let me know your thoughts; I could work on a fix.
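The per-column quoting proposed above also has to account for databases quoting identifiers differently: MySQL uses backticks, while ANSI-style databases (e.g. Postgres, Redshift) use double quotes. A minimal sketch of dialect-aware quoting, using illustrative names that are not Spark's actual API, might be:

```java
// Minimal sketch of per-dialect identifier quoting; the names below are
// illustrative, not Spark's actual classes.
public class QuotingDialects {

    interface Dialect {
        String quoteIdentifier(String name);
    }

    // MySQL quotes identifiers with backticks, doubling embedded backticks.
    static final Dialect MYSQL = name -> "`" + name.replace("`", "``") + "`";

    // ANSI SQL quotes identifiers with double quotes, doubling embedded ones.
    static final Dialect ANSI = name -> "\"" + name.replace("\"", "\"\"") + "\"";
}
```

Routing every generated column and table name through the connection's dialect keeps a single write path working across reserved words in all supported databases.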
[jira] [Commented] (SPARK-12437) Reserved words (like table) throws error when writing a data frame to JDBC
[ https://issues.apache.org/jira/browse/SPARK-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15065297#comment-15065297 ]

Reynold Xin commented on SPARK-12437:
-------------------------------------

That's pretty much it. A fix would be great, [~thomastechs].