[ https://issues.apache.org/jira/browse/SPARK-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14601572#comment-14601572 ]

David Sabater commented on SPARK-8616:
--------------------------------------

A similar issue is exhibited with Spark 1.3.2 when loading from CSV files 
whose header columns contain a space character.

The DataFrame generated from the CSV file keeps the right schema definition 
(with the corresponding space character), but when this DataFrame is 
transformed and saved (e.g. written out as a Parquet file), the resulting 
Parquet file contains nulls in the columns whose names contain a space.

The https://github.com/databricks/spark-csv package is being used to parse the 
CSV, but as noted above the schema of the resulting input DataFrame is correct.
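A minimal sketch of this CSV-to-Parquet reproduction, using the Spark 
1.4-style DataFrameReader/Writer API and assuming sc is the usual spark-shell 
SparkContext; the file paths and the column name are assumptions for 
illustration, not taken from the report:

  import org.apache.spark.sql.SQLContext

  val sqlContext = new SQLContext(sc)

  // Load a CSV whose header contains a column name with a space,
  // using the databricks/spark-csv package.
  val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .load("/tmp/input.csv")  // hypothetical path

  df.printSchema()  // the schema correctly shows the column with the space

  // After writing to Parquet, the column whose name contains a space
  // reportedly comes back as nulls.
  df.write.parquet("/tmp/output.parquet")  // hypothetical path
  sqlContext.read.parquet("/tmp/output.parquet").show()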


> SQLContext doesn't handle tricky column names when loading from JDBC
> --------------------------------------------------------------------
>
>                 Key: SPARK-8616
>                 URL: https://issues.apache.org/jira/browse/SPARK-8616
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.0
>         Environment: Ubuntu 14.04, Sqlite 3.8.7, Spark 1.4.0
>            Reporter: Gergely Svigruha
>
> Reproduce:
>  - create a table in a relational database (in my case sqlite) with a column 
> name containing a space:
>  CREATE TABLE my_table (id INTEGER, "tricky column" TEXT);
>  - try to create a DataFrame using that table:
> sqlContext.read.format("jdbc").options(Map(
>   "url" -> "jdbc:sqlite:...",
>   "dbtable" -> "my_table")).load()
> java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such 
> column: tricky)
> According to the SQL spec this should be valid:
> http://savage.net.au/SQL/sql-99.bnf.html#delimited%20identifier
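A possible workaround sketch (not part of the report): pass a subquery as the 
"dbtable" option so the delimited identifier is quoted explicitly and 
re-aliased to a name without a space; the subquery and alias names here are 
assumptions:

  // Hypothetical workaround: quote the delimited identifier inside a
  // subquery so the SQL that Spark generates never references the bare
  // word `tricky`. The alias names are illustrative.
  val df = sqlContext.read.format("jdbc").options(Map(
    "url" -> "jdbc:sqlite:...",
    "dbtable" -> """(SELECT id, "tricky column" AS tricky_column FROM my_table) AS t"""
  )).load()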



