[ https://issues.apache.org/jira/browse/SPARK-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147371#comment-15147371 ]

Grzegorz Chilkiewicz commented on SPARK-13297:
----------------------------------------------

I've verified it on:
http://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest/spark-2.0.0-SNAPSHOT-bin-hadoop2.6.tgz
You are right, the problem looks fixed there!

However, it is still not fixed in branch-1.6.
I've found that this commit:
https://github.com/apache/spark/commit/7cd7f2202547224593517b392f56e49e4c94cabc
fixed the issue in the master branch.

Shouldn't we cherry-pick that commit? (It's big, so it could be hard...)

> [SQL] Backticks cannot be escaped in column names
> -------------------------------------------------
>
>                 Key: SPARK-13297
>                 URL: https://issues.apache.org/jira/browse/SPARK-13297
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.0
>            Reporter: Grzegorz Chilkiewicz
>            Priority: Minor
>
> We want to use backticks to escape spaces & minus signs in column names.
> Is there a way to escape a backtick inside a column name that is itself 
> surrounded by backticks?
> It is not documented in: 
> http://spark.apache.org/docs/latest/sql-programming-guide.html
> In MySQL there is a way: double the backtick. But that trick doesn't work in 
> Spark SQL.
> Am I correct, or am I just missing something?
> Code to reproduce the problem:
> https://github.com/grzegorz-chilkiewicz/SparkSqlEscapeBacktick
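
For reference, a minimal reproduction sketch of the quoted issue (assumes a Spark 1.6 build on the classpath; the table name `t` and the column name are illustrative, see the linked repository for the full code):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

object BacktickRepro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[*]").setAppName("backtick-repro"))
    val sqlContext = new SQLContext(sc)

    // Build a DataFrame with a column whose name literally contains a backtick: a`b
    val schema = StructType(Seq(StructField("a`b", IntegerType)))
    val df = sqlContext.createDataFrame(sc.parallelize(Seq(Row(1), Row(2))), schema)
    df.registerTempTable("t")

    // MySQL-style escaping: double the backtick inside the quoted identifier.
    // On branch-1.6 this statement fails to parse; on a 2.0.0-SNAPSHOT build
    // it parses and selects the column.
    sqlContext.sql("SELECT `a``b` FROM t").show()

    sc.stop()
  }
}
```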



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
