[ https://issues.apache.org/jira/browse/SPARK-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14160368#comment-14160368 ]

Ravindra Pesala commented on SPARK-3813:
----------------------------------------

The following code throws the exception shown below.
{code}
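    // Record is assumed here to be a case class along the lines of: case class Record(key: Int, value: String)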
    import sqlContext._
    val rdd = sc.parallelize((1 to 100).map(i => Record(i, s"val_$i")))
    rdd.registerTempTable("records")
    println("Result of SELECT *:")
    sql("SELECT case key when '93' then 'ravi' else key end FROM 
records").collect()
{code}

{code}
  java.lang.RuntimeException: [1.17] failure: ``UNION'' expected but identifier when found

SELECT case key when 93 then 0 else 1 end FROM records
                ^
        at scala.sys.package$.error(package.scala:27)
        at org.apache.spark.sql.catalyst.SqlParser.apply(SqlParser.scala:60)
        at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:74)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:267)
{code}
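
As the quoted description notes, the same functions work when the query goes through the HiveQL parser. A minimal sketch of that comparison, assuming a Hive-enabled Spark 1.1 build and the same rdd of Record rows as above (on HiveContext, sql() is parsed as HiveQL by default):

{code}
    import org.apache.spark.sql.hive.HiveContext

    // Sketch only: assumes spark-sql was built with Hive support.
    val hiveContext = new HiveContext(sc)
    import hiveContext._   // brings in sql() and the implicit RDD-to-SchemaRDD conversion

    rdd.registerTempTable("records")
    // Searched form: CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END
    sql("SELECT CASE WHEN key = 93 THEN 0 ELSE 1 END FROM records").collect()
    // Simple form: CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END
    sql("SELECT CASE key WHEN 93 THEN 0 ELSE 1 END FROM records").collect()
{code}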


> Support "case when" conditional functions in Spark SQL
> ------------------------------------------------------
>
>                 Key: SPARK-3813
>                 URL: https://issues.apache.org/jira/browse/SPARK-3813
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Ravindra Pesala
>             Fix For: 1.2.0
>
>
> SQL queries that use the following conditional functions are not supported in Spark SQL.
> {code}
> CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END
> CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END
> {code}
> The same functions can work in Spark HiveQL.


