[ https://issues.apache.org/jira/browse/SPARK-21154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brian Zhang updated SPARK-21154:
--------------------------------
    Description: 
When creating a view on top of another view in Spark SQL, a ParseException is thrown if the underlying view already exists in the Hive metastore.

Here are the steps to reproduce it:
*Hive* (I'm using 1.1.0):
hive> *CREATE TABLE my_table (id int, name string);*
OK
Time taken: 0.107 seconds
hive> *CREATE VIEW my_view(view_id,view_name) AS SELECT * FROM my_table;*
OK
Time taken: 0.075 seconds
Excerpt of Hive's DESCRIBE FORMATTED output for the view:
# View Information
View Original Text:     SELECT * FROM my_table
View Expanded Text:     SELECT `id` AS `view_id`, `name` AS `view_name` FROM (SELECT `my_table`.`id`, `my_table`.`name` FROM `default`.`my_table`) `default.my_view`
Time taken: 0.04 seconds, Fetched: 28 row(s)
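Note the subquery alias in the expanded text above: Hive quotes `default.my_view` as a single backquoted identifier. In the canonicalized SQL that Spark later generates, the same alias appears unquoted as AS default.my_view; since a subquery alias must be a single identifier, this is presumably what the parser rejects. A minimal, hypothetical sketch of that failing shape:

-- Hypothetical minimal form of the failing fragment: an unquoted
-- dotted name used as a subquery alias is not a single identifier,
-- so Spark SQL cannot parse it.
SELECT * FROM (SELECT 1 AS id) AS default.my_view;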

*Spark* (same behavior on Spark 2.1.0 and 2.1.1):
scala> *sqlContext.sql("CREATE VIEW my_view_spark AS SELECT * FROM my_view");*
java.lang.RuntimeException: Failed to analyze the canonicalized SQL: SELECT `gen_attr_0` AS `view_id`, `gen_attr_1` AS `view_name` FROM (SELECT `gen_attr_0`, `gen_attr_1` FROM (SELECT `gen_attr_2` AS `gen_attr_0`, `gen_attr_3` AS `gen_attr_1` FROM (SELECT `gen_attr_2`, `gen_attr_3` FROM (SELECT `id` AS `gen_attr_2`, `name` AS `gen_attr_3` FROM `default`.`my_table`) AS gen_subquery_0) AS default.my_view) AS my_view) AS my_view
  at org.apache.spark.sql.execution.command.CreateViewCommand.prepareTable(views.scala:222)
  at org.apache.spark.sql.execution.command.CreateViewCommand.run(views.scala:176)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
  at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
  ... 74 elided
Caused by: org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input 'FROM' expecting {<EOF>, 'WHERE', 'GROUP', 'ORDER', 'HAVING', 'LIMIT', 'LATERAL', 'WINDOW', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'SORT', 'CLUSTER', 'DISTRIBUTE'}(line 1, pos 62)

== SQL ==
SELECT `gen_attr_0` AS `view_id`, `gen_attr_1` AS `view_name` FROM (SELECT `gen_attr_0`, `gen_attr_1` FROM (SELECT `gen_attr_2` AS `gen_attr_0`, `gen_attr_3` AS `gen_attr_1` FROM (SELECT `gen_attr_2`, `gen_attr_3` FROM (SELECT `id` AS `gen_attr_2`, `name` AS `gen_attr_3` FROM `default`.`my_table`) AS gen_subquery_0) AS default.my_view) AS my_view) AS my_view
--------------------------------------------------------------^^^

  at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:197)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:99)
  at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:45)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
  at org.apache.spark.sql.execution.command.CreateViewCommand.prepareTable(views.scala:219)
  ... 90 more
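
As a hypothetical workaround until this is fixed, the Spark view can be defined directly against the base table, bypassing the Hive view's expanded text entirely (using the column names the Hive view exposed in the repro above):

-- Workaround sketch (untested): define the Spark view on the base
-- table with the same column names the Hive view exposed.
CREATE VIEW my_view_spark AS
SELECT id AS view_id, name AS view_name FROM my_table;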



> ParseException when Create View from another View in Spark SQL 
> ---------------------------------------------------------------
>
>                 Key: SPARK-21154
>                 URL: https://issues.apache.org/jira/browse/SPARK-21154
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0, 2.1.1
>            Reporter: Brian Zhang
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
