[jira] [Commented] (SPARK-22954) ANALYZE TABLE fails with NoSuchTableException for temporary tables (but should have reported "not supported on views")

2018-01-07 Thread Suchith J N (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16315336#comment-16315336
 ] 

Suchith J N commented on SPARK-22954:
-------------------------------------

I have opened a pull request. 

> ANALYZE TABLE fails with NoSuchTableException for temporary tables (but 
> should have reported "not supported on views")
> -------------------------------------------------------------------------
>
> Key: SPARK-22954
> URL: https://issues.apache.org/jira/browse/SPARK-22954
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
> Environment: {code}
> $ ./bin/spark-shell --version
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.3.0-SNAPSHOT
>       /_/
> Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_152
> Branch master
> Compiled by user jacek on 2018-01-04T05:44:05Z
> Revision 7d045c5f00e2c7c67011830e2169a4e130c3ace8
> {code}
>Reporter: Jacek Laskowski
>Priority: Minor
>
> {{ANALYZE TABLE}} fails with {{NoSuchTableException: Table or view 'names' 
> not found in database 'default';}} for temporary tables (views). The real 
> reason is that it only works with permanent tables, which [it could 
> report|https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala#L38]
>  if it were given the chance.
> {code}
> scala> names.createOrReplaceTempView("names")
> scala> sql("ANALYZE TABLE names COMPUTE STATISTICS")
> org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 
> 'names' not found in database 'default';
>   at 
> org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireTableExists(SessionCatalog.scala:181)
>   at 
> org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:398)
>   at 
> org.apache.spark.sql.execution.command.AnalyzeTableCommand.run(AnalyzeTableCommand.scala:36)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
>   at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:187)
>   at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:187)
>   at org.apache.spark.sql.Dataset$$anonfun$51.apply(Dataset.scala:3244)
>   at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>   at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3243)
>   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:187)
>   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:72)
>   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
>   ... 50 elided
> {code}
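The failure mode above can be illustrated with a minimal, self-contained sketch. This is a toy model, not Spark's actual classes: the names `AnalyzeSketch`, `permanentTables`, and `tempViews` are invented stand-ins. It only shows why a lookup that consults the permanent-table catalog alone lets a temp view fall through to a "not found" error instead of a "not supported on views" message.

```scala
// Toy model of the failing lookup; all names here are illustrative
// stand-ins, not Spark's real API.
object AnalyzeSketch {
  val permanentTables = Set("people") // tables known to the metastore
  val tempViews       = Set("names")  // session-local temp views, never consulted below

  // Mirrors the role of SessionCatalog.getTableMetadata: only permanent
  // tables count, so a temp view falls straight through to "not found".
  def getTableMetadata(table: String): String =
    if (permanentTables(table)) s"metadata($table)"
    else throw new NoSuchElementException(
      s"Table or view '$table' not found in database 'default';")

  // Mirrors AnalyzeTableCommand.run: it never checks tempViews first,
  // so the view-specific error message is unreachable.
  def analyze(table: String): String = getTableMetadata(table)
}
```

Running `AnalyzeSketch.analyze("names")` throws the "not found" error even though the view exists, which is exactly the reported symptom.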



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-22954) ANALYZE TABLE fails with NoSuchTableException for temporary tables (but should have reported "not supported on views")

2018-01-07 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16315334#comment-16315334
 ] 

Apache Spark commented on SPARK-22954:
--------------------------------------

User 'suchithjn225' has created a pull request for this issue:
https://github.com/apache/spark/pull/20177




[jira] [Commented] (SPARK-22954) ANALYZE TABLE fails with NoSuchTableException for temporary tables (but should have reported "not supported on views")

2018-01-05 Thread Suchith J N (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313956#comment-16313956
 ] 

Suchith J N commented on SPARK-22954:
-------------------------------------

I found another method in org.apache.spark.sql.catalyst.catalog.SessionCatalog 
that looks useful here: *getTempViewOrPermanentTableMetadata()*.
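A simplified sketch of the behavior such a method enables, with temp views resolved before permanent tables so the command can fail with the intended message. This is not the actual patch; `FixedAnalyzeSketch` and its members are invented toy names, and only the dispatch logic is meant to match the idea.

```scala
// Simplified sketch of the intended behavior; not the actual Spark patch,
// and all names here are invented for illustration.
object FixedAnalyzeSketch {
  sealed trait Metadata
  case object TempView  extends Metadata
  case object HiveTable extends Metadata

  val permanentTables = Set("people")
  val tempViews       = Set("names")

  // Mirrors the idea of getTempViewOrPermanentTableMetadata:
  // temp views resolve first, then permanent tables.
  def getTempViewOrPermanentTableMetadata(name: String): Metadata =
    if (tempViews(name)) TempView
    else if (permanentTables(name)) HiveTable
    else throw new NoSuchElementException(s"Table or view '$name' not found")

  // With the metadata in hand, the command can report the real problem
  // instead of a misleading "not found".
  def analyze(name: String): String =
    getTempViewOrPermanentTableMetadata(name) match {
      case TempView =>
        throw new UnsupportedOperationException(
          "ANALYZE TABLE is not supported on views.")
      case HiveTable => s"statistics computed for $name"
    }
}
```

With this dispatch, `analyze("names")` fails with "not supported on views" rather than `NoSuchTableException`.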




[jira] [Commented] (SPARK-22954) ANALYZE TABLE fails with NoSuchTableException for temporary tables (but should have reported "not supported on views")

2018-01-05 Thread Suchith J N (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313915#comment-16313915
 ] 

Suchith J N commented on SPARK-22954:
-------------------------------------

I ran the commands you mentioned. There are indeed two catalogs, and they give 
different answers. Try these out:

{code:java}
scala> names.sparkSession.catalog.tableExists("names")
res1: Boolean = true

scala> names.sparkSession.sessionState.catalog.tableExists(TableIdentifier("names"))
res2: Boolean = false
{code}

According to the stack trace, Spark looks the table up through 
sessionState.catalog, while the temporary view is actually visible through 
sparkSession.catalog.
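The differing answers can be modeled with a small self-contained sketch. This is a toy illustration, not Spark's API: `ToySessionCatalog` and `ToyCatalog` are invented classes that only capture the idea of one lookup path seeing permanent tables alone while the other also resolves temp views.

```scala
// Toy illustration of the two lookup paths; simplified, not Spark's API.
case class TableIdentifier(table: String)

// Stand-in for the internal session catalog: TableIdentifier lookups
// see permanent tables only.
class ToySessionCatalog(permanent: Set[String], tempViews: Set[String]) {
  def tableExists(id: TableIdentifier): Boolean = permanent(id.table)
  def isTemporaryTable(id: TableIdentifier): Boolean = tempViews(id.table)
}

// Stand-in for the public catalog: it also resolves temp views,
// hence the differing answers for the same name.
class ToyCatalog(underlying: ToySessionCatalog) {
  def tableExists(name: String): Boolean = {
    val id = TableIdentifier(name)
    underlying.tableExists(id) || underlying.isTemporaryTable(id)
  }
}

val session = new ToySessionCatalog(permanent = Set("people"), tempViews = Set("names"))
val catalog = new ToyCatalog(session)
// catalog.tableExists("names") is true; session.tableExists(TableIdentifier("names")) is false,
// matching the res1/res2 pair above.
```

A command that only consults the internal path therefore misses the temp view entirely, which is consistent with the stack trace.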
