[ https://issues.apache.org/jira/browse/SPARK-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-5855.
------------------------------
    Resolution: Won't Fix

> [Spark SQL] The 'explain' command in Spark SQL does not support analyzing the 'CREATE VIEW' DDL statement
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-5855
>                 URL: https://issues.apache.org/jira/browse/SPARK-5855
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.2.0
>            Reporter: Yi Zhou
>            Priority: Minor
>
> The 'explain' command in Spark SQL does not support analyzing the 'CREATE 
> VIEW' DDL statement. For example, in the Spark-SQL CLI:
>  > explain
>          > CREATE VIEW q24_spark_RUN_QUERY_0_temp_competitor_price_view AS
>          > SELECT
>          >   i_item_sk, (imp_competitor_price - i_current_price)/i_current_price AS price_change,
>          >   imp_start_date, (imp_end_date - imp_start_date) AS no_days
>          > FROM item i
>          > JOIN item_marketprices imp ON i.i_item_sk = imp.imp_item_sk
>          > WHERE i.i_item_sk IN (7, 17)
>          > AND imp.imp_competitor_price < i.i_current_price;
> 15/02/17 14:06:50 WARN HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
> 15/02/17 14:06:50 INFO ParseDriver: Parsing command: explain
> CREATE VIEW q24_spark_RUN_QUERY_0_temp_competitor_price_view AS
> SELECT
>   i_item_sk, (imp_competitor_price - i_current_price)/i_current_price AS price_change,
>   imp_start_date, (imp_end_date - imp_start_date) AS no_days
> FROM item i
> JOIN item_marketprices imp ON i.i_item_sk = imp.imp_item_sk
> WHERE i.i_item_sk IN (7, 17)
> AND imp.imp_competitor_price < i.i_current_price
> 15/02/17 14:06:50 INFO ParseDriver: Parse Completed
> 15/02/17 14:06:50 INFO SparkContext: Starting job: collect at SparkPlan.scala:84
> 15/02/17 14:06:50 INFO DAGScheduler: Got job 3 (collect at SparkPlan.scala:84) with 1 output partitions (allowLocal=false)
> 15/02/17 14:06:50 INFO DAGScheduler: Final stage: Stage 3(collect at SparkPlan.scala:84)
> 15/02/17 14:06:50 INFO DAGScheduler: Parents of final stage: List()
> 15/02/17 14:06:50 INFO DAGScheduler: Missing parents: List()
> 15/02/17 14:06:50 INFO DAGScheduler: Submitting Stage 3 (MappedRDD[12] at map at SparkPlan.scala:84), which has no missing parents
> 15/02/17 14:06:50 INFO MemoryStore: ensureFreeSpace(2560) called with curMem=4122, maxMem=370503843
> 15/02/17 14:06:50 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 2.5 KB, free 353.3 MB)
> 15/02/17 14:06:50 INFO MemoryStore: ensureFreeSpace(1562) called with curMem=6682, maxMem=370503843
> 15/02/17 14:06:50 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 1562.0 B, free 353.3 MB)
> 15/02/17 14:06:50 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on bignode1:56237 (size: 1562.0 B, free: 353.3 MB)
> 15/02/17 14:06:50 INFO BlockManagerMaster: Updated info of block broadcast_5_piece0
> 15/02/17 14:06:50 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:838
> 15/02/17 14:06:50 INFO DAGScheduler: Submitting 1 missing tasks from Stage 3 (MappedRDD[12] at map at SparkPlan.scala:84)
> 15/02/17 14:06:50 INFO YarnClientClusterScheduler: Adding task set 3.0 with 1 tasks
> 15/02/17 14:06:50 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 3, bignode2, PROCESS_LOCAL, 2425 bytes)
> 15/02/17 14:06:50 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on bignode2:51446 (size: 1562.0 B, free: 1766.5 MB)
> 15/02/17 14:06:51 INFO DAGScheduler: Stage 3 (collect at SparkPlan.scala:84) finished in 0.147 s
> 15/02/17 14:06:51 INFO TaskSetManager: Finished task 0.0 in stage 3.0 (TID 3) in 135 ms on bignode2 (1/1)
> 15/02/17 14:06:51 INFO DAGScheduler: Job 3 finished: collect at SparkPlan.scala:84, took 0.164711 s
> 15/02/17 14:06:51 INFO YarnClientClusterScheduler: Removed TaskSet 3.0, whose tasks have all completed, from pool
> == Physical Plan ==
> PhysicalRDD [], ParallelCollectionRDD[4] at parallelize at SparkStrategies.scala:195
> Time taken: 0.292 seconds
> 15/02/17 14:06:51 INFO CliDriver: Time taken: 0.292 seconds
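
The trivial "PhysicalRDD []" plan above appears to be the placeholder plan
Spark SQL emits when it hands a statement off as a command instead of
planning it as a query. A minimal workaround sketch, assuming the same item
and item_marketprices tables from the report: run 'explain' against the
view's defining SELECT, which does get planned:

    -- Workaround sketch (not from the report): explain the defining query
    -- rather than the CREATE VIEW DDL, so a real physical plan is produced.
    EXPLAIN
    SELECT
      i_item_sk,
      (imp_competitor_price - i_current_price) / i_current_price AS price_change,
      imp_start_date,
      (imp_end_date - imp_start_date) AS no_days
    FROM item i
    JOIN item_marketprices imp ON i.i_item_sk = imp.imp_item_sk
    WHERE i.i_item_sk IN (7, 17)
      AND imp.imp_competitor_price < i.i_current_price;

EXPLAIN EXTENDED can be used in place of EXPLAIN to additionally print the
parsed, analyzed, and optimized logical plans.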



