[ https://issues.apache.org/jira/browse/SPARK-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143540#comment-14143540 ]

Michael Armbrust commented on SPARK-3641:
-----------------------------------------

The idea here is to be able to support more than one SQL context, so I think we 
will always need to populate this field before constructing physical operators.  
To avoid bugs like this, it would be good to limit the number of places where 
physical plans are constructed.  Right now it's kind of a hack that we use 
SparkLogicalPlan as a connector and manually create the physical ExistingRDD 
operator.  If we instead had a true logical concept for ExistingRDDs, then this 
bug would not have occurred.
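
To illustrate the pattern at issue, here is a small self-contained sketch (plain 
Scala, not actual Spark source; PlanContext, PhysicalOp, and Demo are made-up 
names) of a thread-local "current context" that physical operators capture at 
construction time, and of the NPE you get when an entry point forgets to set it:

    // Sketch only: PlanContext.current stands in for SparkPlan.currentContext,
    // and the String value stands in for the SQLContext it would hold.
    object PlanContext {
      val current = new ThreadLocal[String]()
    }

    class PhysicalOp {
      // Captured at construction; stays null if the entry point never set it.
      val context: String = PlanContext.current.get()
      def run(): Unit =
        println("running with context of length " + context.length) // NPE if null
    }

    object Demo extends App {
      PlanContext.current.set("ctx-A")   // an entry point that populates the field
      new PhysicalOp().run()             // fine

      PlanContext.current.remove()       // an entry point that forgets to
      try new PhysicalOp().run()
      catch { case _: NullPointerException => println("NPE: context never populated") }
    }

Reducing the number of entry points that construct PhysicalOp (or, in Spark, 
SparkPlan) is exactly what makes this class of bug hard to reintroduce.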

> Correctly populate SparkPlan.currentContext
> -------------------------------------------
>
>                 Key: SPARK-3641
>                 URL: https://issues.apache.org/jira/browse/SPARK-3641
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Yin Huai
>            Priority: Critical
>
> After creating a new SQLContext, we need to populate SparkPlan.currentContext 
> before we create any SparkPlan. Right now, only SQLContext.createSchemaRDD 
> populates SparkPlan.currentContext. SQLContext.applySchema is missing this 
> call, and we can get an NPE as described in 
> http://qnalist.com/questions/5162981/spark-sql-1-1-0-npe-when-join-two-cached-table.
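
For context, a hedged sketch of how the report quoted above can be reproduced 
against the 1.1.0 API (table and column names are illustrative, and whether the 
NPE actually fires can depend on what already ran on the same thread):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql._

    object Spark3641Repro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("SPARK-3641"))
        val sqlContext = new SQLContext(sc)

        val schema = StructType(Seq(StructField("id", StringType, nullable = true)))
        val rows = sc.parallelize(Seq(Row("a"), Row("b")))

        // applySchema does not populate SparkPlan.currentContext in 1.1.0 ...
        sqlContext.applySchema(rows, schema).registerTempTable("t1")
        sqlContext.applySchema(rows, schema).registerTempTable("t2")
        sqlContext.cacheTable("t1")
        sqlContext.cacheTable("t2")

        // ... so joining the two cached tables can hit the NPE described above.
        sqlContext.sql("SELECT * FROM t1 JOIN t2 ON t1.id = t2.id").collect()
      }
    }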


