Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

2017-02-26 Thread Liang-Chi Hsieh
Hi Stan,

Looks like it is the same issue we are working to solve. Related PRs are:

https://github.com/apache/spark/pull/16998
https://github.com/apache/spark/pull/16785

You can take a look at those PRs and help review them too. Thanks.


StanZhai wrote
> Thanks for Cheng's help.
>
> It must be something wrong with InferFiltersFromConstraints. I just
> removed InferFiltersFromConstraints from
> org/apache/spark/sql/catalyst/optimizer/Optimizer.scala to avoid this
> issue. I will analyze this issue with the method you provided.
>
> ------ Original ------
> From: "Cheng Lian [via Apache Spark Developers List]" <ml-node+s1001551n21069...@n3.nabble.com>
> Send time: Friday, Feb 24, 2017 2:28 AM
> To: "Stan Zhai" <m...@zhaishidan.cn>
> Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0
>
> This one seems to be relevant, but it's already fixed in 2.1.0.
>
> One way to debug is to turn on trace log and check how the
> analyzer/optimizer behaves.
>
> On 2/22/17 11:11 PM, StanZhai wrote:
> Could this be related to https://issues.apache.org/jira/browse/SPARK-17733 ?
>
> ------ Original ------
> From: "Cheng Lian-3 [via Apache Spark Developers List]" <[hidden email]>
> Send time: Thursday, Feb 23, 2017 9:43 AM
> To: "Stan Zhai" <[hidden email]>
> Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0
>
> Just from the thread dump you provided, it seems that this particular
> query plan jams our optimizer. However, it's also possible that the
> driver just happened to be running optimizer rules at that particular
> time point.
>
> Since query planning doesn't touch any actual data, could you please
> try to minimize this query by replacing the actual relations with
> temporary views derived from Scala local collections? In this way, it
> would be much easier for others to reproduce the issue.
>
> Cheng
>
> On 2/22/17 5:16 PM, Stan Zhai wrote:
> Thanks for Lian's reply.
>
> Here is the QueryPlan generated by Spark 1.6.2 (I can't get it in Spark 2.1.0):
> ...
>
> ------ Original ------
> Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0
>
> What is the query plan? We had once observed query plans that grow
> exponentially in iterative ML workloads and the query planner hangs
> forever. For example, each iteration combines 4 plan trees of the last
> iteration and forms a larger plan tree. The size of the plan tree can
> easily reach billions of nodes after 15 iterations.
>
> On 2/22/17 9:29 AM, Stan Zhai wrote:
> Hi all,
>
> The driver hangs at DataFrame.rdd in Spark 2.1.0 when the DataFrame(SQL)
> is complex. Following is the thread dump of my driver:
> ...


Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

2017-02-23 Thread StanZhai
Thanks for Cheng's help.


It must be something wrong with InferFiltersFromConstraints. I just removed
InferFiltersFromConstraints from
org/apache/spark/sql/catalyst/optimizer/Optimizer.scala to avoid this issue. I
will analyze this issue with the method you provided.




------ Original ------
From: "Cheng Lian [via Apache Spark Developers List]" <ml-node+s1001551n21069...@n3.nabble.com>
Send time: Friday, Feb 24, 2017 2:28 AM
To: "Stan Zhai" <m...@zhaishidan.cn>
Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0



   
This one seems to be relevant, but it's already fixed in 2.1.0.

One way to debug is to turn on trace log and check how the
analyzer/optimizer behaves.

On 2/22/17 11:11 PM, StanZhai wrote:
Could this be related to https://issues.apache.org/jira/browse/SPARK-17733 ?

------ Original ------
From: "Cheng Lian-3 [via Apache Spark Developers List]" <[hidden email]>
Send time: Thursday, Feb 23, 2017 9:43 AM
To: "Stan Zhai" <[hidden email]>
Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

Just from the thread dump you provided, it seems that this particular
query plan jams our optimizer. However, it's also possible that the
driver just happened to be running optimizer rules at that particular
time point.

Since query planning doesn't touch any actual data, could you please
try to minimize this query by replacing the actual relations with
temporary views derived from Scala local collections? In this way, it
would be much easier for others to reproduce the issue.

Cheng

On 2/22/17 5:16 PM, Stan Zhai wrote:
Thanks for Lian's reply.

Here is the QueryPlan generated by Spark 1.6.2 (I can't get it in Spark 2.1.0):
...

------ Original ------
Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

What is the query plan? We had once observed query plans that grow
exponentially in iterative ML workloads and the query planner hangs
forever. For example, each iteration combines 4 plan trees of the last
iteration and forms a larger plan tree. The size of the plan tree can
easily reach billions of nodes after 15 iterations.

On 2/22/17 9:29 AM, Stan Zhai wrote:
Hi all,

The driver hangs at DataFrame.rdd in Spark 2.1.0 when the DataFrame(SQL)
is complex. Following is the thread dump of my driver:
...



Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

2017-02-23 Thread Cheng Lian

This one seems to be relevant, but it's already fixed in 2.1.0.

One way to debug is to turn on trace log and check how the 
analyzer/optimizer behaves.
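[Editorial note: one concrete way to follow this advice, sketched under the assumption of the stock log4j setup shipped with Spark 2.x (the logger name below is my assumption, not given in the thread), is to raise the Catalyst rule executor's log level in conf/log4j.properties so each rule application is traced:]

```
# conf/log4j.properties -- assumed Spark 2.x logger name
log4j.logger.org.apache.spark.sql.catalyst.rules.RuleExecutor=TRACE
```

With this in place, the driver log records each analyzer/optimizer rule as it fires, so a rule that never returns (or fires endlessly) stands out.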



On 2/22/17 11:11 PM, StanZhai wrote:
Could this be related to 
https://issues.apache.org/jira/browse/SPARK-17733 ?



------ Original ------
From: "Cheng Lian-3 [via Apache Spark Developers List]" <[hidden email]>
Send time: Thursday, Feb 23, 2017 9:43 AM
To: "Stan Zhai" <[hidden email]>
Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

Just from the thread dump you provided, it seems that this particular 
query plan jams our optimizer. However, it's also possible that the 
driver just happened to be running optimizer rules at that particular 
time point.


Since query planning doesn't touch any actual data, could you please
try to minimize this query by replacing the actual relations with
temporary views derived from Scala local collections? In this way, it
would be much easier for others to reproduce the issue.


Cheng


On 2/22/17 5:16 PM, Stan Zhai wrote:

Thanks for Lian's reply.

Here is the QueryPlan generated by Spark 1.6.2 (I can't get it in
Spark 2.1.0):

...

------ Original ------
Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

What is the query plan? We had once observed query plans that grow 
exponentially in iterative ML workloads and the query planner hangs 
forever. For example, each iteration combines 4 plan trees of the 
last iteration and forms a larger plan tree. The size of the plan 
tree can easily reach billions of nodes after 15 iterations.
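[Editorial note: the arithmetic behind that claim checks out. If each iteration creates one new root over 4 copies of the previous plan, the node count follows n(k) = 4·n(k−1) + 1, which passes a billion after 15 iterations. A quick sanity check in Python:]

```python
# Plan-tree growth model: each iteration combines 4 copies of the
# previous tree under one new root node.
nodes = 1
for _ in range(15):
    nodes = 4 * nodes + 1
print(nodes)  # 1431655765 -- over a billion nodes after 15 iterations
```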



On 2/22/17 9:29 AM, Stan Zhai wrote:

Hi all,

The driver hangs at DataFrame.rdd in Spark 2.1.0 when the
DataFrame(SQL) is complex. Following is the thread dump of my driver:

...











Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

2017-02-22 Thread StanZhai
Could this be related to https://issues.apache.org/jira/browse/SPARK-17733 ?




------ Original ------
From: "Cheng Lian-3 [via Apache Spark Developers List]" <ml-node+s1001551n2105...@n3.nabble.com>
Send time: Thursday, Feb 23, 2017 9:43 AM
To: "Stan Zhai" <m...@zhaishidan.cn>
Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0



   
Just from the thread dump you provided, it seems that this particular
query plan jams our optimizer. However, it's also possible that the
driver just happened to be running optimizer rules at that particular
time point.

Since query planning doesn't touch any actual data, could you please
try to minimize this query by replacing the actual relations with
temporary views derived from Scala local collections? In this way, it
would be much easier for others to reproduce the issue.

Cheng

On 2/22/17 5:16 PM, Stan Zhai wrote:
Thanks for Lian's reply.

Here is the QueryPlan generated by Spark 1.6.2 (I can't get it in Spark 2.1.0):
...

------ Original ------
Subject: Re: The driver hangs at DataFrame.rdd in Spark 2.1.0

What is the query plan? We had once observed query plans that grow
exponentially in iterative ML workloads and the query planner hangs
forever. For example, each iteration combines 4 plan trees of the last
iteration and forms a larger plan tree. The size of the plan tree can
easily reach billions of nodes after 15 iterations.

On 2/22/17 9:29 AM, Stan Zhai wrote:
Hi all,

The driver hangs at DataFrame.rdd in Spark 2.1.0 when the DataFrame(SQL)
is complex. Following is the thread dump of my driver:
...




The driver hangs at DataFrame.rdd in Spark 2.1.0

2017-02-22 Thread StanZhai
Hi all,


The driver hangs at DataFrame.rdd in Spark 2.1.0 when the DataFrame(SQL) is
complex. Following is the thread dump of my driver:


org.apache.spark.sql.catalyst.expressions.AttributeReference.equals(namedExpressions.scala:230)
org.apache.spark.sql.catalyst.expressions.IsNotNull.equals(nullExpressions.scala:312)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
org.apache.spark.sql.catalyst.expressions.Or.equals(predicates.scala:315)
scala.collection.mutable.FlatHashTable$class.addEntry(FlatHashTable.scala:151)
scala.collection.mutable.HashSet.addEntry(HashSet.scala:40)
scala.collection.mutable.FlatHashTable$class.addElem(FlatHashTable.scala:139)
scala.collection.mutable.HashSet.addElem(HashSet.scala:40)
scala.collection.mutable.HashSet.$plus$eq(HashSet.scala:59)
scala.collection.mutable.HashSet.$plus$eq(HashSet.scala:40)
scala.collection.generic.Growable$$anonfun$$plus$plus$eq$1.apply(Growable.scala:59)
scala.collection.generic.Growable$$anonfun$$plus$plus$eq$1.apply(Growable.scala:59)
scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
scala.collection.mutable.AbstractSet.$plus$plus$eq(Set.scala:46)
scala.collection.mutable.HashSet.clone(HashSet.scala:83)
scala.collection.mutable.HashSet.clone(HashSet.scala:40)
org.apache.spark.sql.catalyst.expressions.ExpressionSet.$plus(ExpressionSet.scala:65)
org.apache.spark.sql.catalyst.expressions.ExpressionSet.$plus(ExpressionSet.scala:50)
scala.collection.SetLike$$anonfun$$plus$plus$1.apply(SetLike.scala:141)
scala.collection.SetLike$$anonfun$$plus$plus$1.apply(SetLike.scala:141)
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:316)
scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:972)
scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:972)
scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:972)
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
scala.collection.TraversableOnce$class.$div$colon(TraversableOnce.scala:151)
scala.collection.AbstractTraversable.$div$colon(Traversable.scala:104)
scala.collection.SetLike$class.$plus$plus(SetLike.scala:141)
org.apache.spark.sql.catalyst.expressions.ExpressionSet.$plus$plus(ExpressionSet.scala:50)
org.apache.spark.sql.catalyst.plans.logical.UnaryNode$$anonfun$getAliasedConstraints$1.apply(LogicalPlan.scala:300)
org.apache.spark.sql.catalyst.plans.logical.UnaryNode$$anonfun$getAliasedConstraints$1.apply(LogicalPlan.scala:297)
scala.collection.immutable.List.foreach(List.scala:381)
org.apache.spark.sql.catalyst.plans.logical.UnaryNode.getAliasedConstraints(LogicalPlan.scala:297)
org.apache.spark.sql.catalyst.plans.logical.Project.validConstraints(basicLogicalOperators.scala:58)
org.apache.spark.sql.catalyst.plans.QueryPlan.constraints$lzycompute(QueryPlan.scala:187)
  => holding Monitor(org.apache.spark.sql.catalyst.plans.logical.Join@1365611745})
org.apache.spark.sql.catalyst.plans.QueryPlan.constraints(QueryPlan.scala:187)
org.apache.spark.sql.catalyst.plans.logical.Project.validConstraints(basicLogicalOperators.scala:58)
org.apache.spark.sql.catalyst.plans.QueryPlan.constraints$lzycompute(QueryPlan.scala:187)
  => holding Monitor(org.apache.spark.sql.catalyst.plans.logical.Join@1365
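[Editorial note: the dump above is dominated by recursive Or.equals calls triggered from hash-set bookkeeping inside getAliasedConstraints. A minimal Python model (hypothetical classes, not Spark's actual implementation) of why this gets expensive: structural equality on a deeply nested Or chain must walk every node, and set-based deduplication of many such colliding expressions repeats that walk on each comparison:]

```python
class Or:
    """Toy binary predicate node with structural equality (hypothetical model)."""
    comparisons = 0  # total Or-node equality checks, across all instances

    def __init__(self, left, right):
        self.left, self.right = left, right

    def __eq__(self, other):
        Or.comparisons += 1
        return (isinstance(other, Or)
                and self.left == other.left
                and self.right == other.right)

    def __hash__(self):
        # Deliberately weak hash: forces the set to resolve every collision
        # via __eq__, mimicking many structurally similar predicates.
        return 0

def chain(depth):
    """Build Or(Or(...Or('a', 'a')..., 'a'), 'a'), nested `depth` levels deep."""
    expr = 'a'
    for _ in range(depth):
        expr = Or(expr, 'a')
    return expr

# A single equality check on two identical depth-200 chains walks all 200 nodes.
Or.comparisons = 0
assert chain(200) == chain(200)
print(Or.comparisons)  # 200

# Deduplicating 50 equal chains in a set repeats that 200-node walk per insert.
seen = set()
Or.comparisons = 0
for _ in range(50):
    seen.add(chain(200))
print(len(seen), Or.comparisons)  # 1 9800
```

The cost multiplies again when, as in the dump, the set itself is cloned and rebuilt for each aliased constraint.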