[https://issues.apache.org/jira/browse/SPARK-25150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16632274#comment-16632274]

Peter Toth edited comment on SPARK-25150 at 9/28/18 7:28 PM:
-------------------------------------------------------------

[~nchammas], sorry for the late reply.

There is only one issue here. Please see zombie-analysis.py: it contains two 
joins, and both define their join conditions explicitly, so setting 
spark.sql.crossJoin.enabled=true should not have any effect.

The error you see when spark.sql.crossJoin.enabled=false (the default) and 
the incorrect results when spark.sql.crossJoin.enabled=true have the same 
root cause: the join condition is handled incorrectly.

Please see my PR's description for further details: 
[https://github.com/apache/spark/pull/22318]

 


was (Author: petertoth):
[~nchammas], sorry for the late reply.

There is only one issue here. Please see zombie-analysis.py: it contains two 
joins, and both define their join conditions explicitly, so setting 
spark.sql.crossJoin.enabled=true should not have any effect.

Simply put, the SQL statement should not fail; please see my PR's 
description for further details: 
[https://github.com/apache/spark/pull/22318]

 

> Joining DataFrames derived from the same source yields confusing/incorrect 
> results
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-25150
>                 URL: https://issues.apache.org/jira/browse/SPARK-25150
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: Nicholas Chammas
>            Priority: Major
>         Attachments: expected-output.txt, 
> output-with-implicit-cross-join.txt, output-without-implicit-cross-join.txt, 
> persons.csv, states.csv, zombie-analysis.py
>
>
> I have two DataFrames, A and B. From B, I have derived two additional 
> DataFrames, B1 and B2. When joining A to B1 and B2, I'm getting a very 
> confusing error:
> {code:java}
> Join condition is missing or trivial.
> Either: use the CROSS JOIN syntax to allow cartesian products between these
> relations, or: enable implicit cartesian products by setting the configuration
> variable spark.sql.crossJoin.enabled=true;
> {code}
> Then, when I configure "spark.sql.crossJoin.enabled=true" as instructed, 
> Spark appears to give me incorrect answers.
> I am not sure if I am missing something obvious, or if there is some kind of 
> bug here. The "join condition is missing" error is confusing and doesn't make 
> sense to me, and the seemingly incorrect output is concerning.
> I've attached a reproduction, along with the output I'm seeing with and 
> without the implicit cross join enabled.
> I realize the join I've written is not "correct" in the sense that it should 
> be a left outer join instead of an inner join (since some of the aggregates are 
> not available for all states), but that doesn't explain Spark's behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
