[ https://issues.apache.org/jira/browse/SPARK-17709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534417#comment-15534417 ]

Dilip Biswal commented on SPARK-17709:
--------------------------------------

[~smilegator] Hi Sean, I tried it on my master branch and don't see the 
exception.

{code}
test("join issue") {
   withTable("tbl") {
     sql("CREATE TABLE tbl(key1 int, key2 int, totalprice int, itemcount int)")
     sql("insert into tbl values (1, 1, 1, 1)")
     val d1 = sql("select * from tbl")
     val df1 = d1.groupBy("key1","key2")
       .agg(avg("totalprice").as("avgtotalprice"))
     val df2 = d1.groupBy("key1","key2")
       .agg(avg("itemcount").as("avgqty"))
     df1.join(df2, Seq("key1","key2")).show()
   }
 }

Output

+----+----+-------------+------+
|key1|key2|avgtotalprice|avgqty|
+----+----+-------------+------+
|   1|   1|          1.0|   1.0|
+----+----+-------------+------+
{code}
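
For anyone still hitting the resolution error on 2.0.0 (quoted below), here is a minimal workaround sketch, assuming the same tbl schema as the reproduction above: aliasing both aggregated DataFrames and joining on explicit Column conditions instead of the Seq("key1","key2") usingColumns form may sidestep the resolution problem. This is not verified against the reporter's Hive/S3 setup.

{code}
// Hypothetical workaround sketch (assumes the tbl schema above; not verified
// against the reporter's environment): alias both aggregated DataFrames and
// join on explicit column equality, then project the keys from one side.
import org.apache.spark.sql.functions.{avg, col}

val d1  = spark.sql("select * from tbl")
val df1 = d1.groupBy("key1", "key2").agg(avg("totalprice").as("avgtotalprice"))
val df2 = d1.groupBy("key1", "key2").agg(avg("itemcount").as("avgqty"))

val a = df1.alias("a")
val b = df2.alias("b")
a.join(b, col("a.key1") === col("b.key1") && col("a.key2") === col("b.key2"))
  .select(col("a.key1"), col("a.key2"), col("a.avgtotalprice"), col("b.avgqty"))
  .show()
{code}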



> spark 2.0 join - column resolution error
> ----------------------------------------
>
>                 Key: SPARK-17709
>                 URL: https://issues.apache.org/jira/browse/SPARK-17709
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>            Reporter: Ashish Shrowty
>              Labels: easyfix
>
> If I try to inner-join two dataframes which originated from the same initial
> dataframe, loaded using a spark.sql() call, it results in an error:
> // reading from Hive .. the data is stored in Parquet format in Amazon S3
> val d1 = spark.sql("select * from <hivetable>")  
> val df1 = d1.groupBy("key1","key2")
>           .agg(avg("totalprice").as("avgtotalprice"))
> val df2 = d1.groupBy("key1","key2")
>           .agg(avg("itemcount").as("avgqty")) 
> df1.join(df2, Seq("key1","key2")) gives the error:
> org.apache.spark.sql.AnalysisException: using columns ['key1,'key2] can 
> not be resolved given input columns: [key1, key2, avgtotalprice, avgqty];
> If the same DataFrame is initialized via spark.read.parquet(), the above code
> works. The same code also worked with Spark 1.6.2.


