[ https://issues.apache.org/jira/browse/SPARK-16938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15561067#comment-15561067 ]
Dongjoon Hyun edited comment on SPARK-16938 at 10/10/16 2:30 AM:
-----------------------------------------------------------------
This issue is not in progress anymore. I closed my PR. Please take this over if anyone is interested.

was (Author: dongjoon): This issue is not in progress anymore. I close my PR. Please take this if anyone is interested in.

> Cannot resolve column name after a join
> ---------------------------------------
>
> Key: SPARK-16938
> URL: https://issues.apache.org/jira/browse/SPARK-16938
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.0
> Reporter: Mathieu D
> Priority: Minor
>
> Found a change of behavior in spark-2.0.0 which breaks a query in our code base.
> The following works on previous Spark versions, 1.6.1 up to 2.0.0-preview:
> {code}
> val dfa = Seq((1, 2), (2, 3)).toDF("id", "a").alias("dfa")
> val dfb = Seq((1, 0), (1, 1)).toDF("id", "b").alias("dfb")
> dfa.join(dfb, dfa("id") === dfb("id")).dropDuplicates(Array("dfa.id", "dfb.id"))
> {code}
> but fails with spark-2.0.0 with the exception:
> {code}
> Cannot resolve column name "dfa.id" among (id, a, id, b);
> org.apache.spark.sql.AnalysisException: Cannot resolve column name "dfa.id" among (id, a, id, b);
> at org.apache.spark.sql.Dataset$$anonfun$dropDuplicates$1$$anonfun$36$$anonfun$apply$12.apply(Dataset.scala:1819)
> at org.apache.spark.sql.Dataset$$anonfun$dropDuplicates$1$$anonfun$36$$anonfun$apply$12.apply(Dataset.scala:1819)
> at scala.Option.getOrElse(Option.scala:121)
> at org.apache.spark.sql.Dataset$$anonfun$dropDuplicates$1$$anonfun$36.apply(Dataset.scala:1818)
> at org.apache.spark.sql.Dataset$$anonfun$dropDuplicates$1$$anonfun$36.apply(Dataset.scala:1817)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
> at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at org.apache.spark.sql.Dataset$$anonfun$dropDuplicates$1.apply(Dataset.scala:1817)
> at org.apache.spark.sql.Dataset$$anonfun$dropDuplicates$1.apply(Dataset.scala:1814)
> at org.apache.spark.sql.Dataset.withTypedPlan(Dataset.scala:2594)
> at org.apache.spark.sql.Dataset.dropDuplicates(Dataset.scala:1814)
> at org.apache.spark.sql.Dataset.dropDuplicates(Dataset.scala:1840)
> ...
> {code}

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
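A possible workaround, not taken from this report and untested here: since the failure comes from dropDuplicates being unable to resolve the alias-qualified names "dfa.id"/"dfb.id" among the flat output columns (id, a, id, b), renaming the key columns to distinct names before the join avoids the ambiguity entirely. A minimal sketch, assuming a Spark 2.0 session with spark.implicits._ in scope:

{code}
// Sketch of a workaround: give the join keys distinct names up front,
// so dropDuplicates can resolve them without alias qualification.
import spark.implicits._

val dfa = Seq((1, 2), (2, 3)).toDF("dfa_id", "a")
val dfb = Seq((1, 0), (1, 1)).toDF("dfb_id", "b")

dfa.join(dfb, dfa("dfa_id") === dfb("dfb_id"))
  .dropDuplicates(Array("dfa_id", "dfb_id"))
{code}

The same idea can be applied with withColumnRenamed on an existing DataFrame when the schemas are not under your control.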