Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16970#discussion_r102290771
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
    @@ -1996,7 +1996,7 @@ class Dataset[T] private[sql](
       def dropDuplicates(colNames: Seq[String]): Dataset[T] = withTypedPlan {
         val resolver = sparkSession.sessionState.analyzer.resolver
         val allColumns = queryExecution.analyzed.output
    -    val groupCols = colNames.flatMap { colName =>
    +    val groupCols = colNames.toSet.toSeq.flatMap { (colName: String) =>
    --- End diff --
    
    Fixed an issue where `groupCols` may contain duplicate columns. Without 
this fix, `org.apache.spark.sql.DatasetSuite.dropDuplicates: columns with same 
column name` will fail because the hash keys are different.
    
    Before my change, in the `org.apache.spark.sql.DatasetSuite.dropDuplicates: 
columns with same column name` test, the Dataset has two columns both named `_2`, 
so each requested name resolves to both columns and `groupCols` contains 4 
columns. However, the Aggregate would then be optimized down to 2 columns.
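    The effect of the one-line fix can be sketched in plain Scala, without a 
Spark dependency. The names below (`Attribute`, `resolver`, `allColumns`) are 
simplified stand-ins for Spark's internals, assumed here only for illustration:

```scala
// Sketch of why deduplicating the requested column names matters when the
// Dataset has two output columns that share a name (as in the failing test).
object DedupSketch {
  // Stand-in for a resolved output column; `id` mimics Spark's exprId.
  case class Attribute(name: String, id: Int)

  // Stand-in for a case-sensitive resolver.
  val resolver: (String, String) => Boolean = _ == _

  // Two distinct output columns, both named "_2".
  val allColumns: Seq[Attribute] = Seq(Attribute("_2", 1), Attribute("_2", 2))

  // Before the fix: each requested name resolves to BOTH columns,
  // so dropDuplicates("_2", "_2") yields 4 group columns.
  def groupCols(colNames: Seq[String]): Seq[Attribute] =
    colNames.flatMap { colName =>
      allColumns.filter(col => resolver(col.name, colName))
    }

  // After the fix: deduplicate the requested names first.
  def groupColsFixed(colNames: Seq[String]): Seq[Attribute] =
    colNames.toSet.toSeq.flatMap { (colName: String) =>
      allColumns.filter(col => resolver(col.name, colName))
    }

  def main(args: Array[String]): Unit = {
    println(groupCols(Seq("_2", "_2")).size)      // 4: duplicated group keys
    println(groupColsFixed(Seq("_2", "_2")).size) // 2: names deduplicated first
  }
}
```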
    
    After introducing the new Deduplication operator, that optimization rule no 
longer applies because the plan is not an Aggregate, so all 4 columns would 
still be used as group keys.
    
    This exposes a potential breaking change: optimization rules that match on 
Aggregate may no longer apply after this change.

