PetarVasiljevic-DB commented on code in PR #51686:
URL: https://github.com/apache/spark/pull/51686#discussion_r2238949678


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/V2ScanRelationPushDown.scala:
##########
@@ -140,39 +140,45 @@ object V2ScanRelationPushDown extends Rule[LogicalPlan] with PredicateHelper {
       val leftSideRequiredColumnNames = getRequiredColumnNames(leftProjections, leftHolder)
       val rightSideRequiredColumnNames = getRequiredColumnNames(rightProjections, rightHolder)
 
-      // Alias the duplicated columns from the left side of the join. We are creating the
-      // Map[String, Int] to tell how many times each column name has occurred within one side.
-      val leftSideNameCounts: Map[String, Int] =
-        leftSideRequiredColumnNames.groupBy(identity).view.mapValues(_.size).toMap
-      val rightSideNameCounts: Map[String, Int] =
-        rightSideRequiredColumnNames.groupBy(identity).view.mapValues(_.size).toMap
-      // It's more performant to call contains on a Set than on a Seq
-      val rightSideColumnNamesSet = rightSideRequiredColumnNames.toSet
-
-      val leftSideRequiredColumnsWithAliases = leftSideRequiredColumnNames.map { name =>
-        val aliasName =
-          if (leftSideNameCounts(name) > 1 || rightSideColumnNamesSet.contains(name)) {
-            generateJoinOutputAlias(name)
+      def generateColumnAliasesForDuplicatedName(
+        leftColumns: Array[String],
+        rightColumns: Array[String]
+      ): (Array[SupportsPushDownJoin.ColumnWithAlias],
+        Array[SupportsPushDownJoin.ColumnWithAlias]) = {
+        // Count occurrences of each column name across both sides to identify duplicates.
+        val allRequiredColumnNames = leftSideRequiredColumnNames ++ rightSideRequiredColumnNames
+        val allNameCounts: Map[String, Int] =
+          allRequiredColumnNames.groupBy(identity).view.mapValues(_.size).toMap
+        // Use a Set for O(1) lookups when checking existing column names; claim all names
+        // that appear only once to ensure they have highest priority.
+        val allClaimedAliases = mutable.HashSet.empty ++ allNameCounts.filter(_._2 == 1).keySet
+
+        def processColumn(name: String): SupportsPushDownJoin.ColumnWithAlias = {
+          // Ensure a name that appears only once does not require an alias.
+          if (allNameCounts(name) == 1) {
+            new SupportsPushDownJoin.ColumnWithAlias(name, null)
           } else {
-            null
+            var attempt = 0
+            // Generate a candidate alias: use the original name for the first attempt, then
+            // append a suffix for subsequent attempts.
+            var candidate = name
+            // Ensure the candidate alias is unique by checking against existing names.
+            while (allClaimedAliases.contains(candidate)) {

Review Comment:
   this can become too expensive, no? The complexity is `O(columnCount ^ 2)` in the worst case, and I have seen users with thousands of columns in their tables. So for the following worst-case scenario:
   `col, col_0, col_1, col_2, .... col_999` join `col, col_0, col_1, col_2, .... col_999`, this would take about a million operations.
   
   What are your thoughts?
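   One way to avoid the quadratic probe is to remember, per base name, the next suffix to try, so later duplicates of the same name never re-scan suffixes that were already rejected. A minimal stand-alone sketch of that idea (`generateUniqueAliases` is a hypothetical helper operating on plain strings, not the PR's `ColumnWithAlias` API, and it assumes a `name_N` suffix scheme):

   ```scala
   import scala.collection.mutable

   // Hypothetical sketch: near-linear unique alias generation. Instead of
   // restarting the probe from the bare name for every duplicate, keep a
   // per-base-name counter of the next suffix to try, so the probe only
   // ever moves forward.
   def generateUniqueAliases(columns: Seq[String]): Seq[String] = {
     val counts: Map[String, Int] =
       columns.groupBy(identity).view.mapValues(_.size).toMap
     // Names that occur exactly once claim themselves up front, keeping priority.
     val claimed = mutable.HashSet.empty[String]
     claimed ++= counts.collect { case (name, 1) => name }
     // Per base name, the next suffix to try; 0 means "try the bare name first".
     val nextSuffix = mutable.HashMap.empty[String, Int].withDefaultValue(0)

     columns.map { name =>
       if (counts(name) == 1) {
         name // unique names never need an alias
       } else {
         var attempt = nextSuffix(name)
         var candidate = if (attempt == 0) name else s"${name}_$attempt"
         while (claimed.contains(candidate)) {
           attempt += 1
           candidate = s"${name}_$attempt"
         }
         // Record where the probe stopped, so later duplicates of the same
         // base name skip over suffixes that are already taken.
         nextSuffix(name) = attempt + 1
         claimed += candidate
         candidate
       }
     }
   }
   ```

   With this, the `col, col_0, ..., col_999` join `col, col_0, ..., col_999` scenario costs roughly one forward scan of the suffix range per base name rather than re-probing from zero for each duplicate, since the counter for `col` never moves backwards.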
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

