Github user scwf commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7417#discussion_r34749244

    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/CartesianProduct.scala ---
    @@ -34,7 +34,15 @@ case class CartesianProduct(left: SparkPlan, right: SparkPlan) extends BinaryNod
         val leftResults = left.execute().map(_.copy())
         val rightResults = right.execute().map(_.copy())

    -    leftResults.cartesian(rightResults).mapPartitions { iter =>
    +    val cartesianRdd = if (leftResults.partitions.size > rightResults.partitions.size) {
    +      rightResults.cartesian(leftResults).mapPartitions { iter =>
    +        iter.map(tuple => (tuple._2, tuple._1))
    +      }
    +    } else {
    +      leftResults.cartesian(rightResults)
    +    }
    +
    +    cartesianRdd.mapPartitions { iter =>
           val joinedRow = new JoinedRow
    --- End diff --

    Yes, using the partition count here is not accurate. Consider an RDD with 100 partitions, each holding one record, and an RDD with 10 partitions, each holding 100 million records: with the heuristic above, this will cause more scans from HDFS.
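    To illustrate the alternative this comment points toward, here is a minimal sketch (not code from the PR): decide the orientation of the cartesian product from an estimated data size rather than from the partition count. The helper name `sizeAwareCartesian` and the `leftSizeEstimate`/`rightSizeEstimate` parameters are hypothetical; in practice such estimates would have to come from planner statistics or sampling.

        import scala.reflect.ClassTag
        import org.apache.spark.rdd.RDD

        // Hypothetical helper, not the PR's code: pick which side goes first in
        // the cartesian product based on an estimated size (bytes or row count)
        // supplied by the caller, since partition count alone says nothing about
        // how much data each partition actually holds.
        def sizeAwareCartesian[A: ClassTag, B: ClassTag](
            left: RDD[A],
            right: RDD[B],
            leftSizeEstimate: Long,
            rightSizeEstimate: Long): RDD[(A, B)] = {
          if (leftSizeEstimate > rightSizeEstimate) {
            // Same trick as the patch: run the smaller side first, then swap the
            // pairs back so callers still see (left, right) ordering.
            right.cartesian(left).map { case (r, l) => (l, r) }
          } else {
            left.cartesian(right)
          }
        }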