HyukjinKwon commented on code in PR #52153:
URL: https://github.com/apache/spark/pull/52153#discussion_r2309310854
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala:
##########
@@ -946,3 +946,24 @@ case class ShuffleSpecCollection(specs: Seq[ShuffleSpec])
extends ShuffleSpec {
specs.head.numPartitions
}
}
+
+/**
+ * Represents a partitioning where partition IDs are passed through directly from the
+ * DirectShufflePartitionID expression. This partitioning scheme is used when users
+ * want to directly control partition placement rather than using hash-based partitioning.
+ *
+ * This partitioning maps directly to the PartitionIdPassthrough RDD partitioner.
+ */
+case class ShufflePartitionIdPassThrough(
Review Comment:
Nope, it will not reuse or remove shuffles. This is more about replacing RDD's
Partitioner API so people can migrate completely to the DataFrame API. In terms
of performance and efficiency, it won't be super useful.
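
To illustrate the point of the comment, here is a minimal plain-Scala sketch of the pass-through idea (this is not Spark's internal implementation, just the concept behind a `PartitionIdPassthrough`-style partitioner): instead of hashing the key, the partitioner treats the key itself as the target partition ID, giving the user direct control over placement.

```scala
// Sketch of a pass-through partitioner: the key IS the partition ID.
// A hash-based partitioner would instead compute something like
// key.hashCode % numPartitions; here the user picks the partition directly.
final case class PassThroughPartitioner(numPartitions: Int) {
  // The key is assumed to already be a valid partition ID in [0, numPartitions).
  def getPartition(key: Int): Int = {
    require(key >= 0 && key < numPartitions, s"partition ID $key out of range")
    key
  }
}

object PassThroughDemo {
  def main(args: Array[String]): Unit = {
    val p = PassThroughPartitioner(numPartitions = 4)
    // Each record lands in exactly the partition the user asked for.
    val ids = Seq(0, 3, 1, 2).map(p.getPartition)
    println(ids.mkString(","))
  }
}
```

This mirrors the comment's framing: the feature is about parity with the RDD `Partitioner` API (explicit placement), not about shuffle reuse or elimination.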
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]