Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5208#discussion_r28081373

    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala ---
    @@ -163,6 +178,40 @@ case class HashPartitioning(expressions: Seq[Expression], numPartitions: Int)
     }
     
     /**
    + * Represents a partitioning where rows are split up across partitions based on the hash
    + * of `expressions`. All rows where `expressions` evaluate to the same values are guaranteed to be
    + * in the same partition. And rows within the same partition are sorted by the expressions.
    + */
    +case class HashSortedPartitioning(expressions: Seq[Expression], numPartitions: Int)
    +  extends Expression
    +  with Partitioning {
    +
    +  override def children = expressions
    +  override def nullable = false
    +  override def dataType = IntegerType
    +
    +  private[this] lazy val clusteringSet = expressions.toSet
    +
    +  override def satisfies(required: Distribution): Boolean = required match {
    +    case UnspecifiedDistribution => true
    +    case ClusteredOrderedDistribution(requiredClustering) =>
    +      clusteringSet.subsetOf(requiredClustering.toSet)
    --- End diff --
    
    We need to add a comment here to remind the reader that `satisfies` does not guarantee the ordering of rows within a partition. Because the row ordering in a partition of a `HashSortedPartitioning` may not match the ordering required by a `ClusteredOrderedDistribution`, we need to add a local sort operator. But where is the rule that adds the sort operator? (I could not find it.)
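    To make the concern concrete, here is a minimal self-contained sketch (not Spark's actual planner API; `Scan`, `LocalSort`, and `ensureOrdering` are simplified stand-ins) of why a planner rule must still insert a local sort even when `satisfies` returns true: the clustering check only guarantees co-location of equal keys, and says nothing about the direction or order of the in-partition sort.

```scala
// Hypothetical sketch, not Catalyst code: toy plans and a toy rule that adds a
// local sort when the child's output ordering does not meet the requirement.
object SortInsertionSketch {
  case class SortOrder(expr: String, ascending: Boolean)

  sealed trait Plan { def outputOrdering: Seq[SortOrder] }
  case class Scan(clustering: Set[String], outputOrdering: Seq[SortOrder]) extends Plan
  case class LocalSort(order: Seq[SortOrder], child: Plan) extends Plan {
    def outputOrdering: Seq[SortOrder] = order
  }

  // Mirrors the subset check in HashSortedPartitioning.satisfies: it only
  // verifies that the clustering keys are co-located, nothing about ordering.
  def satisfiesClustering(clustering: Set[String], requiredClustering: Set[String]): Boolean =
    clustering.subsetOf(requiredClustering)

  // The kind of rule the review comment asks about: wrap the child in a
  // LocalSort when its ordering does not already start with the required one.
  def ensureOrdering(required: Seq[SortOrder], child: Plan): Plan =
    if (child.outputOrdering.startsWith(required)) child
    else LocalSort(required, child)

  def main(args: Array[String]): Unit = {
    // Clustering on "a" is satisfied, but the required ordering is descending
    // while the partitioning only produced an ascending in-partition sort.
    val child = Scan(Set("a"), Seq(SortOrder("a", ascending = true)))
    val required = Seq(SortOrder("a", ascending = false))
    assert(satisfiesClustering(child.clustering, Set("a")))
    // ...so the planner must still insert a local sort on top of the child.
    assert(ensureOrdering(required, child) == LocalSort(required, child))
    println("local sort inserted")
  }
}
```

    This is only meant to show that the clustering subset test and the ordering test are independent checks, which is exactly why a comment (and a separate sort-insertion rule) is needed.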