GitHub user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15041#discussion_r79419305

--- Diff: core/src/main/scala/org/apache/spark/util/collection/Utils.scala ---
@@ -30,10 +34,22 @@ private[spark] object Utils {
    * Returns the first K elements from the input as defined by the specified implicit Ordering[T]
    * and maintains the ordering.
    */
-  def takeOrdered[T](input: Iterator[T], num: Int)(implicit ord: Ordering[T]): Iterator[T] = {
-    val ordering = new GuavaOrdering[T] {
-      override def compare(l: T, r: T): Int = ord.compare(l, r)
+  def takeOrdered[T](input: Iterator[T], num: Int,
+      ser: Serializer = SparkEnv.get.serializer)(implicit ord: Ordering[T]): Iterator[T] = {
+    val context = TaskContext.get()
+    if (context == null) {
+      val ordering = new GuavaOrdering[T] {
+        override def compare(l: T, r: T): Int = ord.compare(l, r)
+      }
+      ordering.leastOf(input.asJava, num).iterator.asScala
+    } else {
+      val sorter =
+        new ExternalSorter[T, Any, Any](context, None, None, Some(ord), ser)
+      sorter.insertAll(input.map(x => (x, null)))
--- End diff --

Well, it only resorts to sorting when _k_ is more than `Int.MaxValue / 2`, because the normal algorithm can't run in that case. That is an extreme case, where "take" isn't very sensible to begin with. In general, you never want to sort all the elements just to get the top k. The Guava class does what the internal `BoundedPriorityQueue` class does: it keeps a priority queue of the top k and only sorts at the end. I think the current approach is much better than sorting everything, and because several code paths still have to return an array of k elements anyway, this change isn't really solving the OOM problem.
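For readers following along, here is a minimal, self-contained sketch of the bounded-heap idea the comment describes: keep at most k elements in a heap, evict the current worst when a better element arrives, and sort only at the end. It is illustrative only, using the Scala standard library's `PriorityQueue` rather than Guava's `Ordering.leastOf` or Spark's `BoundedPriorityQueue`, and the `topK` name is hypothetical:

```scala
import scala.collection.mutable.PriorityQueue

// Sketch of the bounded-heap top-k approach (not Spark's actual code):
// keep at most k elements, evict the current worst on overflow, and
// sort only the k survivors at the end.
def topK[T](input: Iterator[T], k: Int)(implicit ord: Ordering[T]): Seq[T] = {
  // Under `ord`, PriorityQueue dequeues the largest element first, i.e.
  // the worst of the current top k, which is exactly the one to evict.
  val heap = PriorityQueue.empty[T](ord)
  input.foreach { x =>
    if (heap.size < k) {
      heap.enqueue(x)
    } else if (ord.lt(x, heap.head)) { // x beats the current worst
      heap.dequeue()
      heap.enqueue(x)
    }
  }
  // dequeueAll yields worst-first; reverse to get ascending order,
  // matching takeOrdered's "first K elements" semantics.
  heap.dequeueAll.reverse
}
```

This runs in O(n log k) time with O(k) memory, versus O(n log n) time and O(n) memory for a full sort, which is why the bounded-heap approach is preferable whenever k is small relative to n.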