Github user liufengdb commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19394#discussion_r142290483

    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
    @@ -280,13 +280,20 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializ
         results.toArray
       }

    +  private[spark] def executeCollectIterator(): (Long, Iterator[InternalRow]) = {
    +    val countsAndBytes = getByteArrayRdd().collect()
    --- End diff --

    This still fetches all the compressed rows to the driver before building the hashed relation. Ideally, the rows would be fetched from the executors incrementally.
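    For context, a minimal sketch of the incremental alternative the reviewer is suggesting: run one job per partition so that only a single partition's compressed bytes sits on the driver at a time, in the spirit of RDD.toLocalIterator. getByteArrayRdd and decodeUnsafeRows are the internal SparkPlan helpers referenced by the diff; the signatures assumed here (RDD[(Long, Array[Byte])] of per-partition row counts and compressed bytes, and a decoder back to Iterator[InternalRow]) may differ across Spark versions, and executeCollectIteratorIncremental is a hypothetical name:

        // Hypothetical sketch, not the PR's implementation. Assumes:
        //   getByteArrayRdd(): RDD[(Long, Array[Byte])]  -- (rowCount, compressedBytes) per partition
        //   decodeUnsafeRows(bytes: Array[Byte]): Iterator[InternalRow]
        private[spark] def executeCollectIteratorIncremental(): Iterator[InternalRow] = {
          val rdd = getByteArrayRdd()
          // Submit one job per partition so the driver holds at most one
          // partition's compressed bytes at a time (same idea as toLocalIterator).
          rdd.partitions.indices.iterator.flatMap { i =>
            val partResult = sparkContext.runJob(
              rdd,
              (it: Iterator[(Long, Array[Byte])]) => it.toArray,
              Seq(i))
            partResult.head.iterator.flatMap { case (_, bytes) => decodeUnsafeRows(bytes) }
          }
        }

    One trade-off this sketch surfaces: the method in the diff returns the total row count eagerly along with the iterator, which a single collect() makes cheap; with per-partition fetching, the total count is not known until every partition has been consumed.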