Github user jinxing64 commented on the issue:

    https://github.com/apache/spark/pull/18031

To resolve the comments in https://github.com/apache/spark/pull/16989:

> minimum size before we consider something a large block: if the average is 10kb, and some blocks are > 20kb, spilling them to disk would be highly suboptimal.

> One edge-case to consider is the situation where every shuffle block is just over this threshold: in this case HighlyCompressedMapStatus won't really be doing any compression.

I propose two configs, `spark.shuffle.accurateBlockThreshold` and `spark.shuffle.accurateBlockThresholdByTimesAverage`; the sizes of blocks above both thresholds will be recorded accurately.
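To make the proposal concrete, here is a minimal Scala sketch of the intended gating logic. This is not the actual PR diff: the two config names match the proposal, but the default values, the sample data, and the `hugeBlockSizes` bookkeeping are illustrative assumptions.

```scala
object AccurateBlockThresholdSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical uncompressed shuffle block sizes in bytes (illustrative data).
    val uncompressedSizes: Array[Long] = Array(5L << 20, 12L << 20, 300L << 20, 8L << 20)

    // Stand-ins for the two proposed configs; the values here are invented.
    val accurateBlockThreshold = 100L << 20        // spark.shuffle.accurateBlockThreshold
    val accurateBlockThresholdByTimesAverage = 2L  // spark.shuffle.accurateBlockThresholdByTimesAverage

    // Average over non-empty blocks, as HighlyCompressedMapStatus computes today.
    val nonEmpty = uncompressedSizes.filter(_ > 0)
    val avgSize = if (nonEmpty.nonEmpty) nonEmpty.sum / nonEmpty.length else 0L

    // A block's size is recorded accurately only when it exceeds BOTH thresholds;
    // every other non-empty block is still summarized by the average. This keeps
    // the map status small when all blocks are merely "just over" one threshold.
    val hugeBlockSizes: Map[Int, Long] = uncompressedSizes.zipWithIndex.collect {
      case (size, reduceId)
          if size > accurateBlockThreshold &&
             size > avgSize * accurateBlockThresholdByTimesAverage =>
        reduceId -> size
    }.toMap

    println(s"average size: $avgSize, accurately recorded blocks: $hugeBlockSizes")
  }
}
```

With the sample data above, the average is ~81 MB, so only the 300 MB block clears both the absolute threshold (100 MB) and the relative one (2x the average) and gets its size recorded exactly; requiring both guards against the edge case quoted above, where every block sits just over a single threshold.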