GitHub user mridulm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17276#discussion_r108103956
  
    --- Diff: core/src/main/java/org/apache/spark/shuffle/sort/BypassMergeSortShuffleWriter.java ---
    @@ -169,6 +173,36 @@ public void write(Iterator<Product2<K, V>> records) throws IOException {
           }
         }
         mapStatus = MapStatus$.MODULE$.apply(blockManager.shuffleServerId(), partitionLengths);
    +    if (mapStatus instanceof HighlyCompressedMapStatus) {
    +      HighlyCompressedMapStatus hc = (HighlyCompressedMapStatus) mapStatus;
    +      long underestimatedBlocksSize = 0L;
    +      for (int i = 0; i < partitionLengths.length; i++) {
    +        if (partitionLengths[i] > mapStatus.getSizeForBlock(i)) {
    +          underestimatedBlocksSize += partitionLengths[i];
    +        }
    +      }
    +      writeMetrics.incUnderestimatedBlocksSize(underestimatedBlocksSize);
    +      if (logger.isDebugEnabled() && partitionLengths.length > 0) {
    +        int underestimatedBlocksNum = 0;
    +        // Distribution of sizes in MapStatus.
    +        double[] cp = new double[partitionLengths.length];
    +        for (int i = 0; i < partitionLengths.length; i++) {
    +          cp[i] = partitionLengths[i];
    +          if (partitionLengths[i] > mapStatus.getSizeForBlock(i)) {
    +            underestimatedBlocksNum++;
    +          }
    +        }
    +        Distribution distribution = new Distribution(cp, 0, cp.length);
    +        double[] probabilities = {0.0, 0.25, 0.5, 0.75, 1.0};
    +        String distributionStr = distribution.getQuantiles(probabilities).mkString(", ");
    +        logger.debug("For task {}.{} in stage {} (TID {}), the block sizes in MapStatus are " +
    +          "inaccurate (average is {}, {} blocks underestimated, size of underestimated is {})," +
    +          " distribution at the given probabilities(0, 0.25, 0.5, 0.75, 1.0) is {}.",
    +          taskContext.partitionId(), taskContext.attemptNumber(), taskContext.stageId(),
    +          taskContext.taskAttemptId(), hc.getAvgSize(),
    +          underestimatedBlocksNum, underestimatedBlocksSize, distributionStr);
    +      }
    +    }
    --- End diff --
    
    The value is not accurate - it is a log-1.1 'compression' which converts the long size to a byte and caps the value at 255.
    
    So there are two errors introduced: it over-estimates the actual block size when the compressed value is < 255 [1] (which is something this PR currently ignores), and when the block size goes above ~34 GB or so it under-estimates the block size (that threshold is higher than what spark currently supports due to the 2G limitation).
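    
    To make the rounding behaviour concrete, here is a minimal self-contained sketch of a log-1.1 byte encoding like the one described above (illustrative only - class and method names are made up, this is not the actual Spark source): `ceil` rounds the exponent up, so decoding over-estimates every size below the cap, and because the byte caps at 255 the decoded value tops out around 1.1^255 bytes (~34 GB), under-estimating anything larger.
    
    ```java
    // Sketch of a log-1.1 size encoding; names are illustrative, not Spark's API.
    public class LogEncodedSizeSketch {
      private static final double LOG_BASE = 1.1;
    
      // Encode a size in bytes into a single unsigned byte value (0..255).
      static int compressSize(long size) {
        if (size <= 1L) {
          return (int) size;  // 0 -> 0, 1 -> 1
        }
        // ceil rounds the exponent up, so decoding can only over-estimate,
        // until the value is clamped at 255.
        return (int) Math.min(255, Math.ceil(Math.log(size) / Math.log(LOG_BASE)));
      }
    
      // Decode back to an approximate size in bytes.
      static long decompressSize(int compressed) {
        return compressed == 0 ? 0L : (long) Math.pow(LOG_BASE, compressed);
      }
    
      public static void main(String[] args) {
        long[] samples = {1_000L, 1_000_000L, 1_000_000_000L, 40L * 1024 * 1024 * 1024};
        for (long actual : samples) {
          long decoded = decompressSize(compressSize(actual));
          System.out.printf("actual=%d decoded=%d -> %s%n", actual, decoded,
              decoded >= actual ? "over-estimated" : "under-estimated (hit the 255 cap)");
        }
      }
    }
    ```
    
    Running the sketch, the 1 KB / 1 MB / 1 GB samples all come back slightly larger than their true size, while the 40 GB sample decodes to roughly 33-34 GB because it hits the 255 cap.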
    
    
    [1] I did not realize it always over-estimates; if the current PR is targeting only blocks which are underestimated, I would agree that not handling `CompressedMapStatus` for the time being might be ok - though it would be good to add a comment to that effect on 'why' we don't need to handle it.


