JoshRosen commented on code in PR #47192:
URL: https://github.com/apache/spark/pull/47192#discussion_r1678527386


##########
core/src/main/java/org/apache/spark/memory/TaskMemoryManager.java:
##########
@@ -202,6 +212,14 @@ public long acquireExecutionMemory(long required, MemoryConsumer requestingConsu
         logger.debug("Task {} acquired {} for {}", taskAttemptId, Utils.bytesToString(got),
           requestingConsumer);
       }
+
+      if (mode == MemoryMode.OFF_HEAP) {
+        peakOffHeapMemory = Math.max(peakOffHeapMemory,
+          memoryManager.getOffHeapExecutionMemoryUsageForTask(taskAttemptId));

Review Comment:
   Although it's true that there might be a bit of redundancy in counting in both places, it seems like there may be reasonable performance justifications for introducing such redundancy.
   
   I don't think it will end up being that much additional code:
   
   Within TaskMemoryManager, I think we'd just need to add a pair of `long` counter fields, one for on-heap and another for off-heap, then increment them in `acquireExecutionMemory` and decrement them in `releaseExecutionMemory` (since those are narrow waists).
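   
   Roughly what I have in mind (untested sketch; the field names are placeholders, not anything that exists in TaskMemoryManager today):
   
   ```java
   // Sketch only: per-task execution-memory bookkeeping kept inside TaskMemoryManager.
   private long currentOnHeapExecutionMemory = 0L;
   private long currentOffHeapExecutionMemory = 0L;
   private long peakOnHeapExecutionMemory = 0L;
   private long peakOffHeapExecutionMemory = 0L;
   
   // In acquireExecutionMemory, after `got` bytes have been granted:
   if (mode == MemoryMode.OFF_HEAP) {
     currentOffHeapExecutionMemory += got;
     peakOffHeapExecutionMemory =
       Math.max(peakOffHeapExecutionMemory, currentOffHeapExecutionMemory);
   } else {
     currentOnHeapExecutionMemory += got;
     peakOnHeapExecutionMemory =
       Math.max(peakOnHeapExecutionMemory, currentOnHeapExecutionMemory);
   }
   
   // In releaseExecutionMemory, assuming `mode` here is the releasing consumer's MemoryMode:
   if (mode == MemoryMode.OFF_HEAP) {
     currentOffHeapExecutionMemory -= size;
   } else {
     currentOnHeapExecutionMemory -= size;
   }
   ```
   
   That would keep the peak tracking down to a couple of field updates per acquire/release, rather than an extra call back into the MemoryManager on each acquisition, which is roughly the performance argument above.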
   
   Maybe we should give that a try and see how much net code it ends up adding?


