[ https://issues.apache.org/jira/browse/SPARK-54449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yifeng Wang updated SPARK-54449:
--------------------------------
    Attachment: image-2025-11-21-22-03-25-116.png

> Spark UI displays unexpected "Storage Memory" capacity when 
> spark.memory.offHeap.enabled is false
> ------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-54449
>                 URL: https://issues.apache.org/jira/browse/SPARK-54449
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>    Affects Versions: 3.5.2
>         Environment: Tested with Spark v3.5.2. Steps to reproduce are 
> included in the Description section below.
>            Reporter: Yifeng Wang
>            Priority: Critical
>         Attachments: image-2025-11-21-21-49-17-534.png, 
> image-2025-11-21-21-49-34-881.png, image-2025-11-21-21-51-10-957.png, 
> image-2025-11-21-21-51-39-197.png, image-2025-11-21-21-53-37-637.png, 
> image-2025-11-21-21-53-59-322.png, image-2025-11-21-21-54-03-317.png, 
> image-2025-11-21-22-00-03-629.png, image-2025-11-21-22-03-25-116.png, 
> image-2025-11-21-22-03-34-768.png
>
>
> Dear Spark Community:
> *Current Behavior:* In the Spark UI (Executors tab) and the Spark History 
> Server, the "Storage Memory" column displays the total capacity as the sum of 
> On-Heap Storage Memory + Off-Heap Storage Memory. Even when 
> {{spark.memory.offHeap.enabled}} is explicitly set to {{false}}, the UI still 
> adds the value of {{spark.memory.offHeap.size}} to the displayed total 
> capacity.
> *Expected Behavior:* Perhaps when {{spark.memory.offHeap.enabled}} is set to 
> {{false}}, the "Storage Memory" total in the UI should *only* reflect the 
> On-Heap Storage Memory? The {{spark.memory.offHeap.size}} configuration 
> should then be ignored in the UI display calculation, similar to how it is 
> ignored in the YARN resource allocation logic.
>  
> *Personal Understanding:* The issue stems from how {{BlockManager}} reports 
> memory to the {{BlockManagerMaster}}. {{UnifiedMemoryManager}} initializes 
> {{maxOffHeapMemory}} from {{spark.memory.offHeap.size}} regardless of the 
> {{enabled}} flag. {{BlockManager}} then reads this value via 
> {{memoryManager.maxOffHeapStorageMemory}} and passes it to the 
> {{registerBlockManager}} RPC call. In the {{register}} method, the total 
> memory is calculated by simply adding the on-heap and off-heap values, 
> without checking whether off-heap is enabled.
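> A minimal, self-contained sketch of that aggregation as I understand it (the 
> names approximate the Spark source but are not verbatim):
> {code:scala}
> object StorageMemoryTotal {
>   // The master sums both pools without consulting spark.memory.offHeap.enabled.
>   def displayedCapacity(maxOnHeapMemSize: Long, maxOffHeapMemSize: Long): Long =
>     maxOnHeapMemSize + maxOffHeapMemSize
>
>   def main(args: Array[String]): Unit = {
>     val onHeap  = 2L * 1024 * 1024 * 1024   // ~2 GiB of on-heap storage memory
>     val offHeap = 10L * 1024 * 1024 * 1024  // spark.memory.offHeap.size=10g
>     // Even with spark.memory.offHeap.enabled=false, the total comes out
>     // as ~12 GiB instead of ~2 GiB:
>     println(f"${displayedCapacity(onHeap, offHeap) / math.pow(1024, 3)}%.1f GiB")
>   }
> }
> {code}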
> Other parts of the codebase, such as {{Client.scala}} (YARN) and 
> {{ResourceProfile.scala}}, correctly use {{Utils.checkOffHeapEnabled}} to 
> ensure off-heap memory is treated as 0 when the feature is disabled.
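> Paraphrasing that guard (the exact signature in {{Utils.scala}} may differ; 
> this is only a sketch of its behavior):
> {code:scala}
> import org.apache.spark.SparkConf
>
> // Off-heap size only counts when the feature flag is actually on.
> def checkOffHeapEnabled(conf: SparkConf, offHeapSize: Long): Long =
>   if (conf.getBoolean("spark.memory.offHeap.enabled", false)) offHeapSize
>   else 0L
> {code}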
> *Proposed Fix:* Perhaps we should enforce the {{checkOffHeapEnabled}} logic 
> before aggregating the total memory for the UI?
> !image-2025-11-21-21-49-34-881.png!
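> A rough sketch of what that could look like at the reporting site (purely 
> illustrative; the helper name and placement are hypothetical):
> {code:scala}
> import org.apache.spark.SparkConf
>
> // Hypothetical: gate the off-heap pool before summing, mirroring the
> // YARN Client.scala / ResourceProfile.scala handling.
> def reportedStorageCapacity(
>     conf: SparkConf,
>     maxOnHeapMemSize: Long,
>     maxOffHeapMemSize: Long): Long = {
>   val offHeap =
>     if (conf.getBoolean("spark.memory.offHeap.enabled", false)) maxOffHeapMemSize
>     else 0L
>   maxOnHeapMemSize + offHeap  // shown as "Storage Memory" capacity in the UI
> }
> {code}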
>  
> *Steps to Reproduce:*
>  # Configure a Spark application with the following settings (a minimal 
> snippet is sketched after this list):
>  #* {{spark.memory.offHeap.enabled=false}}
>  #* {{spark.memory.offHeap.size=10g}} (or any non-zero value)
>  # Open the Executors tab in the Spark UI (or the History Server) and check 
> the "Storage Memory" capacity column.
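> A minimal reproduction sketch (assumes Spark 3.5.2; the local master and 
> default UI port are just for illustration):
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> // Off-heap is explicitly disabled, yet a non-zero size is configured.
> val spark = SparkSession.builder()
>   .master("local[*]")
>   .appName("storage-memory-ui-repro")
>   .config("spark.memory.offHeap.enabled", "false")
>   .config("spark.memory.offHeap.size", "10g")
>   .getOrCreate()
>
> // Keep the application alive, then open http://localhost:4040 -> Executors
> // and inspect the "Storage Memory" column.
> Thread.sleep(10 * 60 * 1000)
> {code}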
>  
> *Screenshots:*
> 1. With the configs commented out, got: Storage Memory = 2 GiB (see attached 
> screenshots).
> 2. With the configs enabled, got: Storage Memory = 12 GiB (see attached 
> screenshots).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
