[ 
https://issues.apache.org/jira/browse/SPARK-54449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifeng Wang updated SPARK-54449:
--------------------------------
    Environment: 
Tested with Spark v3.5.2.

*Steps to Reproduce:*
 # Configure a Spark application with the following settings:

 ** {{spark.memory.offHeap.enabled=false}}
 ** {{spark.memory.offHeap.size=10g}} (or any non-zero value)

 

*Screenshots:*
1. Comment out configs

Got: (Storage Memory =2 GiB)
2. Enable configs

Got: (Storage Memory =12 GiB)


> Spark UI displays unexpected "Storage Memory" capacity when 
> spark.memory.offHeap.enabled is false
> ------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-54449
>                 URL: https://issues.apache.org/jira/browse/SPARK-54449
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>    Affects Versions: 3.5.2
>         Environment: Tested with Spark v3.5.2.
> *Steps to Reproduce:*
>  # Configure a Spark application with the following settings:
>  ** {{spark.memory.offHeap.enabled=false}}
>  ** {{spark.memory.offHeap.size=10g}} (or any non-zero value)
> *Screenshots:*
> 1. Comment out configs
> Got: (Storage Memory =2 GiB)
> 2. Enable configs
> Got: (Storage Memory =12 GiB)
>            Reporter: Yifeng Wang
>            Priority: Critical
>         Attachments: image-2025-11-21-21-49-17-534.png, 
> image-2025-11-21-21-49-34-881.png
>
>
> Dear Spark Community:
> *Current Behavior:* In the Spark UI (Executors tab) and Spark History Server, 
> the "Storage Memory" column displays the total capacity as the sum of On-Heap 
> Storage Memory + Off-Heap Storage Memory. Even when the configuration 
> {{spark.memory.offHeap.enabled}} is explicitly set to {{false}}, the UI 
> still adds the value of {{spark.memory.offHeap.size}} to the total displayed 
> capacity.
> *Expected Behavior:* When {{spark.memory.offHeap.enabled}} is set to 
> {{false}}, the "Storage Memory" total in the UI should arguably reflect *only* 
> the On-Heap Storage Memory. The {{spark.memory.offHeap.size}} configuration 
> should then be ignored in the UI display calculation, similar to how it is 
> ignored in YARN resource allocation logic.
>  
> *Personal Understandings:* The issue stems from how {{BlockManager}} reports 
> memory to the {{BlockManagerMaster}}. {{UnifiedMemoryManager}} initializes 
> {{maxOffHeapMemory}} based on the configuration {{spark.memory.offHeap.size}} 
> regardless of the {{enabled}} flag. {{BlockManager}} then reads this value via 
> {{memoryManager.maxOffHeapStorageMemory}} and passes it to the 
> {{registerBlockManager}} RPC call. In the {{register}} method, the total memory 
> is calculated by simply adding the on-heap and off-heap values, without 
> checking whether off-heap is enabled.
> Other parts of the codebase, such as {{Client.scala}} (YARN) and 
> {{ResourceProfile.scala}}, correctly use {{Utils.checkOffHeapEnabled}} to 
> ensure off-heap memory is treated as 0 when the feature is disabled.
> *Proposed Fix:* Perhaps we should enforce the {{checkOffHeapEnabled}} logic 
> before aggregating the total memory for the UI?
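A guard along those lines could look like the sketch below. The helper is modeled on the described behavior of {{Utils.checkOffHeapEnabled}} (off-heap size treated as 0 when the feature is disabled); the names and signatures here are illustrative, not a patch against Spark:

```scala
// Sketch of the proposed guard: treat the configured off-heap size as 0
// whenever spark.memory.offHeap.enabled is false, before summing the
// totals. Hypothetical, simplified names -- not Spark's actual code.
object OffHeapGuardSketch {

  def checkOffHeapEnabled(conf: Map[String, String], offHeapSize: Long): Long =
    if (conf.getOrElse("spark.memory.offHeap.enabled", "false").toBoolean)
      offHeapSize
    else 0L

  // With the guard in place, a disabled off-heap size no longer inflates
  // the reported "Storage Memory" total.
  def registeredTotal(conf: Map[String, String], onHeap: Long, offHeap: Long): Long =
    onHeap + checkOffHeapEnabled(conf, offHeap)

  def main(args: Array[String]): Unit = {
    val gib = 1024L * 1024 * 1024
    val disabled = Map("spark.memory.offHeap.enabled" -> "false")
    val enabled  = Map("spark.memory.offHeap.enabled" -> "true")
    println(registeredTotal(disabled, 2 * gib, 10 * gib) / gib) // 2
    println(registeredTotal(enabled,  2 * gib, 10 * gib) / gib) // 12
  }
}
```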



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
