[ 
https://issues.apache.org/jira/browse/SPARK-43496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Yerenkow updated SPARK-43496:
---------------------------------------
    Affects Version/s: 3.5.2
                       3.4.4
                           (was: 3.4.0)

> Have a separate config for Memory limits for kubernetes pods
> ------------------------------------------------------------
>
>                 Key: SPARK-43496
>                 URL: https://issues.apache.org/jira/browse/SPARK-43496
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 3.5.2, 3.4.4
>            Reporter: Alexander Yerenkow
>            Priority: Major
>              Labels: pull-request-available
>
> The whole memory allocated to the JVM is currently set in the pod resources as
> both the request and the limit.
> This means there is no way to use more memory for burst-like jobs in a shared
> environment.
> For example, if a Spark job uses an external process (outside of the JVM) to
> access data, that process needs a bit of extra memory, and being able to
> configure a higher memory limit for the pod would be useful.
> Another thought: a way to configure the pod memory request independently of
> the JVM memory could also be a valid use case.
>  
> GitHub PR: [https://github.com/apache/spark/pull/41067]
>  
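A minimal sketch of what decoupling the pod memory request from the limit could look like, built with the fabric8 Kubernetes client that Spark's Kubernetes backend uses for pod specs. The config key name mentioned in the comment (spark.kubernetes.executor.limit.memory) and the helper below are hypothetical illustrations of the idea, not the actual names introduced by the PR.

{code:scala}
import io.fabric8.kubernetes.api.model.{Quantity, ResourceRequirements, ResourceRequirementsBuilder}

// Hypothetical sketch: build container resource requirements where the memory
// limit may exceed the memory request. Config key names are illustrative only.
object MemoryLimitSketch {

  // requestMiB: memory the JVM (plus overhead) actually needs, used as the pod request,
  //             e.g. spark.executor.memory + spark.executor.memoryOverhead in MiB.
  // limitMiBOpt: optional higher ceiling (hypothetical config such as
  //              "spark.kubernetes.executor.limit.memory"); when unset, fall back
  //              to today's behaviour of limit == request.
  def requirements(requestMiB: Long, limitMiBOpt: Option[Long]): ResourceRequirements = {
    val request = new Quantity(s"${requestMiB}Mi")
    val limit = new Quantity(s"${limitMiBOpt.getOrElse(requestMiB)}Mi")

    new ResourceRequirementsBuilder()
      .addToRequests("memory", request)
      .addToLimits("memory", limit)
      .build()
  }

  def main(args: Array[String]): Unit = {
    // Burst-friendly pod: schedule against 4 GiB but allow bursting up to 6 GiB.
    val r = requirements(requestMiB = 4096, limitMiBOpt = Some(6144))
    println(r.getRequests) // {memory=4096Mi}
    println(r.getLimits)   // {memory=6144Mi}
  }
}
{code}

With such a scheme the scheduler still places the pod based on the request, while the kubelet only kills the container if it exceeds the (higher) limit, which is what allows the burst headroom described above.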



