[ 
https://issues.apache.org/jira/browse/SPARK-48673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kent Yao resolved SPARK-48673.
------------------------------
    Resolution: Information Provided

Please use the dev or user mailing lists for questions 
https://spark.apache.org/community.html

> Scheduling Across Applications in k8s mode 
> -------------------------------------------
>
>                 Key: SPARK-48673
>                 URL: https://issues.apache.org/jira/browse/SPARK-48673
>             Project: Spark
>          Issue Type: Question
>          Components: k8s, Kubernetes, Scheduler, Spark Shell, Spark Submit
>    Affects Versions: 3.5.1
>            Reporter: Samba Shiva
>            Priority: Trivial
>
> I have been trying out autoscaling in Kubernetes for Spark jobs. When the 
> first job is triggered, worker pods scale up based on load, which is fine. 
> But when a second job is submitted, it is not allocated any resources, 
> because the first job is consuming all of them.
> The second job stays in a waiting state until the first job finishes. I have 
> gone through the documentation on setting max cores in standalone mode, but 
> that is not an ideal solution for us, as we are planning to autoscale based 
> on load and on the jobs submitted.
> Is there any solution for this, or any alternative?
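For reference, a minimal sketch of the two knobs touched on above: capping per-application cores in standalone mode, and enabling dynamic allocation on Kubernetes so idle executors are released for other applications. The property names are real Spark configuration keys; the master URLs, values, and `app.py` are placeholders, not taken from the reporter's setup.

```shell
# Standalone mode: cap the cores one application may claim, so a
# second application can be scheduled concurrently.
spark-submit \
  --master spark://master:7077 \
  --conf spark.cores.max=8 \
  app.py

# Kubernetes mode: dynamic allocation scales executors down when idle,
# freeing cluster capacity for other jobs. shuffleTracking allows
# dynamic allocation without an external shuffle service.
spark-submit \
  --master k8s://https://kubernetes-api:6443 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.maxExecutors=4 \
  app.py
```

Whether this fits depends on the workload; as the resolution notes, the dev or user mailing lists are the right place to discuss specifics.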



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org