[ https://issues.apache.org/jira/browse/SPARK-48673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun updated SPARK-48673:
----------------------------------
Component/s: (was: k8s)
> Scheduling Across Applications in k8s mode
> -------------------------------------------
>
> Key: SPARK-48673
> URL: https://issues.apache.org/jira/browse/SPARK-48673
> Project: Spark
> Issue Type: Question
> Components: Kubernetes, Scheduler, Spark Shell, Spark Submit
> Affects Versions: 3.5.1
> Reporter: Samba Shiva
> Priority: Trivial
>
> I have been trying autoscaling in Kubernetes for Spark jobs. When the first
> job is triggered, worker pods scale up based on load, which is fine. But when
> a second job is submitted, it is not allocated any resources because the
> first job is consuming all of them.
> The second job stays in the WAITING state until the first job finishes. I
> have gone through the documentation on setting max cores in standalone mode,
> but that is not an ideal solution, since we plan to autoscale based on load
> and the number of jobs submitted.
> Is there any solution for this, or any alternatives?
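For reference, the standalone-mode workaround mentioned above (capping the cores any one application may claim) is set via `spark.cores.max`. A minimal sketch of a `spark-defaults.conf`, assuming default FIFO scheduling across applications; the specific values are illustrative, not a recommendation:

```
# spark-defaults.conf -- standalone mode: cap the cores a single
# application may claim, so a second job can still acquire executors.
spark.cores.max          8

# Note: spark.cores.max applies to standalone (and coarse-grained Mesos)
# deployments, not Kubernetes. On Kubernetes the per-application
# footprint is bounded instead by the requested executor count and size:
spark.executor.instances 4
spark.executor.cores     2
```

With such a cap in place, the first application stops short of claiming the whole cluster, leaving headroom for a concurrently submitted job.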
--
This message was sent by Atlassian Jira
(v8.20.10#820010)