Samba Shiva created SPARK-48673:
-----------------------------------

             Summary: Scheduling Across Applications in k8s mode 
                 Key: SPARK-48673
                 URL: https://issues.apache.org/jira/browse/SPARK-48673
             Project: Spark
          Issue Type: New Feature
          Components: Kubernetes, Scheduler, Spark Shell, Spark Submit
    Affects Versions: 3.5.1
            Reporter: Samba Shiva


I have been trying autoscaling in Kubernetes for Spark jobs. When the first job is 
triggered, worker pods scale up based on load, which works fine. But when a second 
job is submitted, it is not allocated any resources, because the first job is 
consuming all of them.

The second job stays in a waiting state until the first job finishes. I have gone 
through the documentation on setting max cores in standalone mode, but that is not an 
ideal solution, since we plan to autoscale based on load and the number of jobs submitted.

Is there any solution for this, or any alternatives?
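One common workaround on Kubernetes (a sketch, not an official fix) is to cap how many executors each application may claim via dynamic allocation, so a single job cannot monopolize the cluster. The property names below are standard Spark settings; the concrete values, the namespace name, and the API server address are illustrative assumptions:

```shell
# Sketch: bound each application's executor count so a concurrently
# submitted job can still obtain pods. The limits (1/10) and the
# namespace "spark-jobs" are examples, not recommendations.
spark-submit \
  --master k8s://https://<api-server>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.namespace=spark-jobs \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  local:///opt/spark/examples/jars/spark-examples.jar
```

Jobs can also be fenced off from each other by submitting them into separate namespaces governed by Kubernetes ResourceQuota objects, which enforces a per-tenant cap at the cluster level rather than per application.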



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
