Thomas Graves created SPARK-30448:
-------------------------------------

             Summary: accelerator aware scheduling: enforce cores as the limiting resource
                 Key: SPARK-30448
                 URL: https://issues.apache.org/jira/browse/SPARK-30448
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 3.0.0
            Reporter: Thomas Graves


For the first version of accelerator-aware scheduling (SPARK-27495), the SPIP 
assumed we could support dynamic allocation because there would be a strict 
requirement that no resources are wasted. Under that requirement, the number of 
task slots each executor has can be calculated from the executor cores and task 
cpus, just as is done today.
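
As a rough illustration, under that assumption the slot count comes from cores 
alone. This is a minimal sketch with hypothetical names, not the actual Spark 
internals:

{code:scala}
// Minimal sketch, not the actual Spark implementation.
// Under the strict "no wasted resources" rule, cores alone
// determine the number of concurrent tasks per executor.
def slotsPerExecutor(executorCores: Int, taskCpus: Int): Int =
  executorCores / taskCpus

// e.g. spark.executor.cores=8, spark.task.cpus=2 => 4 slots
val slots = slotsPerExecutor(8, 2)  // 4
{code}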

Somewhere along the line of development we relaxed that and now only warn when 
resources are being wasted. This breaks the dynamic allocation logic when the 
limiting resource is no longer cores: the executor count is still derived from 
cores, so we request fewer executors than we really need to run everything.
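
To make that concrete, suppose an executor has 8 cores and 2 GPUs, with 
spark.task.cpus=1 and one GPU per task. Cores suggest 8 slots per executor, but 
the GPUs only allow 2, so GPU is the limiting resource and a cores-based request 
is 4x too small. A sketch of the arithmetic (hypothetical names, not Spark's 
code):

{code:scala}
// Minimal sketch: the real slot count is the minimum over all resources.
case class ResourceReq(perExecutor: Int, perTask: Int) {
  def slots: Int = perExecutor / perTask
}

val cores = ResourceReq(perExecutor = 8, perTask = 1)  // 8 slots
val gpus  = ResourceReq(perExecutor = 2, perTask = 1)  // 2 slots

val pendingTasks = 80

// Dynamic allocation today sizes the request from cores alone:
val requestedByCores =
  math.ceil(pendingTasks.toDouble / cores.slots).toInt  // 10 executors

// But only 2 tasks can actually run per executor, so we really need:
val actuallyNeeded =
  math.ceil(pendingTasks.toDouble / math.min(cores.slots, gpus.slots)).toInt  // 40 executors
{code}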

We have to enforce that cores is always the limiting resource, so we should 
throw if it's not.
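
A sketch of the proposed check (hypothetical names; where exactly the validation 
would live, e.g. during config/resource validation, is an open detail):

{code:scala}
// Minimal sketch of the proposed enforcement, assuming a map of
// (resource name -> slots it allows per executor), with "cpus" derived
// from executor cores / task cpus as above.
def validateCoresAreLimiting(slotsByResource: Map[String, Int]): Unit = {
  val cpuSlots = slotsByResource("cpus")
  slotsByResource.foreach { case (name, slots) =>
    if (slots < cpuSlots) {
      throw new IllegalArgumentException(
        s"Resource $name limits tasks per executor to $slots, fewer than the " +
        s"$cpuSlots allowed by cores; cores must be the limiting resource.")
    }
  }
}
{code}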

We could make this a requirement only when dynamic allocation is on, but to 
keep the behavior consistent I would say we require it across the board.


