Github user MaxGekk commented on the issue:

    https://github.com/apache/spark/pull/21589
  
    > AFAIK, we always have num of executor ...
    
    Not in all cases. Databricks clients can create auto-scaling clusters: 
https://docs.databricks.com/user-guide/clusters/sizing.html#cluster-size-and-autoscaling 
. For such clusters, we cannot get the size of the cluster in terms of cores 
via config parameters. We need methods that return the current state of the 
cluster. Static configs don't work here because they lead to overloaded or 
underloaded clusters. 
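
    As a minimal sketch (not part of this PR) of what "current state" means 
here: the live core count can be observed through Spark's monitoring REST API, 
whose executor entries expose a `totalCores` field. The UI port 4040 and 
driver host are assumptions for illustration:
    
    ```scala
    import scala.io.Source
    
    // Query the live executor list for this application (assumes the default
    // Spark UI port 4040 on the driver host).
    val url = s"http://localhost:4040/api/v1/applications/${sc.applicationId}/executors"
    val json = Source.fromURL(url).mkString
    
    // Crude extraction for illustration; a real client would parse the JSON.
    // Note the list may also contain a "driver" entry.
    val liveCores = "\"totalCores\"\\s*:\\s*(\\d+)".r
      .findAllMatchIn(json)
      .map(_.group(1).toInt)
      .sum
    ```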
    
    > ...  and then num of core per executor right?
    
    In general, the number of cores per executor can differ from executor to 
executor. I don't think it is a good idea to force users to perform complex 
calculations just to get the number of cores available in a cluster. 
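
    For comparison, the calculation users are forced into today looks roughly 
like this sketch; `spark.executor.instances` is meaningless on an auto-scaling 
cluster, and a single `spark.executor.cores` value cannot describe 
heterogeneous executors:
    
    ```scala
    // The static-config workaround: both values are fixed at submit time,
    // so the result goes stale as soon as the cluster resizes.
    val numExecutors = sc.getConf.getInt("spark.executor.instances", 1)
    val coresPerExecutor = sc.getConf.getInt("spark.executor.cores", 1)
    val totalCores = numExecutors * coresPerExecutor
    ```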
    
    > maybe we should have the getter factored the same way and probably named 
and described/documented similarly
    
    @felixcheung I am not sure our users are really interested in getting a 
list of cores per executor and computing the total number of cores by summing 
the list. It would just complicate the API and the implementation, from my 
point of view.
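
    To make the trade-off concrete, the two shapes under discussion would look 
roughly like this (names and signatures are illustrative, not the PR's actual 
API):
    
    ```scala
    // Hypothetical signatures for illustration only.
    // Per-executor getter: every caller has to do the reduction themselves.
    def coresPerExecutor(): Seq[Int]   // callers write coresPerExecutor().sum
    // Aggregate getter argued for here: one call, no client-side math.
    def numCores(): Int
    ```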

