Github user markhamstra commented on the issue:

    https://github.com/apache/spark/pull/21589
  
    It is precisely because the audience I am concerned with is not limited to data scientists or notebook users and their particular needs that I am far from convinced that exposing internals of the Spark scheduler in the public API is a good idea.
    
    There are many ways that a higher-level declaration could be made, and I'm not committed to any particular model at this point. The way scheduling pools are selected via `sc.setLocalProperty` is one way that Job execution can already be put into a particular declarative context. That's not necessarily the best way to do it, but it isn't necessarily more difficult than writing correct imperative code after fetching a snapshot of the number of available cores at some point.
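
    To make the comparison concrete, here is a minimal sketch of both approaches: the declarative route of attaching a job to a scheduling pool via `sc.setLocalProperty`, and an imperative sizing decision based on a point-in-time snapshot of available parallelism. The pool name "notebook" and the choice of `sc.defaultParallelism` as the snapshot are illustrative assumptions, not part of any proposal here.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("pool-example").getOrCreate()
val sc = spark.sparkContext

// Declarative: attach subsequent jobs on this thread to a fair-scheduler
// pool defined in fairscheduler.xml ("notebook" is a hypothetical pool name).
sc.setLocalProperty("spark.scheduler.pool", "notebook")
try {
  sc.parallelize(1 to 1000000).count()  // this job runs in the "notebook" pool
} finally {
  sc.setLocalProperty("spark.scheduler.pool", null)  // revert to the default pool
}

// Imperative alternative: size the work from a snapshot of what the
// scheduler reports at this moment, e.g. the default parallelism.
val coresSnapshot = sc.defaultParallelism
val data = sc.parallelize(1 to 1000000).repartition(coresSnapshot)
```

    Either way, the declaration or the snapshot only reflects cluster state at one point in time, which is part of why the right model deserves broader design discussion.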
    
    Doing this the right way likely requires an appropriate SPIP, not just a 
quick hack PR.
    
    A spark-package would be another way to expose additional functionality without binding it into the Spark public API.

