Github user jomach commented on the issue:

    https://github.com/apache/spark/pull/14644
  
    We have some servers with 8 GPUs each running on Mesos. I would like to run
    Spark on them, but I need Spark to allocate one GPU per map task. On Hadoop
    3.0 you can set spark.yarn.executor.resource.yarn.io/gpu. I have a Spark job
    that receives a list of files to process; each map task should call a C
    program that reads a chunk of the list and processes it on the GPU. For
    this, Spark needs to recognize which GPU Mesos allocated to it (e.g. "GPU0
    is yours"), and of course Mesos needs to mark that GPU as used. With
    gpu.max alone this is not possible.
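    A minimal sketch of the intended per-task pattern, under two assumptions
    that are not provided by gpu.max today: the scheduler exposes the assigned
    device to each task (shown here via CUDA_VISIBLE_DEVICES), and a
    hypothetical /opt/bin/process_on_gpu binary stands in for the C program
    mentioned above:

    ```scala
    import org.apache.spark.sql.SparkSession
    import scala.sys.process._

    object GpuPerTaskSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("gpu-per-task-sketch").getOrCreate()
        val sc = spark.sparkContext

        // List of input files to process, one path per line (hypothetical location).
        val files = sc.textFile("hdfs:///data/file-list.txt")

        val results = files.mapPartitions { chunk =>
          // Assumption: the resource manager tells this task which GPU it owns,
          // e.g. through CUDA_VISIBLE_DEVICES. With only gpu.max there is no
          // such per-task assignment, which is the gap described above.
          val gpu = sys.env.getOrElse("CUDA_VISIBLE_DEVICES", "0")

          // Shell out to the C program for each file in this task's chunk.
          chunk.map { file =>
            Seq("/opt/bin/process_on_gpu", "--gpu", gpu, "--input", file).!!
          }
        }

        results.saveAsTextFile("hdfs:///data/output")
        spark.stop()
      }
    }
    ```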


