Two late-breaking questions:

This basically requires Hadoop 3.1 for YARN support?
Mesos support is listed as a non-goal, but it already has support for
requesting GPUs in Spark. Would that be 'harmonized' with this
implementation even if it's not extended?

On Fri, Mar 1, 2019, 7:48 AM Xingbo Jiang <jiangxb1...@gmail.com> wrote:

> I think we are aligned on the commitment; I'll start a vote thread for
> this shortly.
>
> Xiangrui Meng <men...@gmail.com> wrote on Wed, Feb 27, 2019 at 6:47 AM:
>
>> In case there are issues visiting Google doc, I attached PDF files to the
>> JIRA.
>>
>> On Tue, Feb 26, 2019 at 7:41 AM Xingbo Jiang <jiangxb1...@gmail.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> I want to send out a revised SPIP on implementing Accelerator (GPU)-aware
>>> Scheduling. It improves Spark by making it aware of GPUs exposed by cluster
>>> managers, so that Spark can match GPU resources to user task requests
>>> properly. If you have scenarios that need to run workloads (DL/ML/signal
>>> processing, etc.) on a Spark cluster with GPU nodes, please help review it
>>> and check how it fits into your use cases. Your feedback would be greatly
>>> appreciated!
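
A minimal sketch of the user-facing flow the SPIP describes: request GPUs per
executor and per task, then discover the assigned GPU addresses inside a task.
The config keys and the TaskContext.resources() accessor below are illustrative
assumptions in the spirit of the design doc, not a confirmed API.

import org.apache.spark.TaskContext
import org.apache.spark.sql.SparkSession

object GpuSchedulingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("accelerator-aware-scheduling-sketch")
      // Ask the cluster manager for 2 GPUs per executor and 1 GPU per task,
      // so the scheduler can match GPU resources to task requests.
      // (Config key names are assumptions for illustration.)
      .config("spark.executor.resource.gpu.amount", "2")
      .config("spark.task.resource.gpu.amount", "1")
      .getOrCreate()

    val result = spark.sparkContext.parallelize(0 until 4, 4).map { i =>
      // Inside a task, look up which GPU address(es) the scheduler assigned;
      // a DL/ML library would then pin its computation to those devices.
      val gpus = TaskContext.get().resources()("gpu").addresses
      (i, gpus.mkString(","))
    }.collect()

    result.foreach { case (i, gpus) => println(s"task input $i -> GPUs $gpus") }
    spark.stop()
  }
}
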
>>>
>>> # Links to SPIP and Product doc:
>>>
>>> * Jira issue for the SPIP:
>>> https://issues.apache.org/jira/browse/SPARK-24615
>>> * Google Doc:
>>> https://docs.google.com/document/d/1C4J_BPOcSCJc58HL7JfHtIzHrjU0rLRdQM3y7ejil64/edit?usp=sharing
>>> * Product Doc:
>>> https://docs.google.com/document/d/12JjloksHCdslMXhdVZ3xY5l1Nde3HRhIrqvzGnK_bNE/edit?usp=sharing
>>>
>>> Thank you!
>>>
>>> Xingbo
>>>
>>
