Re: [Spark Core, PySpark] Separate stage level scheduling for consecutive map functions

2021-07-31 Thread Sean Owen
No, unless I'm crazy, you can't even change resource requirements at the
job level, let alone per stage. Does it help you, though? Is something else
even able to use the GPU otherwise?

On Sat, Jul 31, 2021, 3:56 AM Andreas Kunft wrote:

> I have a setup with two work-intensive tasks: one map using the GPU,
> followed by a map using only the CPU.
>
> Using stage level resource scheduling, I request a GPU node, but would
> also like to execute the consecutive CPU map on a different executor so
> that the GPU node is not blocked.
>
> However, Spark will always combine the two maps due to the narrow
> dependency, and thus I cannot define two different resource requirements.
>
> So the question is: can I force the two map functions onto different
> executors without shuffling, or, even better, is there a plan to enable
> this by assigning different resource requirements?
>
> Best
>


[Spark Core, PySpark] Separate stage level scheduling for consecutive map functions

2021-07-31 Thread Andreas Kunft
I have a setup with two work-intensive tasks: one map using the GPU,
followed by a map using only the CPU.

Using stage level resource scheduling, I request a GPU node, but would also
like to execute the consecutive CPU map on a different executor so that the
GPU node is not blocked.

However, Spark will always combine the two maps due to the narrow
dependency, and thus I cannot define two different resource requirements.

So the question is: can I force the two map functions onto different
executors without shuffling, or, even better, is there a plan to enable
this by assigning different resource requirements?

Best


Re: Running Spark Rapids on GPU-Powered Spark Cluster

2021-07-31 Thread Gourav Sengupta
Hi Artemis,

Please do not insult people here or pass off your personal opinions as
facts. Your comments are insulting to all the big corporations that pay
salaries and provide platforms for many people here.

Best of luck with your endeavors.

Regards,
Gourav Sengupta