No, unless I'm mistaken you can't even change resource requirements at the
job level, let alone per stage. Does it help you, though? Is anything else
even able to use the GPU otherwise?
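
For concreteness, here is a minimal sketch of the setup described in the
quoted message below, assuming the Spark 3.1+ stage-level scheduling RDD API
(ResourceProfileBuilder, RDD.withResources); the map bodies, resource
amounts, and discovery-script path are placeholders, and as far as I recall
this path also requires dynamic allocation on YARN or Kubernetes:

import org.apache.spark.sql.SparkSession
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

object StageLevelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("stage-level-sketch").getOrCreate()
    val sc = spark.sparkContext

    // GPU resource profile; amounts and the discovery-script path are placeholders.
    val execReqs = new ExecutorResourceRequests()
      .cores(4)
      .resource("gpu", 1, "/path/to/gpuDiscovery.sh")
    val taskReqs = new TaskResourceRequests().resource("gpu", 1)
    val gpuProfile = new ResourceProfileBuilder()
      .require(execReqs)
      .require(taskReqs)
      .build()

    // The two maps are pipelined into a single stage because of the narrow
    // dependency, so the CPU-only map runs under the same GPU profile; there
    // is no way to attach a second profile to just the second map.
    val result = sc.parallelize(1 to 1000)
      .map(x => x * 2)     // stand-in for the GPU-heavy map
      .withResources(gpuProfile)
      .map(x => x + 1)     // stand-in for the CPU-only map
      .collect()

    println(result.take(5).mkString(", "))
    spark.stop()
  }
}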

On Sat, Jul 31, 2021, 3:56 AM Andreas Kunft <andreas.ku...@gmail.com> wrote:

> I have a setup with two work-intensive tasks: one map using the GPU,
> followed by a map using only the CPU.
>
> Using stage-level resource scheduling, I request a GPU node, but I would
> also like to execute the subsequent CPU map on a different executor so
> that the GPU node is not blocked.
>
> However, Spark always combines the two maps into one stage due to the narrow
> dependency, so I cannot define two different resource requirements.
>
> So the question is: can I force the two map functions onto different
> executors without shuffling, or, even better, is there a plan to enable this
> by assigning different resource requirements?
>
> Best
>
