Hi Todd,

There are two levels of locality-based scheduling when you run Spark on
YARN with dynamic allocation enabled:

1. Container allocation is based on the locality ratio of pending tasks;
this is YARN-specific and only works with dynamic allocation enabled.
2. Task scheduling is locality-aware; this part is the same across all
cluster managers.
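
For reference, a minimal spark-submit invocation that enables the first
level (dynamic allocation, which requires the external shuffle service)
and tunes the delay-scheduling wait used by the second level might look
like this; the application jar and class name are just placeholders:

```shell
# Level 1: dynamic allocation lets Spark request YARN containers based on
# the preferred locations of pending tasks; it requires the external
# shuffle service on the NodeManagers.
# Level 2: spark.locality.wait controls how long the task scheduler waits
# for a locality-preferred slot before falling back to a less-local one
# (3s is the default; shown here just to make the knob visible).
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.locality.wait=3s \
  --class com.example.MyApp \
  my-app.jar
```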

Thanks
Saisai

On Thu, Jan 28, 2016 at 10:50 AM, Todd <bit1...@163.com> wrote:

> Hi,
> I am kind of confused about how data locality is honored when  spark  is
> running on yarn(client or cluster mode),can someone please elaberate on
> this? Thanks!
