Hi,

Can you check whether the RDD is partitioned correctly, with the expected
number of partitions (if you are setting the partition count manually)?
Try using a HashPartitioner when reading the files.
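
Something along these lines, as a rough sketch (the input path, the
key/value split, and the partition count of 120 are placeholders for
whatever your job actually does):

import org.apache.spark.{SparkConf, SparkContext, HashPartitioner}

object RepartitionExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("repartition-example"))

    // Read the input as a pair RDD; path and key extraction are illustrative.
    val records = sc.textFile("hdfs:///data/input", minPartitions = 120)
      .map { line =>
        val parts = line.split(",", 2)
        (parts(0), parts(1))
      }

    // Spread keys evenly across a fixed number of partitions so tasks are
    // not concentrated on one executor. Size the count to your cluster,
    // e.g. roughly 2-3x the total executor cores.
    val partitioned = records.partitionBy(new HashPartitioner(120))

    println(partitioned.getNumPartitions)
    sc.stop()
  }
}

If your keys are heavily skewed, hash partitioning alone won't help; in
that case look at salting the keys before the shuffle.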

One way to debug this is to compare the number of records each executor
processes against the others, in the Stages tab of the Spark UI.

Kuchekar, Nilesh

On Tue, Jul 19, 2016 at 8:16 PM, Aaron Jackson <ajack...@pobox.com> wrote:

> Hi,
>
> I have a cluster with 15 nodes, of which 5 are HDFS nodes.  I kick off a
> job that creates some 120 stages.  Eventually, the active and pending
> stages reduce down to a small bottleneck, and without fail the 10 (or so)
> remaining running tasks are always allocated to the same executor on the
> same host.
>
> Sooner or later, it runs out of memory ... or some other resource.  It
> falls over, and then the tasks are reallocated to another executor.
>
> Why do we see such heavy concentration of tasks onto a single executor
> when other executors are free?  Were the tasks assigned to an executor when
> the job was decomposed into stages?
>
