It sounds like your job has 9 tasks and all 9 are executing simultaneously in
parallel, so this is as good as it gets, right? Or are you asking how to break
the work into more tasks, say 120 to match your 10*12 cores? In that case,
make your RDD have more partitions. For example, textFile takes a second
argument that overrides the default number of partitions determined by the
HDFS splits.
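
A minimal sketch of both approaches (the HDFS path and app name here are made
up, and 120 = your 10 executors * 12 cores):

import org.apache.spark.{SparkConf, SparkContext}

object PartitionExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("partition-example"))

    // Ask for more partitions at read time: the second argument to textFile
    // is a minimum-partitions hint that overrides the default derived from
    // the HDFS splits.
    val lines = sc.textFile("hdfs:///data/input.txt", 120)

    // Or repartition an existing RDD; note this triggers a full shuffle.
    val repartitioned = lines.repartition(120)

    println("partitions: " + repartitioned.partitions.length)
    sc.stop()
  }
}

Prefer setting the partition count at read time when you can, since
repartition has to shuffle the data across the cluster.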
On Jun 17, 2014 5:37 PM, "abhiguruvayya" <sharath.abhis...@gmail.com> wrote:

> I am creating around 10 executors with 12 cores and 7g of memory each, but
> when I launch a job not all executors are being used. For example, if my job
> has 9 tasks, only 3 executors are used with 3 tasks each, and I believe this
> is making my app slower than a MapReduce program for the same use case. Can
> anyone throw some light on executor configuration, if any? How can I use all
> the executors? I am running Spark on YARN with Hadoop 2.4.0.
