I tried spark.locality.wait=1s. It helped to some extent, but not
completely. I still see some tasks (~40% of the time) getting scheduled on
the same executor (8 tasks at once) even after 2 seconds have elapsed.
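For reference, the delay-scheduling waits can also be tuned per locality level, not just through the global spark.locality.wait. A minimal sketch of how that might look on the submit command (the zeroed values here are purely illustrative, not a recommendation):

```shell
# Lower Spark's delay-scheduling waits so the scheduler falls back to
# other executors sooner instead of waiting for a local slot.
# spark.locality.wait.* are standard Spark properties; values are examples.
spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.locality.wait=0s \
  --conf spark.locality.wait.process=0s \
  --conf spark.locality.wait.node=0s \
  --conf spark.locality.wait.rack=0s \
  ...
```

Setting these to 0s trades data locality for scheduling spread, so whether it helps depends on how expensive remote reads are for the workload.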
Can someone clarify why it works like this? I asked on Stack Overflow
(http://st
Any update on this, guys?
On Wed, Dec 28, 2016 at 10:19 AM, Nishant Kumar <nishant.ku...@applift.com>
wrote:
I have updated my question:
http://stackoverflow.com/questions/41345552/spark-streaming-with-yarn-executors-not-fully-utilized
On Wed, Dec 28, 2016 at 9:49 AM, Nishant Kumar <nishant.ku...@applift.com>
wrote:
Hi,
I am running spark streaming with Yarn with -
*spark-submit --master yarn --deploy-mode cluster --num-executors 2
--executor-memory 8g --driver-memory 2g --executor-cores 8 ..*
I am consuming Kafka via the DirectStream approach (no receiver). I have 2
topics (each with 3 partitions).
I
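One thing worth checking against the setup above (this is an inference from the numbers in the thread, not a confirmed diagnosis): the Kafka direct stream creates one Spark task per Kafka partition per batch, so 2 topics with 3 partitions each can never occupy more than 6 of the 16 core slots at the consume stage. A back-of-the-envelope check:

```python
# With the Kafka direct stream, each batch maps 1:1 from Kafka partitions
# to RDD partitions, so a batch cannot use more cores than partitions.
topics = 2
partitions_per_topic = 3       # from the setup described in this thread
num_executors = 2              # --num-executors 2
cores_per_executor = 8         # --executor-cores 8

tasks_per_batch = topics * partitions_per_topic
total_cores = num_executors * cores_per_executor

print("tasks per batch:", tasks_per_batch)                      # 6
print("core slots:", total_cores)                               # 16
print("max utilization:", f"{tasks_per_batch / total_cores:.0%}")  # 38%
```

If that is the bottleneck, the usual options are adding Kafka partitions or calling repartition() on the stream before the heavy transformations, at the cost of a shuffle.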