I have figured out the problem here. It turned out that there was a problem
with my SparkConf when I was running my application with YARN in cluster
mode: I was setting my master to local[4] inside my application, whereas
I was setting it to yarn-cluster with spark-submit. Now I have changed my
code so that the master is set only through spark-submit.
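For anyone hitting the same conflict: a master set in code via SparkConf takes precedence over the --master flag passed to spark-submit, so the application silently ran in local mode. A minimal sketch of the corrected setup (class name, app name, and batch interval are illustrative; this needs a Spark cluster to actually run):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingJob {
  def main(args: Array[String]): Unit = {
    // Do NOT call .setMaster("local[4]") here: a master hard-coded in the
    // application overrides whatever --master is passed to spark-submit.
    val conf = new SparkConf().setAppName("StreamingJob")
    val ssc = new StreamingContext(conf, Seconds(10))
    // ... define the streaming computation on ssc here ...
    ssc.start()
    ssc.awaitTermination()
  }
}
```

Leaving the master out of the code lets the same jar run under local[4] for testing and yarn-cluster in production without a rebuild.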
For Spark streaming, you must always set *--executor-cores* to a value
which is >= 2. Otherwise it will not do any processing.
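A sketch of what that looks like on the command line (class and jar names are placeholders, not from the thread): with receiver-based streaming, a receiver occupies one core, so with a single core there is nothing left over to process the received data.

```shell
# Placeholder class/jar names. --executor-cores must be >= 2 because the
# streaming receiver pins one core, leaving the rest for batch processing.
spark-submit \
  --class com.example.StreamingJob \
  --master yarn-cluster \
  --executor-cores 2 \
  --num-executors 3 \
  my-streaming-app.jar
```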
Thanks
Best Regards
On Sat, Nov 22, 2014 at 8:39 AM, pankaj channe pankajc...@gmail.com wrote:
I have seen similar posts on this issue but could not find a solution.
That doesn't seem to be the problem though. It processes but then stops.
Presumably there are many executors.
On Nov 22, 2014 9:40 AM, Akhil Das ak...@sigmoidanalytics.com wrote:
For Spark streaming, you must always set *--executor-cores* to a value
which is >= 2. Otherwise it will not do any processing.
Thanks Akhil for your input.
I have already tried with 3 executors and it still results in the same
problem. So, as Sean mentioned, the problem does not seem to be related to
that.
On Sat, Nov 22, 2014 at 11:00 AM, Sean Owen so...@cloudera.com wrote:
That doesn't seem to be the problem
I have seen similar posts on this issue but could not find a solution.
Apologies if this has been discussed here before.
I am running a Spark Streaming job with YARN on a 5-node cluster. I am
using the following command to submit my streaming job.
spark-submit --class class_name --master yarn-cluster