Could you share your log?

On Wed, Jan 20, 2016 at 5:37 AM, Siddharth Ubale <
siddharth.ub...@syncoms.com> wrote:

>
>
> Hi,
>
>
>
> I am running a Spark Job on the yarn cluster.
>
> The Spark job is a Spark Streaming application which reads JSON from a
> Kafka topic, inserts the JSON values into HBase tables via Phoenix, and
> then sends certain messages out to a websocket if the JSON satisfies
> certain criteria.
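>
> At a high level the pipeline is roughly the following (the broker,
> topic, ZooKeeper quorum, and table/column names are placeholders, and
> the websocket push is elided):
>
> import java.sql.DriverManager
>
> import kafka.serializer.StringDecoder
> import org.apache.spark.SparkConf
> import org.apache.spark.streaming.kafka.KafkaUtils
> import org.apache.spark.streaming.{Seconds, StreamingContext}
>
> object JsonToPhoenix {
>   def main(args: Array[String]): Unit = {
>     val conf = new SparkConf().setAppName("json-to-phoenix")
>     val ssc  = new StreamingContext(conf, Seconds(5))
>
>     // Receiver-less (direct) Kafka stream; values are the raw JSON strings.
>     val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
>       ssc, Map("metadata.broker.list" -> "broker1:9092"), Set("events"))
>
>     stream.foreachRDD { rdd =>
>       rdd.foreachPartition { records =>
>         // One Phoenix JDBC connection per partition rather than per record.
>         val conn = DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181")
>         val stmt = conn.prepareStatement("UPSERT INTO EVENTS (ID, PAYLOAD) VALUES (?, ?)")
>         records.foreach { case (_, json) =>
>           stmt.setString(1, java.util.UUID.randomUUID().toString)
>           stmt.setString(2, json)
>           stmt.executeUpdate()
>           // if the JSON meets the criteria, push it to the websocket here
>         }
>         conn.commit() // Phoenix does not auto-commit by default
>         conn.close()
>       }
>     }
>
>     ssc.start()
>     ssc.awaitTermination()
>   }
> }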
>
>
>
> My cluster is a 3-node cluster with 24 GB of RAM and 24 cores in total.
>
>
>
> Now :
>
> 1. When I submit the job with 10 GB of memory, the application fails,
> saying there is insufficient memory to run the job.
>
> 2. When the job is submitted with 6 GB of RAM (rough submit command
> sketched after this list), it does not always run successfully. Common
> issues faced:
>
>                 a. Container exited with a non-zero exit code 1, and
> after multiple such warnings the job is finished.
>
>                 b. The failed job reports that it was unable to find a
> file in HDFS named something like __hadoop_conf__xxxxxx.zip
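>
> For reference, the job is submitted along these lines (the class, jar,
> and the exact memory split across executors are placeholders; the
> requests add up to roughly the 6 GB mentioned above):
>
>   spark-submit \
>     --master yarn-cluster \
>     --class com.example.StreamingJob \
>     --num-executors 3 \
>     --executor-cores 2 \
>     --executor-memory 1536m \
>     --driver-memory 1g \
>     streaming-job.jar
>
> Each YARN container has to fit on a single node, which is about 8 GB
> here if the 24 GB is spread evenly across the 3 nodes.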
>
>
>
> Can someone please let me know why I am seeing the above two issues?
>
>
>
> Thanks,
>
> Siddharth Ubale,
>
>
>
