Yes, when I checked the YARN logs for that particular failed application ID, I got the
following error:
ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting
for 10 ms. Please check earlier log output for errors. Failing the
application
For this error, I need to change the
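The wait the error message refers to is governed by the Spark-on-YARN property spark.yarn.am.waitTime (how long the ApplicationMaster waits for the SparkContext to initialize). A hedged sketch of raising it at submit time follows; the 300s value and jar name are illustrative placeholders, and note the timeout usually only masks the real failure, which the earlier log lines the error points at should reveal:

```shell
# Sketch only: give the ApplicationMaster longer to wait for SparkContext
# initialization. spark.yarn.am.waitTime is a real Spark-on-YARN property;
# the 300s value and the jar name are illustrative.
spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.am.waitTime=300s \
  my_streaming_app.jar
```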
I'm getting a Spark exception. Please look at this log trace:
http://pastebin.com/xL9jaRUa
Thanks,
https://in.linkedin.com/in/ramkumarcs31
On Wed, Aug 19, 2015 at 10:20 PM, Hari Shreedharan
hshreedha...@cloudera.com wrote:
It looks like you are having
Thanks a lot for your suggestion. I modified HADOOP_CONF_DIR in
spark-env.sh so that core-site.xml is under HADOOP_CONF_DIR. I can now
see the logs like the ones you showed above. Now I can run for 3
minutes and store results every minute. After some time, there is
an
We are using Cloudera 5.3.1. Since it is one of the earlier versions of CDH,
it doesn't support the latest version of Spark, so I installed Spark 1.4.1
separately on my machine. I couldn't do spark-submit in cluster
mode. How do I put core-site.xml under the classpath? It would be very helpful if you
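One common way to get core-site.xml onto the classpath is to point HADOOP_CONF_DIR (and YARN_CONF_DIR) at the directory holding the Hadoop client configs before running spark-submit; spark-submit reads those files and ships them to the YARN containers. A minimal sketch, assuming the usual CDH config path /etc/hadoop/conf (adjust to wherever your cluster actually keeps core-site.xml and yarn-site.xml):

```shell
# conf/spark-env.sh in the standalone Spark 1.4.1 install (sketch).
# /etc/hadoop/conf is the typical CDH location of core-site.xml and
# yarn-site.xml -- an assumption here; point it at your real config dir.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
```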
Hi,
I have a cluster of 1 master and 2 slaves. I'm running Spark Streaming on the
master and I want to utilize all the nodes in my cluster. I specified some
parameters like driver memory and executor memory in my code. When I
give --deploy-mode cluster --master yarn-cluster in my spark-submit, it
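One thing worth noting: in yarn-cluster mode the driver runs inside YARN, so driver memory set in code via SparkConf is applied too late to take effect; resource settings generally need to go on the spark-submit command line (or into spark-defaults.conf). A hedged sketch of a cluster-mode submission, where the sizes, class name, and jar path are all placeholders:

```shell
# Sketch: submit in cluster mode with resources given on the command line
# rather than in code. Class name, jar, and sizes are illustrative only.
spark-submit \
  --master yarn-cluster \
  --driver-memory 2g \
  --executor-memory 2g \
  --num-executors 2 \
  --class com.example.StreamingApp \
  streaming-app.jar
```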
Yes, this file is available at this path on the same machine where I'm
running Spark. Later I copied the spark-1.4.1 folder to all the other machines
in my cluster, but I'm still facing the same issue.
Thanks,
https://in.linkedin.com/in/ramkumarcs31
On Thu, Aug 13, 2015 at 1:17 PM, Akhil Das