That probably means there are not enough free resources in your cluster
to run the AM for the Spark job. Check your RM's web UI to see the
resources you have available.
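For example, the RM web UI usually listens at http://<rm-host>:8088/cluster.
A quick command-line sketch (assuming the yarn CLI from Hadoop 2.x is on
your path; <node-id> is a placeholder):

    yarn node -list                # list NodeManagers and their state
    yarn node -status <node-id>    # per-node memory/vcores used vs. capacity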
On Wed, Mar 25, 2015 at 12:08 PM, Khandeshi, Ami
ami.khande...@fmr.com.invalid wrote:
I am seeing the same behavior. I have enough resources. How do I resolve
it?
Thanks,
Ami
Hi,
On Thu, Mar 26, 2015 at 4:08 AM, Khandeshi, Ami
ami.khande...@fmr.com.invalid wrote:
I am seeing the same behavior. I have enough resources…
CPU *and* memory are sufficient? No previous (unfinished) jobs eating them?
Tobias
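One quick way to check for leftover jobs holding resources (a sketch,
assuming the yarn CLI is available; RUNNING and ACCEPTED are standard
YARN application states):

    yarn application -list -appStates RUNNING,ACCEPTED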
We had a similar problem. It turned out that the Spark driver was binding to
the external IP of the node the Spark shell was running on, causing
executors to fail to connect back to the driver.
The solution was to set SPARK_LOCAL_IP in spark-env.sh to the internal IP
of the node.
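For example, a minimal spark-env.sh sketch (10.0.0.12 is a hypothetical
internal address; substitute your node's own):

    # conf/spark-env.sh on the machine running the Spark shell/driver
    export SPARK_LOCAL_IP=10.0.0.12   # bind the driver to the internal interface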