Re: Spark shell never leaves ACCEPTED state in YARN CDH5

2015-03-25 Thread Marcelo Vanzin
That probably means there are not enough free resources in your cluster
to run the AM for the Spark job. Check your RM's web UI to see the
resources you have available.
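
For example, the stock YARN CLI shows roughly the same picture (a
quick sketch, assuming a Hadoop 2.x client; the RM web UI usually
listens on port 8088):

  # applications stuck waiting for an AM container
  yarn application -list -appStates ACCEPTED

  # nodes the RM knows about, to spot missing or unhealthy NMs
  yarn node -list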

On Wed, Mar 25, 2015 at 12:08 PM, Khandeshi, Ami
ami.khande...@fmr.com.invalid wrote:
 I am seeing the same behavior.  I have enough resources…..  How do I resolve
 it?



 Thanks,



 Ami



-- 
Marcelo




Spark shell never leaves ACCEPTED state in YARN CDH5

2015-03-25 Thread Khandeshi, Ami
I am seeing the same behavior.  I have enough resources.  How do I resolve 
it?

Thanks,

Ami


Re: Spark shell never leaves ACCEPTED state in YARN CDH5

2015-03-25 Thread Tobias Pfeiffer
Hi,

On Thu, Mar 26, 2015 at 4:08 AM, Khandeshi, Ami 
ami.khande...@fmr.com.invalid wrote:

 I am seeing the same behavior.  I have enough resources…..


CPU *and* memory are sufficient? No previous (unfinished) jobs eating them?
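
From the command line, something like this should reveal leftover
jobs (a sketch using the standard YARN CLI; the application ID below
is a made-up placeholder):

  # anything still holding or waiting for containers?
  yarn application -list -appStates RUNNING,ACCEPTED

  # kill a stale job by its ID
  yarn application -kill application_1427290000000_0001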

Tobias


Re: Spark shell never leaves ACCEPTED state in YARN CDH5

2015-03-25 Thread Dean Chen
We had a similar problem. It turned out that the Spark driver was
binding to the external IP of the CLI node the Spark shell was running
on, causing the executors to fail to connect back to the driver.

The solution was to set SPARK_LOCAL_IP in spark-env.sh to the internal
IP of the CLI node.
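
Roughly like this in conf/spark-env.sh on the CLI node (the address
below is a made-up example; substitute the node's actual internal IP):

  # bind the driver to the internal interface so executors can
  # connect back to it
  export SPARK_LOCAL_IP=10.0.0.5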


--
Dean Chen

On Wed, Mar 25, 2015 at 12:18 PM, Marcelo Vanzin van...@cloudera.com
wrote:

 That probably means there are not enough free resources in your cluster
 to run the AM for the Spark job. Check your RM's web UI to see the
 resources you have available.

 On Wed, Mar 25, 2015 at 12:08 PM, Khandeshi, Ami
 ami.khande...@fmr.com.invalid wrote:
  I am seeing the same behavior.  I have enough resources…..  How do I
 resolve
  it?
 
 
 
  Thanks,
 
 
 
  Ami



 --
 Marcelo
