We had a similar problem. It turned out that the Spark driver was binding to
the external IP of the CLI node the Spark shell was running on, which caused
the executors to fail to connect to the driver.

The solution was to set SPARK_LOCAL_IP to the internal IP of the CLI node by
adding "export SPARK_LOCAL_IP=<internal ip here>" to spark-env.sh.
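
For reference, the relevant bit of conf/spark-env.sh on that node ended up
looking roughly like the sketch below (10.0.0.5 is just a placeholder;
substitute your node's actual internal address):

    # conf/spark-env.sh on the node running the Spark shell
    # Bind the driver to the internal interface so executors can reach it.
    # 10.0.0.5 is a placeholder for this node's internal IP.
    export SPARK_LOCAL_IP=10.0.0.5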


--
Dean Chen

On Wed, Mar 25, 2015 at 12:18 PM, Marcelo Vanzin <van...@cloudera.com>
wrote:

> That probably means there are not enough free resources in your cluster
> to run the AM for the Spark job. Check your RM's web UI to see the
> resources you have available.
>
> On Wed, Mar 25, 2015 at 12:08 PM, Khandeshi, Ami
> <ami.khande...@fmr.com.invalid> wrote:
> > I am seeing the same behavior.  I have enough resources…  How do I
> > resolve it?
> >
> >
> >
> > Thanks,
> >
> >
> >
> > Ami
>
>
>
> --
> Marcelo
>
