Just follow this documentation
http://spark.apache.org/docs/1.1.1/running-on-yarn.html

Ensure that *HADOOP_CONF_DIR* or *YARN_CONF_DIR* points to the directory
which contains the (client side) configuration files for the Hadoop
cluster. These configs are used to write to the dfs and connect to the YARN
ResourceManager.
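For example, a minimal sketch of what that looks like before submitting a job (the paths here are placeholders — adjust them to wherever your Hadoop client configs and Spark examples jar actually live):

```shell
# Point Spark at the Hadoop client configs (core-site.xml, yarn-site.xml, etc.)
# /etc/hadoop/conf is a common location, but yours may differ.
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Submit in yarn-cluster mode (Spark 1.1.1 syntax); the jar name below is
# illustrative and depends on how your Spark build was packaged.
./bin/spark-submit --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples-1.1.1-hadoop2.6.0.jar 10
```

If yarn-site.xml in that directory doesn't name the real ResourceManager host, Spark falls back to defaults and you end up talking to localhost.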

Most likely you have a wrong configuration in your environment, and that's
why it is connecting to *localhost* (127.0.1.1).

Thanks
Best Regards

On Tue, Jan 6, 2015 at 8:10 PM, Sharon Rapoport <sha...@plaid.com> wrote:

> Hello,
>
> We have hadoop 2.6.0 and Yarn set up on ec2. Trying to get spark 1.1.1
> running on the Yarn cluster.
> I have of course googled around and found that this problem is solved for
> most after removing the line including 127.0.1.1 from /etc/hosts. This
> hasn’t seemed to solve this for me. Anyone has an idea where else might
> 127.0.1.1 be hiding in some conf? Looked everywhere… or is there a
> completely different problem?
>
> Thanks,
> Sharon
>
> I am getting this error:
>
> WARN network.SendingConnection: Error finishing connection to /
> 127.0.1.1:47020
> java.net.ConnectException: Connection refused
>
>