I have verified that this error exists on my system as well, and the suggested 
workaround also works.
Spark versions: 1.5.1, 1.5.2
Mesos version: 0.21.1
CDH version: 4.7

I have set spark-env.sh to export HADOOP_CONF_DIR pointing at the correct 
directory, and I have also symlinked hdfs-site.xml into $SPARK_HOME/conf. 
I agree that it should work, but it doesn't.
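
Concretely, the relevant pieces look roughly like this (the paths are from my 
environment and are illustrative):

    # $SPARK_HOME/conf/spark-env.sh
    export HADOOP_CONF_DIR=/etc/hadoop/conf

    # symlink the HDFS client config into Spark's conf dir
    ln -s /etc/hadoop/conf/hdfs-site.xml $SPARK_HOME/conf/hdfs-site.xml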

I have also tried including the correct Hadoop configuration files in the 
application jar.
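
I may also try shipping the file explicitly at submit time via spark-submit's 
--files flag; a sketch (the master URL, class, and jar names here are 
placeholders):

    spark-submit \
      --master mesos://zk://zk1.example.com:2181/mesos \
      --files /etc/hadoop/conf/hdfs-site.xml \
      --class com.example.MyApp \
      myapp.jar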

Note: it works fine from spark-shell, but not from spark-submit.

Dave

-----Original Message-----
From: Marcelo Vanzin [mailto:van...@cloudera.com] 
Sent: Tuesday, September 15, 2015 7:47 PM
To: Adrian Bridgett
Cc: user
Subject: Re: hdfs-ha on mesos - odd bug

On Mon, Sep 14, 2015 at 6:55 AM, Adrian Bridgett <adr...@opensignal.com> wrote:
> 15/09/14 13:00:25 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 10.1.200.245): java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
>     at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
>     at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)

This looks like you're trying to connect to an HA HDFS service but you have not 
provided the proper hdfs-site.xml for your app; then, instead of recognizing 
"nameservice1" as an HA nameservice, it thinks it's an actual NN address, tries 
to connect to it, and fails.

If you provide the correct hdfs-site.xml to your app (by placing it in 
$SPARK_HOME/conf or setting HADOOP_CONF_DIR to point to the conf directory), it 
should work.
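
In case it helps, the HA-related entries that hdfs-site.xml needs look 
something like the following (the nameservice name, namenode IDs, and hosts 
are illustrative; use your cluster's actual values):

    <!-- client-side hdfs-site.xml, illustrative values -->
    <property>
      <name>dfs.nameservices</name>
      <value>nameservice1</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.nameservice1</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.nameservice1.nn1</name>
      <value>namenode1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.nameservice1.nn2</name>
      <value>namenode2.example.com:8020</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.nameservice1</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

With dfs.nameservices defined, the HDFS client resolves "nameservice1" through 
the failover proxy provider rather than treating it as a hostname.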
