Hi,

On Tue, Nov 11, 2014 at 2:04 PM, hmxxyy <hmx...@gmail.com> wrote:
>
> If I run bin/spark-shell without connecting to a master, it can access an
> HDFS file on a remote cluster with Kerberos authentication.

[...]

> However, if I start the master and the slave on the same host, use
> bin/spark-shell --master spark://*.*.*.*:7077
> and run the same commands

[...]
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> "*.*.*.*.com/98.138.236.95"; destination host is: "*.*.*.*":8020;
>

When you give no master, the shell runs in local mode ("local[*]"), so Spark
authenticates to HDFS from your local machine, using your local Kerberos
credentials (ticket cache, keytab files, environment variables etc.), I guess.
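
For illustration, something like the following should work in local mode,
assuming you have a valid ticket in your local credential cache (the
hostname and path here are made up):

    $ kinit                     # obtain/renew a Kerberos ticket locally
    $ bin/spark-shell           # no --master, i.e. local[*]
    scala> sc.textFile("hdfs://namenode.example.com:8020/some/file").count()

Here the driver and the executors all run in your local JVM, so they can
read the ticket cache that kinit populated.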

When you give a "spark://*" master, the executors run inside the Spark worker
processes on the cluster machines, where your Kerberos credentials are not
available, I think. I don't know how to solve this for a standalone cluster,
though; maybe some Kerberos token must be passed on to the Spark workers? (On
YARN, I believe Spark obtains HDFS delegation tokens for the executors
automatically.)
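
One workaround that might be worth trying (an untested sketch; the principal
and keytab path are placeholders) is to log in from a keytab inside the job
itself, using Hadoop's UserGroupInformation API, so that each JVM
authenticates on its own:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.security.UserGroupInformation

    // Force Kerberos authentication and log in from a keytab that is
    // available on every worker machine (placeholder principal/path).
    val hadoopConf = new Configuration()
    hadoopConf.set("hadoop.security.authentication", "kerberos")
    UserGroupInformation.setConfiguration(hadoopConf)
    UserGroupInformation.loginUserFromKeytab(
      "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab")

This would have to run on the executors (e.g. inside a mapPartitions) before
any HDFS access, and it requires distributing the keytab to every worker,
which has its own security implications.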

Tobias
