Hi,
On Tue, Nov 11, 2014 at 2:04 PM, hmxxyy hmx...@gmail.com wrote:
If I run bin/spark-shell without connecting to a master, it can access an HDFS
file on a remote cluster with Kerberos authentication.
[...]
However, if I start the master and slave on the same host and use
bin/spark-shell [...]
You need to set the Spark configuration property spark.yarn.access.namenodes
to your namenode, e.g.:
spark.yarn.access.namenodes=hdfs://mynamenode:8020
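For reference, a minimal sketch of how that property can be passed when launching the shell. This is a config fragment, not a tested invocation; the namenode host and port are the placeholder values from the example above:

```shell
# Pass the property on the spark-shell command line (YARN mode),
# or set it persistently in conf/spark-defaults.conf.
# hdfs://mynamenode:8020 is the placeholder namenode from above.
bin/spark-shell --master yarn \
  --conf spark.yarn.access.namenodes=hdfs://mynamenode:8020
```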
Similarly, I'm curious whether you're also running high-availability HDFS with
an HA nameservice. I currently have HA HDFS and Kerberos, and I've [...]
Only YARN mode is supported with Kerberos; you can't use a spark:// master
with a kerberized cluster.
Tobias Pfeiffer wrote:
When you give a spark://* master, Spark will run on a different machine,
where you have not yet authenticated to HDFS, I think. I don't know how to
solve this, though; maybe some [...]
Thanks, guys, for the info.
I have to use YARN to access a Kerberos cluster.
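Putting the thread's advice together, here is a hedged sketch of the full workflow against a kerberized cluster. The principal, namenode host, and file path are placeholders, not values from this thread:

```shell
# 1. Obtain a Kerberos ticket first (principal is a placeholder).
kinit user@EXAMPLE.COM

# 2. Launch spark-shell against YARN -- as noted above, a spark://
#    master does not carry your Kerberos credentials to the workers.
bin/spark-shell --master yarn \
  --conf spark.yarn.access.namenodes=hdfs://mynamenode:8020
```

Inside the shell, a read such as `sc.textFile("hdfs://mynamenode:8020/some/path").count()` (path is a placeholder) should then authenticate via the ticket obtained by kinit.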
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Strange-behavior-of-spark-shell-while-accessing-hdfs-tp18549p18677.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.