Hi,
On Tue, Nov 11, 2014 at 2:04 PM, hmxxyy <hmx...@gmail.com> wrote:
If I run bin/spark-shell without connecting a master, it can access a hdfs
file on a remote cluster with kerberos authentication.
[...]
However, if I start the master and slave on the same host and use
bin/spark-shell against them, the same access no longer works: both the
active and the standby name node produce an error. This means that when my
active name node fails over, my Spark configuration becomes invalid.
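For reference, the access being attempted looks roughly like this inside the shell (the namenode host, port, and file path are placeholders, not the real cluster values):

```scala
// Run inside bin/spark-shell, where `sc` is the SparkContext the shell provides.
// Host, port, and path below are placeholders.
val lines = sc.textFile("hdfs://active-nn.example.com:8020/user/test/data.txt")
println(lines.count())  // per the report above, this errors out once the shell is connected to a standalone master
```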
Does the Kerberos token have to be passed on to the Spark cluster?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Strange-behavior-of-spark-shell-while-accessing-hdfs-tp18549p18658.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Thanks guys for the info.
So I have to use YARN to access a Kerberos-secured cluster.
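The YARN route can be sketched as follows; this is a minimal example, assuming HADOOP_CONF_DIR already points at the secure cluster's configuration, and the principal/realm below are placeholders:

```shell
# Obtain a Kerberos TGT first (principal/realm are placeholders).
kinit hmxxyy@EXAMPLE.COM

# On Spark 1.x, yarn-client mode launches the shell against YARN; YARN takes
# care of obtaining and shipping the HDFS delegation tokens to the executors,
# which the standalone cluster manager does not do.
bin/spark-shell --master yarn-client
```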
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Strange-behavior-of-spark-shell-while-accessing-hdfs-tp18549p18677.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Pulling all my hair out...
Thanks so much.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Strange-behavior-of-spark-shell-while-accessing-hdfs-tp18549.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.