Thank you. HADOOP_CONF_DIR has been missing.
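In case it helps anyone else, the fix was roughly a one-line sketch like this (the path is just an example; point it at wherever your cluster's client configs live):

    # conf/spark-env.sh on every Spark node
    export HADOOP_CONF_DIR=/etc/hadoop/conf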
On Wed, Sep 24, 2014 at 4:48 PM, Matt Narrell matt.narr...@gmail.com wrote:
Yes, this works. Make sure you have HADOOP_CONF_DIR set on your Spark machines.
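Concretely, Spark resolves hdfs://namespace_name/ by reading the hdfs-site.xml under HADOOP_CONF_DIR. A minimal sketch of the HA client entries that file needs (shown as name = value shorthand for the XML <property> entries; hostnames and ports are examples, and "namespace_name" must match what fs.defaultFS points at):

    dfs.nameservices = namespace_name
    dfs.ha.namenodes.namespace_name = nn1,nn2
    dfs.namenode.rpc-address.namespace_name.nn1 = namenode1.example.com:8020
    dfs.namenode.rpc-address.namespace_name.nn2 = namenode2.example.com:8020
    dfs.client.failover.proxy.provider.namespace_name = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider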
mn
On Sep 24, 2014, at 5:35 AM, Petr Novak oss.mli...@gmail.com wrote:
Hello,
if our Hadoop cluster is configured with HA and fs.defaultFS points to a
namespace instead of a namenode hostname - hdfs://namespace_name/ - then
our Spark job fails with an exception. Is there anything to configure, or is it not implemented?
Exception in thread "main"