Hello,
If our Hadoop cluster is configured with HA and "fs.defaultFS" points to a
namespace instead of a namenode hostname - hdfs://<namespace_name>/ - then
our Spark job fails with the exception below. Is there anything we need to
configure, or is this not implemented?


Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure:
Lost task 0.3 in stage 1.0 (TID 4, <hostname>):

java.lang.IllegalArgumentException: java.net.UnknownHostException:
<namespace_name>
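
For reference, here is a rough sketch of how we think the HA client settings
could be passed to Spark explicitly via the "spark.hadoop." prefix, in case
the executors are not picking them up from hdfs-site.xml. The nameservice
"mycluster", the namenode hosts and the file path are placeholders, not our
real values:

import org.apache.spark.{SparkConf, SparkContext}

object HaNamespaceTest {
  def main(args: Array[String]): Unit = {
    // Minimal sketch, assuming a placeholder nameservice "mycluster" with
    // two namenodes nn1/nn2. Properties prefixed with "spark.hadoop." are
    // copied into the Hadoop Configuration used by the driver and executors.
    val conf = new SparkConf()
      .setAppName("ha-namespace-test") // master URL supplied by spark-submit
      .set("spark.hadoop.fs.defaultFS", "hdfs://mycluster")
      .set("spark.hadoop.dfs.nameservices", "mycluster")
      .set("spark.hadoop.dfs.ha.namenodes.mycluster", "nn1,nn2")
      .set("spark.hadoop.dfs.namenode.rpc-address.mycluster.nn1",
        "namenode1.example.com:8020")
      .set("spark.hadoop.dfs.namenode.rpc-address.mycluster.nn2",
        "namenode2.example.com:8020")
      .set("spark.hadoop.dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

    val sc = new SparkContext(conf)
    // Read through the logical nameservice URI; this is the kind of call
    // that currently dies with UnknownHostException.
    println(sc.textFile("hdfs://mycluster/tmp/sample.txt").count())
    sc.stop()
  }
}

On the cluster these properties live in hdfs-site.xml, so we would have
expected the executors to see the same values through HADOOP_CONF_DIR.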


Many thanks,
P.
