Hi All,

We are using a Shark table to dump the data, and we are getting the following
error:

Exception in thread "main" org.apache.spark.SparkException: Job aborted:
Task 1.0:0 failed 1 times (most recent failure: Exception failure:
java.io.FileNotFoundException: http://<IP>/broadcast_1)

We don't know where this error is coming from. Can anyone please explain the
cause of this error and how to handle it? The spark.cleaner.ttl is set to
4600 (seconds), which I guess is more than enough for the application to run;
the way we pass it is sketched below.
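For reference, a minimal sketch of how we set the TTL when creating the
context (the master URL and app name here are placeholders, not our actual
values; only the spark.cleaner.ttl setting is the one in question):

    import org.apache.spark.{SparkConf, SparkContext}

    // spark.cleaner.ttl is in seconds; metadata (including broadcast
    // variables) older than this may be cleaned up by the metadata cleaner.
    val conf = new SparkConf()
      .setMaster("spark://<master-host>:7077") // placeholder master URL
      .setAppName("shark-dump")                // placeholder app name
      .set("spark.cleaner.ttl", "4600")
    val sc = new SparkContext(conf)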
Spark Version : 0.9.0-incubating
Shark : 0.9.0-SNAPSHOT
Scala : 2.10.3

Thank You
Honey Joshi
Ideata Analytics


