I have a Spark application that runs perfectly in local mode with 8 threads,
but when deployed on a single-node cluster it gives the following error:

ERROR TaskSchedulerImpl: Lost executor 0 on 192.168.42.202: Uncaught exception
Spark assembly has been built with Hive, including Datanucleus jars on
classpath
14/06/21 04:18:53 ERROR TaskSetManager: Task 2.0:0 failed 3 times; aborting
job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Task 2.0:0 failed 3 times, most recent failure: Exception
failure in TID 7 on host 192.168.42.202: java.lang.NoSuchFieldError:
INSTANCE
        org.apache.http.entity.ContentType.parse(ContentType.java:229)
...

This is strange, as this kind of error is supposed to be caught by the
compiler rather than thrown by the JVM (unless Spark has changed the content
of a class internally, which should be impossible because the class is
shipped in the uber-jar, not in a closure). Also, I can confirm that the
class that declares INSTANCE as a field is in the uber-jar, so there is
really no reason for Spark to throw it.
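
For what it's worth, this is the kind of check I am planning to run on the
executors to see which jar that class is actually loaded from at runtime
(just a sketch; the app name and partition count are arbitrary, and the
shadowing explanation in the comment is only my guess):

    import org.apache.spark.{SparkConf, SparkContext}

    object WhereIsContentType {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("where-is-contenttype"))

        // On each executor, report which jar org.apache.http.entity.ContentType
        // (the class named in the stack trace) was loaded from. If it is not my
        // uber-jar, then some older httpclient/httpcore on the cluster classpath
        // might be shadowing the version I bundled -- but that is only a guess.
        val locations = sc.parallelize(1 to 4, 4).map { _ =>
          val cls = Class.forName("org.apache.http.entity.ContentType")
          val src = cls.getProtectionDomain.getCodeSource
          val jar = if (src == null) "<bootstrap/unknown>" else src.getLocation.toString
          (java.net.InetAddress.getLocalHost.getHostName, jar)
        }.distinct().collect()

        locations.foreach { case (host, jar) => println(host + " -> " + jar) }
        sc.stop()
      }
    }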

Here is another, independent question: I've also encountered several errors
that only appear in cluster mode, and they are hard to fix because I cannot
debug them there. Is there a local cluster-simulation mode that surfaces all
of these errors yet still allows me to attach a debugger?
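
To make the question concrete, here is roughly what I am hoping for. I have
seen a local-cluster[...] master string used inside Spark's own test suites,
but I don't know whether it is meant for end users; the jar path below is
just a placeholder:

    import org.apache.spark.{SparkConf, SparkContext}

    // "local-cluster[workers, coresPerWorker, memoryPerWorkerMB]" is what I have
    // seen in Spark's test suites -- it spawns separate executor JVMs on the local
    // machine, so classpath and serialization problems should surface, while the
    // driver still runs in my debugger.
    val conf = new SparkConf()
      .setAppName("local-cluster-debug")
      .setMaster("local-cluster[2,1,512]")
      .setJars(Seq("target/scala-2.10/my-app-assembly.jar")) // placeholder jar path

    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 100, 4).map(_ * 2).sum())
    sc.stop()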

Yours,
Peng


