Dev Lakhani created SPARK-8142:
----------------------------------

             Summary: Spark Job Fails with ResultTask Class Exception
                 Key: SPARK-8142
                 URL: https://issues.apache.org/jira/browse/SPARK-8142
             Project: Spark
          Issue Type: Bug
    Affects Versions: 1.3.1
            Reporter: Dev Lakhani
When running a Spark job, I get no failures in the application code whatsoever, but an unexpected ResultTask class exception. In my job I create an RDD from HBase and, for each partition, make a REST call to an API using a REST client. This works in IntelliJ, but when I deploy to a cluster using spark-submit.sh I get:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, host): java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:185)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

These are the configs I set to override the Spark classpath, because I want to use my own Glassfish Jersey version:

sparkConf.set("spark.driver.userClassPathFirst", "true");
sparkConf.set("spark.executor.userClassPathFirst", "true");

I see no other warnings or errors in any of the logs. Unfortunately I cannot post my code, but please ask me questions that will help debug the issue. Using Spark 1.3, Hadoop 2.6.
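For context, here is a simplified sketch of the job's structure, not the actual code; the table name, endpoint, and class names are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.client.Entity;
    import scala.Tuple2;

    public class HBaseRestJob {
      public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("HBaseRestJob");
        // Override the Spark classpath so the job's own Jersey version wins.
        sparkConf.set("spark.driver.userClassPathFirst", "true");
        sparkConf.set("spark.executor.userClassPathFirst", "true");

        JavaSparkContext sc = new JavaSparkContext(sparkConf);

        // Build an RDD over an HBase table (table name is a placeholder).
        Configuration hbaseConf = HBaseConfiguration.create();
        hbaseConf.set(TableInputFormat.INPUT_TABLE, "some_table");
        JavaPairRDD<ImmutableBytesWritable, Result> rdd =
            sc.newAPIHadoopRDD(hbaseConf, TableInputFormat.class,
                ImmutableBytesWritable.class, Result.class);

        // One JAX-RS (Jersey) client per partition; the endpoint is a placeholder.
        rdd.foreachPartition(rows -> {
          Client client = ClientBuilder.newClient();
          while (rows.hasNext()) {
            Tuple2<ImmutableBytesWritable, Result> row = rows.next();
            client.target("http://api.example.com/ingest")
                  .request()
                  .post(Entity.text(row._2().toString()));
          }
          client.close();
        });

        sc.stop();
      }
    }

My assumption (possibly wrong) is that with userClassPathFirst enabled, the user classloader and Spark's own classloader can each load the scheduler classes, so ResultTask and Task end up coming from different classloaders and the cast fails.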