[ https://issues.apache.org/jira/browse/SPARK-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577371#comment-14577371 ]
Dev Lakhani commented on SPARK-8142:
------------------------------------

Update: having resolved some dependency issues, the current state is this:

hadoop-common 2.6.0 - provided
hadoop-client 2.6.0 - provided
hadoop-hdfs 2.6.0 - provided
spark-sql_2.10 - provided
spark-core_2.10 - provided
hbase-client 1.1.0 - included/packaged
hbase-protocol 1.1.0 - included/packaged
hbase-server 1.1.0 - included/packaged

(A sketch of how these scopes might look in a Maven pom is at the end of this message.)

I run the job and run into this: https://issues.apache.org/jira/browse/SPARK-1867, which suggests a class is missing. How do I find which one? There is no explicit ClassNotFoundException, but something might still be missing; how can I find this out? (One diagnostic idea is sketched at the end of this message.)

> Spark Job Fails with ResultTask ClassCastException
> --------------------------------------------------
>
>                 Key: SPARK-8142
>                 URL: https://issues.apache.org/jira/browse/SPARK-8142
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.3.1
>            Reporter: Dev Lakhani
>
> When running a Spark job, I get no failures in the application code
> whatsoever, but a strange ResultTask ClassCastException. In my job, I create
> an RDD from HBase and, for each partition, make a REST call to an API using
> a REST client. This worked in IntelliJ, but when I deploy to a cluster using
> spark-submit.sh I get:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in
> stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0
> (TID 3, host): java.lang.ClassCastException:
> org.apache.spark.scheduler.ResultTask cannot be cast to
> org.apache.spark.scheduler.Task
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:185)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> These are the configs I set to override the Spark classpath, because I want
> to use my own Glassfish Jersey version:
> sparkConf.set("spark.driver.userClassPathFirst", "true");
> sparkConf.set("spark.executor.userClassPathFirst", "true");
> I see no other warnings or errors in any of the logs.
> Unfortunately I cannot post my code, but please ask me questions that will
> help debug the issue. Using Spark 1.3.1 and Hadoop 2.6.
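A minimal sketch of how the scopes listed in the update above might look in a
Maven pom.xml. This assumes the build is Maven (not stated in the ticket); the
group ids are the standard ones for these artifacts, and only two entries are
spelled out since the rest follow the same pattern:

    <dependencies>
      <!-- Cluster-provided jars: scope "provided" keeps them out of the app jar -->
      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.3.1</version>
        <scope>provided</scope>
      </dependency>
      <!-- hadoop-common, hadoop-client, hadoop-hdfs (org.apache.hadoop, 2.6.0)
           and spark-sql_2.10 take the same "provided" scope -->

      <!-- Packaged jars: the default "compile" scope ships them in the app jar -->
      <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.1.0</version>
      </dependency>
      <!-- hbase-protocol and hbase-server (org.apache.hbase, 1.1.0) likewise -->
    </dependencies>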
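On the question of finding the missing class: one diagnostic (my suggestion,
not something SPARK-1867 prescribes) is to enable JVM class-loading tracing on
the executors and inspect their stderr in the Spark UI; the underlying
ClassNotFoundException that SPARK-1867 says gets masked should then be easier
to pin down. A sketch, set on the SparkConf before the context is created:

    // -verbose:class is a standard JVM flag that logs every class as it is
    // loaded and which jar it came from; the output lands in executor stderr.
    sparkConf.set("spark.executor.extraJavaOptions", "-verbose:class");

Comparing a run with userClassPathFirst switched off can also show whether the
user-first class loader is what makes the class unresolvable.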
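Since the quoted description cannot include the actual code, here is a rough
Java sketch of the job shape it describes (an RDD read from HBase, one REST
pass per partition). The table name and the callRestApi helper are
hypothetical stand-ins, not the reporter's code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class HBaseRestJob {
        public static void main(String[] args) {
            SparkConf sparkConf = new SparkConf().setAppName("hbase-rest-job");
            JavaSparkContext sc = new JavaSparkContext(sparkConf);

            Configuration hbaseConf = HBaseConfiguration.create();
            hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table"); // hypothetical table

            // RDD of HBase rows, one Spark partition per table region by default
            JavaPairRDD<ImmutableBytesWritable, Result> rows = sc.newAPIHadoopRDD(
                    hbaseConf, TableInputFormat.class,
                    ImmutableBytesWritable.class, Result.class);

            // One REST call per row, issued partition by partition
            rows.foreachPartition(it -> {
                while (it.hasNext()) {
                    callRestApi(it.next()._2());
                }
            });

            sc.stop();
        }

        // Stand-in for the Jersey-based REST client call the job actually makes
        private static void callRestApi(Result row) {
            // ... build and send the HTTP request for this row ...
        }
    }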
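On the userClassPathFirst configs in the quoted description: those properties
are marked experimental in Spark 1.3, and a commonly used alternative for
shipping your own Jersey is to relocate the conflicting packages with the
maven-shade-plugin so they can never collide with the copy on the cluster
classpath. A sketch, again assuming Maven, with the Jersey package pattern as
an illustrative guess:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
          <configuration>
            <relocations>
              <!-- Rename the bundled Jersey classes so the executor's own
                   copy cannot shadow them -->
              <relocation>
                <pattern>org.glassfish.jersey</pattern>
                <shadedPattern>myapp.shaded.org.glassfish.jersey</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>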