liyunzhang_intel created SPARK-20466:
----------------------------------------

             Summary: HadoopRDD#addLocalConfiguration throws NPE
                 Key: SPARK-20466
                 URL: https://issues.apache.org/jira/browse/SPARK-20466
             Project: Spark
          Issue Type: Bug
          Components: YARN
    Affects Versions: 2.0.2
            Reporter: liyunzhang_intel


In Spark 2.0.2, HadoopRDD#addLocalConfiguration throws a NullPointerException:
{code}
17/04/23 08:19:55 ERROR executor.Executor: Exception in task 439.0 in stage 16.0 (TID 986)
java.lang.NullPointerException
    at org.apache.spark.rdd.HadoopRDD$.addLocalConfiguration(HadoopRDD.scala:373)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:243)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
{code}

Suggested change: add a null check on the JobConf in addLocalConfiguration to avoid the NPE:

{code}
  /** Add Hadoop configuration specific to a single partition and attempt. */
  def addLocalConfiguration(jobTrackerId: String, jobId: Int, splitId: Int, attemptId: Int,
                            conf: JobConf) {
    val jobID = new JobID(jobTrackerId, jobId)
    val taId = new TaskAttemptID(new TaskID(jobID, TaskType.MAP, splitId), attemptId)
    // Only touch the JobConf when it is non-null to avoid the NPE above.
    if (conf != null) {
      conf.set("mapred.tip.id", taId.getTaskID.toString)
      conf.set("mapred.task.id", taId.toString)
      conf.setBoolean("mapred.task.is.map", true)
      conf.setInt("mapred.task.partition", splitId)
      conf.set("mapred.job.id", jobID.toString)
    }
  }
{code}
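
For illustration only, here is a minimal standalone sketch (not the actual Spark code path; the object name, helper name, and literal values are made up for the example) showing how the guarded pattern behaves when a null JobConf is passed in:

{code}
import org.apache.hadoop.mapred.JobConf

// Hypothetical helper mirroring the guarded section above: the JobConf is only
// updated when it is non-null, otherwise the call is a no-op.
object NullConfGuardSketch {
  def setTaskProperties(conf: JobConf, taskId: String): Unit = {
    if (conf != null) {
      conf.set("mapred.task.id", taskId)
    }
  }

  def main(args: Array[String]): Unit = {
    // Without the null check this pattern throws java.lang.NullPointerException;
    // with it, the call simply skips the property updates.
    setTaskProperties(null, "attempt_201704230819_0016_m_000439_0")
    println("null JobConf handled without an NPE")
  }
}
{code}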


