[ https://issues.apache.org/jira/browse/SPARK-20466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171009#comment-16171009 ]

liyunzhang_intel commented on SPARK-20466:
------------------------------------------

[~stakiar]: on which TPC-DS query did this exception happen? I have also hit it in another benchmark, TPCx-BB.
{quote}
This cache uses soft references, so the JVM may reclaim entries from the map whenever there is some GC pressure, in which case any get request on the key will return null. The race condition is that the #getJobConf method first checks whether the cache contains the key and then retrieves it. Between the containsKey and the get, it is possible that the key is GCed by the JVM.
{quote}
Is this exception caused by the cache holding soft references, so that {{HadoopRDD.containsCachedMetadata(jobConfCacheKey)}} can return true and yet the subsequent get returns {{null}} once GC clears the entry? If the branch is changed to retrieve the value once into a local variable, e.g.
{code}
else {
  val cachedJobConf = HadoopRDD.getCachedMetadata(jobConfCacheKey)
  if (cachedJobConf != null) {
    logDebug("Re-using cached JobConf")
    cachedJobConf.asInstanceOf[JobConf]
  }
}
{code}
then a GC between the check and the retrieval can no longer turn the cached {{JobConf}} into {{null}}, correct?
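
To make the race concrete, here is a minimal, self-contained sketch (using Guava's {{CacheBuilder}} with {{softValues()}} as a stand-in for the metadata cache; this is not the actual SparkEnv code): the check-then-act lookup can see the key and still read {{null}} afterwards, while a single lookup held in a local variable cannot.
{code}
import com.google.common.cache.{Cache, CacheBuilder}

object SoftCacheRaceSketch {
  // Stand-in for the soft-reference metadata cache: the JVM may clear any
  // value under GC pressure, after which lookups return null.
  private val cache: Cache[String, AnyRef] =
    CacheBuilder.newBuilder().softValues().build[String, AnyRef]()

  // Racy check-then-act: between containsKey and getIfPresent, GC can clear
  // the soft reference, so the second call may still return null.
  def racyLookup(key: String): AnyRef =
    if (cache.asMap().containsKey(key)) cache.getIfPresent(key) else null

  // Safe: one lookup, pinned by a local strong reference; null-check that
  // single result instead of consulting the cache twice.
  def safeLookup(key: String): Option[AnyRef] =
    Option(cache.getIfPresent(key))
}
{code}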





> HadoopRDD#addLocalConfiguration throws NPE
> ------------------------------------------
>
>                 Key: SPARK-20466
>                 URL: https://issues.apache.org/jira/browse/SPARK-20466
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 2.0.2
>            Reporter: liyunzhang_intel
>            Priority: Minor
>         Attachments: NPE_log
>
>
> In Spark 2.0.2, the following NPE is thrown:
> {code}
> 17/04/23 08:19:55 ERROR executor.Executor: Exception in task 439.0 in stage 16.0 (TID 986)
> java.lang.NullPointerException
>     at org.apache.spark.rdd.HadoopRDD$.addLocalConfiguration(HadoopRDD.scala:373)
>     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:243)
>     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
>     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>     at org.apache.spark.scheduler.Task.run(Task.scala:86)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
> Suggestion: add a null check to avoid the NPE:
> {code}
> /** Add Hadoop configuration specific to a single partition and attempt. */
> def addLocalConfiguration(jobTrackerId: String, jobId: Int, splitId: Int,
>     attemptId: Int, conf: JobConf) {
>   val jobID = new JobID(jobTrackerId, jobId)
>   val taId = new TaskAttemptID(new TaskID(jobID, TaskType.MAP, splitId), attemptId)
>   if (conf != null) {
>     conf.set("mapred.tip.id", taId.getTaskID.toString)
>     conf.set("mapred.task.id", taId.toString)
>     conf.setBoolean("mapred.task.is.map", true)
>     conf.setInt("mapred.task.partition", splitId)
>     conf.set("mapred.job.id", jobID.toString)
>   }
> }
> {code}
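>
> For comparison, a sketch of attacking the root cause instead (loosely modeled on the {{getJobConf}} structure in {{HadoopRDD}}; {{createJobConf()}} here is a hypothetical helper standing in for the existing "create and cache a new JobConf" branch, and this is an illustration, not the committed fix): retrieve the cached value once, and fall back to building a fresh {{JobConf}} when GC has reclaimed it, so {{addLocalConfiguration}} never receives {{null}} in the first place.
> {code}
> // Sketch only: a single lookup pins the cached JobConf in a local
> // (strong) reference, so a GC between a contains-check and a get can
> // no longer hand us null.
> def getJobConf(): JobConf = {
>   val cachedJobConf = HadoopRDD.getCachedMetadata(jobConfCacheKey)
>   if (cachedJobConf != null) {
>     logDebug("Re-using cached JobConf")
>     cachedJobConf.asInstanceOf[JobConf]
>   } else {
>     // Hypothetical helper: rebuild and re-cache the JobConf.
>     createJobConf()
>   }
> }
> {code}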


