SPARK-5869 appears to describe the same exception and is marked as fixed in
1.3.0. I double-checked the CDH package to verify that it includes the patch:

https://github.com/cloudera/spark/blob/cdh5.4.4-release/core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala#L161

In my case, my YARN application fails right after submission, so the block
manager is never registered. This causes an NPE while cleaning up the local
directories.
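For reference, the failing check dereferences blockManagerId, which is only
assigned once the BlockManager registers with the driver. Below is a minimal
sketch of a null guard that would avoid the NPE, assuming the v1.3.0 code shape
linked above; this is illustrative only, not necessarily the actual upstream
patch:

    // Sketch only: DiskBlockManager.doStop(), around line 161 in v1.3.0.
    // blockManagerId is null when the BlockManager never registered, so
    // blockManagerId.isDriver throws an NPE inside the shutdown hook.
    private def doStop(): Unit = {
      val id = blockManager.blockManagerId
      // Guard the dereference: an unregistered manager skips the isDriver check.
      if (!blockManager.externalShuffleServiceEnabled || (id != null && id.isDriver)) {
        localDirs.foreach { localDir =>
          if (localDir.isDirectory && localDir.exists()) {
            try {
              Utils.deleteRecursively(localDir)
            } catch {
              case e: Exception =>
                logError(s"Exception while deleting local spark dir: $localDir", e)
            }
          }
        }
      }
    }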

On Mon, Aug 31, 2015 at 1:48 PM Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> Looks like you are hitting this:
> https://issues.apache.org/jira/browse/SPARK-5869
> Try updating your Spark version.
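> (If upgrading is not immediately an option: since the app is being submitted
> to a nonexistent 'default' queue, one likely workaround is to point the shell
> at an existing queue explicitly, e.g.
> 'spark-shell --master yarn-client --queue <your-queue>'.)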
>
> Thanks
> Best Regards
>
> On Tue, Sep 1, 2015 at 12:09 AM, nasokan <anithi...@gmail.com> wrote:
>
>> I'm currently using Spark 1.3.0 on a YARN cluster deployed through CDH 5.4.
>> My cluster does not have a 'default' queue, so launching 'spark-shell'
>> submits a YARN application that gets killed immediately because the queue
>> does not exist. However, the spark-shell session stays alive after throwing
>> a bunch of errors while creating the SQL context. Upon issuing an 'exit'
>> command, there is an NPE from DiskBlockManager with the following stack
>> trace:
>>
>> ERROR Utils: Uncaught exception in thread delete Spark local dirs
>> java.lang.NullPointerException
>>         at org.apache.spark.storage.DiskBlockManager.org$apache$spark$storage$DiskBlockManager$$doStop(DiskBlockManager.scala:161)
>>         at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply$mcV$sp(DiskBlockManager.scala:141)
>>         at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply(DiskBlockManager.scala:139)
>>         at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply(DiskBlockManager.scala:139)
>>         at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
>>         at org.apache.spark.storage.DiskBlockManager$$anon$1.run(DiskBlockManager.scala:139)
>> Exception in thread "delete Spark local dirs" java.lang.NullPointerException
>>         at org.apache.spark.storage.DiskBlockManager.org$apache$spark$storage$DiskBlockManager$$doStop(DiskBlockManager.scala:161)
>>         at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply$mcV$sp(DiskBlockManager.scala:141)
>>         at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply(DiskBlockManager.scala:139)
>>         at org.apache.spark.storage.DiskBlockManager$$anon$1$$anonfun$run$1.apply(DiskBlockManager.scala:139)
>>         at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
>>         at org.apache.spark.storage.DiskBlockManager$$anon$1.run(DiskBlockManager.scala:139)
>>
>> I believe the problem surfaces in a shutdown hook that tries to clean up the
>> local directories. In this specific case, because the YARN application was
>> not submitted successfully, the block manager was never registered; as a
>> result it does not have a valid blockManagerId, as seen here:
>>
>>
>> https://github.com/apache/spark/blob/v1.3.0/core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala#L161
>>
>> Has anyone faced this issue before? Could this be a problem with the way the
>> shutdown hook currently behaves?
>>
>> Note: I referenced source from the Apache Spark repo rather than Cloudera's.
>>
>
