Hi,

I am running a long-running Spark application on YARN and am facing issues 
with Spark's history server when the events are written to HDFS. Event 
logging works fine for some time, but intermittently I see the following 
exception (the event-log configuration I am using is included at the end of 
this mail for reference).


2015-06-01 00:00:03,247 [SparkListenerBus] ERROR org.apache.spark.scheduler.LiveListenerBus - Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.spark.util.FileLogger$$anonfun$flush$2.apply(FileLogger.scala:203)
        at org.apache.spark.util.FileLogger$$anonfun$flush$2.apply(FileLogger.scala:203)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.util.FileLogger.flush(FileLogger.scala:203)
        at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:90)
        at org.apache.spark.scheduler.EventLoggingListener.onUnpersistRDD(EventLoggingListener.scala:121)
        at org.apache.spark.scheduler.SparkListenerBus$$anonfun$postToAll$11.apply(SparkListenerBus.scala:66)
        at org.apache.spark.scheduler.SparkListenerBus$$anonfun$postToAll$11.apply(SparkListenerBus.scala:66)
        at org.apache.spark.scheduler.SparkListenerBus$$anonfun$foreachListener$1.apply(SparkListenerBus.scala:83)
        at org.apache.spark.scheduler.SparkListenerBus$$anonfun$foreachListener$1.apply(SparkListenerBus.scala:81)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.SparkListenerBus$class.foreachListener(SparkListenerBus.scala:81)
        at org.apache.spark.scheduler.SparkListenerBus$class.postToAll(SparkListenerBus.scala:66)
        at org.apache.spark.scheduler.LiveListenerBus.postToAll(LiveListenerBus.scala:32)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:56)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:56)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:56)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply(LiveListenerBus.scala:47)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply(LiveListenerBus.scala:47)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1545)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:46)
Caused by: java.io.IOException: All datanodes 192.168.162.54:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1128)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:924)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:486)



After that, this error keeps recurring and Spark ends up in an unstable 
state where no job is able to make progress.

FYI:
HDFS was up and running both before and after this error. On restarting the 
application, it runs fine for some hours and then the same error comes back.
Enough disk space was available on each data node.
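
For reference, event logging is enabled roughly as follows (a minimal 
sketch of my setup; the application name and the HDFS event-log directory 
below are placeholders, not my actual values):

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch of the event-logging setup (names and paths are placeholders).
val conf = new SparkConf()
  .setAppName("my-long-running-app")                       // placeholder app name
  .set("spark.eventLog.enabled", "true")                   // write listener events for the history server
  .set("spark.eventLog.dir", "hdfs:///user/spark/events")  // placeholder HDFS directory served by the history server

val sc = new SparkContext(conf)

The history server points at the same directory, so the application keeps 
appending events to HDFS for as long as it runs.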

Any suggestions or help would be appreciated.

Regards
Pankaj
