Seems like there might be a mismatch between your Spark jars and your cluster's HDFS version. Make sure you're using a Spark build that matches the Hadoop version running on your cluster.
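If you want to check, something along these lines should work. This is a rough sketch, not an exact recipe: the version strings below are placeholders, and the exact Maven profile name depends on your Spark version (see the "Building Spark" docs), so substitute whatever your cluster and distribution actually report:

    # What Hadoop version is the cluster running?
    $ hadoop version
    Hadoop 2.5.0              <- example output; yours will differ

    # What Hadoop version was the Spark assembly built against? For a
    # prebuilt Spark 1.x distribution the assembly jar sits under lib/
    # and encodes the Hadoop version in its name.
    $ ls $SPARK_HOME/lib/spark-assembly-*.jar
    spark-assembly-1.3.0-hadoop2.4.0.jar

    # If they differ, grab the prebuilt package for your Hadoop version,
    # or rebuild from the Spark source tree against the cluster's Hadoop:
    $ ./make-distribution.sh --tgz -Phadoop-2.4 -Dhadoop.version=2.5.0

Separately, the deprecation warning in your log suggests setting the log directory through spark-defaults.conf rather than on the command line, e.g. (using the path that appears in your own logs):

    spark.history.fs.logDirectory  hdfs://my-hadoop/user/spark/applicationHistory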
On Thu, May 21, 2015 at 8:48 AM, roy <rp...@njit.edu> wrote:
> Hi,
>
> After restarting the Spark HistoryServer, it failed to come up. I checked
> the HistoryServer logs and found the following messages:
>
> 2015-05-21 11:38:03,790 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files.
> 2015-05-21 11:38:52,319 INFO org.apache.spark.deploy.history.HistoryServer: Registered signal handlers for [TERM, HUP, INT]
> 2015-05-21 11:38:52,328 WARN org.apache.spark.deploy.history.HistoryServerArguments: Setting log directory through the command line is deprecated as of Spark 1.1.0. Please set this through spark.history.fs.logDirectory instead.
> 2015-05-21 11:38:52,461 INFO org.apache.spark.SecurityManager: Changing view acls to: spark
> 2015-05-21 11:38:52,462 INFO org.apache.spark.SecurityManager: Changing modify acls to: spark
> 2015-05-21 11:38:52,463 INFO org.apache.spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
> 2015-05-21 11:41:24,893 ERROR org.apache.spark.deploy.history.HistoryServer: RECEIVED SIGNAL 15: SIGTERM
> 2015-05-21 11:41:33,439 INFO org.apache.spark.deploy.history.HistoryServer: Registered signal handlers for [TERM, HUP, INT]
> 2015-05-21 11:41:33,447 WARN org.apache.spark.deploy.history.HistoryServerArguments: Setting log directory through the command line is deprecated as of Spark 1.1.0. Please set this through spark.history.fs.logDirectory instead.
> 2015-05-21 11:41:33,578 INFO org.apache.spark.SecurityManager: Changing view acls to: spark
> 2015-05-21 11:41:33,579 INFO org.apache.spark.SecurityManager: Changing modify acls to: spark
> 2015-05-21 11:41:33,579 INFO org.apache.spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
> 2015-05-21 11:44:07,147 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.io.EOFException: Premature EOF: no length prefix available
>         at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2109)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:408)
>         at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
>         at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
>         at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
>         at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574)
>         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>         at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
>         at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
>         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
>         at java.io.InputStreamReader.read(InputStreamReader.java:184)
>         at java.io.BufferedReader.fill(BufferedReader.java:154)
>         at java.io.BufferedReader.readLine(BufferedReader.java:317)
>         at java.io.BufferedReader.readLine(BufferedReader.java:382)
>         at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:67)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>         at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:69)
>         at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:55)
>         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
>         at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:55)
>         at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:175)
>         at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172)
>         at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
>         at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
>         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
>         at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
>         at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
>         at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172)
>         at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108)
>         at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184)
>         at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
> 2015-05-21 11:44:07,151 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.1.1.253:50010 for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available
> java.io.EOFException: Premature EOF: no length prefix available
>         [same stack trace as above]
> 2015-05-21 11:44:07,161 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /10.1.1.190:50010 for BP-1877157801-10.1.1.42-1366756660926:blk_1104141398_1099642456200
> 2015-05-21 11:44:19,946 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.io.EOFException: Premature EOF: no length prefix available
>         [same stack trace as above]
> 2015-05-21 11:44:19,947 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.1.1.253:50010 for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available
> java.io.EOFException: Premature EOF: no length prefix available
>         [same stack trace as above]
> 2015-05-21 11:44:19,950 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /10.1.1.35:50010 for BP-1877157801-10.1.1.42-1366756660926:blk_1104192564_1099642507371
>
> Also:
>
> 2015-05-21 11:38:03,789 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files.
> 2015-05-21 11:38:03,789 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files.
> 2015-05-21 11:38:03,789 ERROR org.apache.spark.deploy.history.FsHistoryProvider: Exception in accessing modification time of hdfs://my-hadoop/user/spark/applicationHistory/application_1431044565372_11706
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:765)
>         at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1900)
>         at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1885)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712)
>         at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$getModificationTime(FsHistoryProvider.scala:236)
>         at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:182)
>         at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172)
>         at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
>         at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
>         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
>         at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
>         at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
>         at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172)
>         at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108)
>         at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184)
>         at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
> 2015-05-21 11:38:03,790 ERROR org.apache.spark.scheduler.EventLoggingListener: Exception in parsing logging info from directory hdfs://my-hadoop/user/spark/applicationHistory/application_1431044565372_12048
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:765)
>         at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1900)
>         at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1885)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712)
>         at org.apache.spark.scheduler.EventLoggingListener$.parseLoggingInfo(EventLoggingListener.scala:199)
>         at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$createReplayBus(FsHistoryProvider.scala:226)
>         at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:174)
>         at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172)
>         [remaining frames identical to the trace above]
>
> Any idea what's going on here?
>
> Thanks

-- 
Marcelo