[ https://issues.apache.org/jira/browse/SPARK-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tao Wang closed SPARK-6933.
---------------------------
    Resolution: Duplicate

> Thrift Server couldn't strip .inprogress suffix after being stopped
> -------------------------------------------------------------------
>
>                 Key: SPARK-6933
>                 URL: https://issues.apache.org/jira/browse/SPARK-6933
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.3.0
>            Reporter: Tao Wang
>
> When I stop the Thrift server using stop-thriftserver.sh, the following exception is thrown:
>
> 15/04/15 21:48:53 INFO Utils: path = /tmp/spark-f05dd451-46a8-47d0-836b-a25004f87ed9/blockmgr-971f5b1c-33ed-4be6-ac63-2fbb739bc649, already present as root for deletion.
> 15/04/15 21:48:53 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
> java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
>         at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
>         at scala.Option.foreach(Option.scala:236)
>         at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
>         at org.apache.spark.scheduler.EventLoggingListener.onApplicationEnd(EventLoggingListener.scala:188)
>         at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:54)
>         at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
>         at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
>         at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:53)
>         at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:79)
>         at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1197)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
> Caused by: java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
>         at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:1985)
>         at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1946)
>         at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
>         ... 17 more
> 15/04/15 21:48:53 INFO ThriftCLIService: Thrift server has stopped
> 15/04/15 21:48:53 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.
> 15/04/15 21:48:53 INFO HiveMetaStore: 1: Shutting down the object store...
> 15/04/15 21:48:53 INFO audit: ugi=root ip=unknown-ip-addr cmd=Shutting down the object store...
> 15/04/15 21:48:53 INFO HiveMetaStore: 1: Metastore shutdown complete.
> 15/04/15 21:48:53 INFO audit: ugi=root ip=unknown-ip-addr cmd=Metastore shutdown complete.
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
> 15/04/15 21:48:53 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
> 15/04/15 21:48:53 INFO AbstractService: Service:OperationManager is stopped.
> 15/04/15 21:48:53 INFO AbstractService: Service:SessionManager is stopped.
> 15/04/15 21:48:53 INFO AbstractService: Service:CLIService is stopped.
> 15/04/15 21:48:53 INFO AbstractService: Service:HiveServer2 is stopped.
> 15/04/15 21:48:53 INFO SparkUI: Stopped Spark web UI at http://10.177.112.153:4040
> 15/04/15 21:48:53 INFO DAGScheduler: Stopping DAGScheduler
> 15/04/15 21:48:53 INFO YarnClientSchedulerBackend: Shutting down all executors
> 15/04/15 21:48:53 INFO YarnClientSchedulerBackend: Interrupting monitor thread
> 15/04/15 21:48:53 INFO YarnClientSchedulerBackend: Asking each executor to shut down
> 15/04/15 21:48:53 INFO YarnClientSchedulerBackend: Stopped
> Exception in thread "Thread-37" java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1986)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
>         at org.apache.spark.scheduler.EventLoggingListener.stop(EventLoggingListener.scala:209)
>         at org.apache.spark.SparkContext$$anonfun$stop$5.apply(SparkContext.scala:1423)
>         at org.apache.spark.SparkContext$$anonfun$stop$5.apply(SparkContext.scala:1423)
>         at scala.Option.foreach(Option.scala:236)
>         at org.apache.spark.SparkContext.stop(SparkContext.scala:1423)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.stop(SparkSQLEnv.scala:69)
>         at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$$anon$1.run(HiveThriftServer2.scala:64)
>
> It looks like the DFS client is closed before EventLoggingListener.logEvent or EventLoggingListener.stop is invoked. I tried to figure out why, but couldn't find a clue.
>
> Note: I tested in YARN mode (on branch master and 1.3.0); I haven't tried other cluster managers yet.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
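For context on the reporter's hypothesis: Hadoop's FileSystem cache registers its own JVM shutdown hook that closes cached clients, so if that hook fires before the stop thread ("Thread-37" above) reaches EventLoggingListener.stop, every subsequent hflush() or exists() call fails with "Filesystem closed" and the rename that would strip the .inprogress suffix never runs. A minimal, self-contained sketch of that close-before-use failure mode (all class names here are hypothetical stand-ins, not actual Spark or Hadoop code):

```java
import java.io.IOException;

// Stand-in for Hadoop's cached DFSClient: once close() is called, every later
// operation fails with IOException("Filesystem closed") -- the exact message
// seen in both stack traces above.
class SharedFsClient {
    private volatile boolean open = true;

    void close() { open = false; }

    private void checkOpen() throws IOException {
        if (!open) throw new IOException("Filesystem closed");
    }

    // Mirrors the FileSystem.exists() call that fails in EventLoggingListener.stop
    boolean exists(String path) throws IOException {
        checkOpen();
        return true; // pretend the .inprogress event log is present
    }
}

public class ShutdownRaceSketch {
    public static void main(String[] args) {
        SharedFsClient fs = new SharedFsClient();

        // The filesystem layer's shutdown hook closes the shared client first...
        fs.close();

        // ...so the later stop path cannot even check for the event log file,
        // and the .inprogress suffix is left in place.
        try {
            fs.exists("/spark-events/app-1.inprogress");
        } catch (IOException e) {
            System.out.println("Caught: " + e.getMessage()); // Caught: Filesystem closed
        }
    }
}
```

Because JVM shutdown hooks run in an unspecified order, whether the stop thread wins this race is timing-dependent, which matches the issue being reproducible on shutdown.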