[ https://issues.apache.org/jira/browse/SPARK-34674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17305419#comment-17305419 ]
Sergey Kotlov commented on SPARK-34674:
---------------------------------------

Thanks for the response, [~dongjoon]

I haven't tried the previous versions of Spark on Kubernetes. My company currently runs Spark on AWS EMR (which uses YARN), and we have never needed to call spark.stop() there. As far as I know, Spark uses a ShutdownHook to stop the SparkContext before the JVM exits anyway. But in my example, if I understand correctly, these non-daemon threads prevent the JVM process from exiting even after the application's main method has completed. That is why I think it can be considered a bug.

P.S. I have to migrate a large number of existing Spark batch jobs (owned by different people across the company) from EMR to K8s, and for now it is desirable to keep the code of these jobs unchanged.

> Spark app on k8s doesn't terminate without call to sparkContext.stop() method
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-34674
>                 URL: https://issues.apache.org/jira/browse/SPARK-34674
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes
>    Affects Versions: 3.1.1
>            Reporter: Sergey Kotlov
>            Priority: Major
>
> Hello!
> I have run into a problem: if I don't call sparkContext.stop() explicitly, the Spark driver process doesn't terminate even after its main method has completed. This behaviour differs from Spark on YARN, where stopping the SparkContext manually is not required.
> It looks like the problem is caused by non-daemon threads, which prevent the driver JVM process from terminating.
> At least I see two non-daemon threads if I don't call sparkContext.stop():
> {code:java}
> Thread[OkHttp kubernetes.default.svc,5,main]
> Thread[OkHttp kubernetes.default.svc Writer,5,main]
> {code}
> Could you please tell me if it is possible to solve this problem?
> The Docker image from the official spark-3.1.1 hadoop3.2 release is used.
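For reference, a minimal sketch of the explicit-stop workaround being discussed, i.e. stopping the SparkSession in a finally block so the driver JVM can exit on K8s. The object name and job body are hypothetical, not from the original report.

{code:scala}
import org.apache.spark.sql.SparkSession

object ExampleBatchJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("example-batch-job")
      .getOrCreate()
    try {
      // The actual job logic would go here; a trivial action is used as a stand-in.
      spark.range(0, 1000).count()
    } finally {
      // Without this call, non-daemon threads (e.g. the OkHttp threads from the
      // Kubernetes client) can keep the driver JVM alive after main() returns.
      spark.stop()
    }
  }
}
{code}

This is the kind of per-job code change the comment above hopes to avoid when migrating many existing jobs from EMR to K8s.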