I see. There is a bug in 1.4.1 where the thread pool does not set the daemon
flag on its threads (
https://github.com/apache/spark/commit/346209097e88fe79015359e40b49c32cc0bdc439#diff-25124e4f06a1da237bf486eceb1f7967L47
)
So in 1.4.1, even if your main thread exits, the threads in the thread pool
keep running, and the JVM does not shut down.
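To illustrate the difference: a minimal Java sketch of a daemon thread factory, similar in spirit to the fix in the commit above (Spark's actual change is in Scala; the class and method names here are hypothetical, only the JVM daemon-thread semantics are the point).

```java
import java.util.concurrent.*;

public class DaemonPoolDemo {
    // Hypothetical factory: marks every pool thread as daemon, so the
    // pool does not keep the JVM alive after the main thread exits.
    static ThreadFactory daemonFactory(String prefix) {
        return r -> {
            Thread t = new Thread(r, prefix);
            t.setDaemon(true); // daemon threads do not block JVM shutdown
            return t;
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool =
            Executors.newFixedThreadPool(1, daemonFactory("demo-daemon"));
        Future<?> f = pool.submit(() ->
            System.out.println("task ran on daemon thread"));
        f.get(); // wait for the task to finish
        // main returns WITHOUT pool.shutdown(). With daemon threads the JVM
        // still exits here; with non-daemon threads (the 1.4.1 behavior)
        // it would hang until the pool is shut down explicitly.
    }
}
```

This is why a driver on 1.4.1 can appear to survive errors: the non-daemon pool threads keep the JVM alive even after the main thread has returned.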
Thanks for your response.
One thing I noticed: with Spark 1.4.1, that kind of error won't cause the
driver to stop, whereas with Spark 1.5.2 it does, so I think there must have
been some change between the two versions. I found this code in Spark 1.5.2,
JobScheduler.scala : jobScheduler.reportError("Err