Good catch, yes, it's the scaladoc of addShutdownHook that is wrong.
It says that lower priority executes first.
The implementation does the opposite. It uses a min-queue of shutdown
hooks, but inverts the ordering so that higher priority values execute
first.
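A minimal stand-alone sketch of that inversion (hypothetical code, not Spark's actual ShutdownHookManager; the `Hook` class and priority values are illustrative): `java.util.PriorityQueue` is a min-queue by natural ordering, so inverting `compareTo` makes the "smallest" element the one with the highest priority value.

```scala
import java.util.PriorityQueue
import scala.collection.mutable.ArrayBuffer

// Hypothetical stand-in for a shutdown hook: compareTo is inverted, so
// the min-queue's "smallest" element is the hook with the *highest*
// priority value.
class Hook(val priority: Int, val task: () => Unit) extends Comparable[Hook] {
  override def compareTo(other: Hook): Int = other.priority - priority
}

object Demo {
  def run(): List[Int] = {
    val order = ArrayBuffer.empty[Int]
    val hooks = new PriorityQueue[Hook]() // min-queue by natural ordering

    hooks.add(new Hook(25, () => order += 25))
    hooks.add(new Hook(50, () => order += 50))
    hooks.add(new Hook(49, () => order += 49))

    // Draining the min-queue runs higher priority values first.
    while (!hooks.isEmpty) hooks.poll().task()
    order.toList
  }

  def main(args: Array[String]): Unit =
    println(run().mkString(", ")) // prints "50, 49, 25"
}
```

With these semantics, a hook registered at priority 49 runs after one at 50, which is consistent with the quoted message below: the ApplicationMaster hook is registered just below SparkContext's priority so that it fires after the SparkContext shutdown.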
The constants and comments in
Hi,
When ApplicationMaster runs, it registers a shutdown hook [1] that
(quoting the comment [2] from the code):
> // This shutdown hook should run *after* the SparkContext is shut down.
And so it gets a priority lower than SparkContext's [3], i.e.
val priority = ShutdownHookManager.SPARK_CONTEXT_SHU