[ https://issues.apache.org/jira/browse/SPARK-47383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-47383:
-------------------------------------

    Assignee: Rob Reeves

> Make the shutdown hook timeout configurable
> -------------------------------------------
>
>                 Key: SPARK-47383
>                 URL: https://issues.apache.org/jira/browse/SPARK-47383
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 4.0.0
>            Reporter: Rob Reeves
>            Assignee: Rob Reeves
>            Priority: Minor
>              Labels: pull-request-available
>
> org.apache.spark.util.ShutdownHookManager is used to register custom shutdown 
> operations, but its timeout is not easily configurable. The underlying 
> org.apache.hadoop.util.ShutdownHookManager has a default timeout of 30 
> seconds. That timeout can be set via hadoop.service.shutdown.timeout, but 
> only in core-site.xml/core-default.xml, because a new Hadoop conf object is 
> created internally and there is no opportunity to modify it.
> org.apache.hadoop.util.ShutdownHookManager provides an addShutdownHook 
> overload that accepts a custom timeout. Spark should use that overload and 
> allow a user-defined timeout.
> This is useful because we see timeouts during shutdown and want to give the 
> event queues extra time to drain, to avoid losing log data.



