[ https://issues.apache.org/jira/browse/HADOOP-2062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12535309 ]

Doug Cutting commented on HADOOP-2062:
--------------------------------------

> we could probably hijack org.apache.hadoop.util.Daemon since it isn't used 
> anywhere

It does not appear to be used in Nutch, but it is used extensively in the dfs 
package.  To be safe, we might deprecate it and replace it with DaemonThread.
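A hypothetical sketch of that migration path: keep the old org.apache.hadoop.util.Daemon but mark it deprecated, pointing callers at the new DaemonThread. The class bodies below are placeholders for illustration, not actual Hadoop code.

```java
public class DeprecationSketch {
  /** @deprecated replaced by {@link DaemonThread}; kept so existing dfs code still compiles. */
  @Deprecated
  public static class Daemon extends Thread {
    { setDaemon(true); }                 // preserve the original always-daemon behavior
  }

  /** Proposed replacement with the same always-daemon construction. */
  public static class DaemonThread extends Thread {
    { setDaemon(true); }
  }

  public static void main(String[] args) {
    // Both old and new classes construct daemon threads.
    System.out.println(new Daemon().isDaemon() && new DaemonThread().isDaemon());
  }
}
```

Existing callers keep compiling (with a deprecation warning) while new code moves over.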

> Standardize long-running, daemon-like, threads in hadoop daemons
> ----------------------------------------------------------------
>
>                 Key: HADOOP-2062
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2062
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs, mapred
>            Reporter: Arun C Murthy
>            Assignee: Arun C Murthy
>             Fix For: 0.16.0
>
>
> There are several long-running, independent threads in hadoop daemons 
> (at least in the JobTracker - e.g. ExpireLaunchingTasks, ExpireTrackers, 
> TaskCommitQueue etc.) which need to stay alive as long as the daemon itself and 
> hence should be impervious to various errors and exceptions (e.g. 
> HADOOP-2051). 
> Currently, each of them seems to be hand-crafted (again, specifically in the 
> JobTracker) and different from the others.
> I propose we standardize on an implementation of a long-running, impervious 
> daemon thread which can be used all over the shop. That thread should be 
> explicitly shut down by the hadoop daemon and shouldn't be vulnerable to any 
> exceptions/errors.
> This will most likely look like this:
> {noformat}
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
> import org.apache.hadoop.util.StringUtils;
>
> public abstract class DaemonThread extends Thread {
>   public static final Log LOG = LogFactory.getLog(DaemonThread.class);
>   {
>     setDaemon(true);                              // always a daemon
>   }
>
>   /** One iteration of the thread's work; called repeatedly until interrupted. */
>   public abstract void innerLoop() throws InterruptedException;
>   
>   public final void run() {
>     while (!isInterrupted()) {
>       try {
>         innerLoop();
>       } catch (InterruptedException ie) {
>         LOG.warn(getName() + " interrupted, exiting...");
>         break;       // the exception cleared the interrupt status; exit explicitly
>       } catch (Throwable t) {
>         LOG.error(getName() + " got an exception: " + 
>                   StringUtils.stringifyException(t));
>       }
>     }
>   }
> }
> {noformat}
> In fact, we could probably hijack org.apache.hadoop.util.Daemon since it 
> isn't used anywhere (Doug, is it still used in Nutch?) or at least sub-class 
> it.
> Thoughts? Could someone from hdfs/hbase chime in?
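A self-contained sketch of the proposal above, with a hypothetical subclass: java.util.logging stands in for commons-logging so the example compiles on its own, and the ExpiryThread loop body (a periodic housekeeping sweep) is invented for illustration.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Logger;

public class DaemonThreadSketch {
  public abstract static class DaemonThread extends Thread {
    protected static final Logger LOG = Logger.getLogger(DaemonThread.class.getName());
    { setDaemon(true); }                       // always a daemon

    /** One iteration of the thread's work; called repeatedly until interrupted. */
    public abstract void innerLoop() throws InterruptedException;

    @Override
    public final void run() {
      while (!isInterrupted()) {
        try {
          innerLoop();
        } catch (InterruptedException ie) {
          LOG.warning(getName() + " interrupted, exiting...");
          break;                               // interrupt status was cleared; exit explicitly
        } catch (Throwable t) {
          LOG.severe(getName() + " got an exception: " + t);
        }
      }
    }
  }

  // Hypothetical long-running loop: wakes periodically and does housekeeping,
  // in the spirit of ExpireLaunchingTasks/ExpireTrackers.
  static class ExpiryThread extends DaemonThread {
    final AtomicInteger sweeps = new AtomicInteger();

    @Override
    public void innerLoop() throws InterruptedException {
      Thread.sleep(10);                        // wait for the next sweep interval
      sweeps.incrementAndGet();                // stand-in for expiring stale entries
    }
  }

  public static void main(String[] args) throws Exception {
    ExpiryThread t = new ExpiryThread();
    t.start();
    Thread.sleep(100);                         // let it run a few sweeps
    t.interrupt();                             // explicit shutdown by the hadoop daemon
    t.join(1000);
    System.out.println("swept=" + (t.sweeps.get() > 0) + " alive=" + t.isAlive());
  }
}
```

Note the explicit break in the InterruptedException handler: without it the loop would spin forever, because catching the exception clears the thread's interrupt status and `isInterrupted()` would return false again.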

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.