[ https://issues.apache.org/jira/browse/HDFS-11945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16041561#comment-16041561 ]

Mingliang Liu commented on HDFS-11945:
--------------------------------------

I'm +1 on the patch.

Minor comments:
# The {{internalLeaseHolder}} value should be concatenated with "_" instead of a space
# The last test statement:
{code}
assertFalse(holder.equals(lm.getInternalLeaseHolder()));
{code}
Better to use the following (a fuller sketch follows after this list):
{code}
assertNotEquals("some meaningful message", holder, lm.getInternalLeaseHolder());
{code}
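
For reference, a minimal sketch of how that assertion could look with a concrete message (the {{lm}} and {{holder}} names are the ones already used in the test snippet above; the message text is only an illustration):
{code}
import static org.junit.Assert.assertNotEquals;

// assertNotEquals reports the message together with the unexpected value on
// failure, which is more useful than assertFalse's bare "expected false".
assertNotEquals("Internal lease holder should change after reassignment",
    holder, lm.getInternalLeaseHolder());
{code}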

> Internal lease recovery may not be retried for a long time
> ----------------------------------------------------------
>
>                 Key: HDFS-11945
>                 URL: https://issues.apache.org/jira/browse/HDFS-11945
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>         Attachments: HDFS-11945.trunk.patch
>
>
> A lease is assigned per client, identified by its holder ID or client ID, 
> so a renewal or an expiration of a lease affects all files being written by 
> that client.
> When a client/writer dies without closing a file, its lease expires in one 
> hour (the hard limit) and the namenode tries to recover the lease. As part 
> of that process, the namenode takes ownership of the lease and renews it. If 
> the recovery does not finish successfully, the lease will expire in one hour 
> and the namenode will try to recover it again.
> However, if the file system has another lease expiring within that hour, the 
> recovery attempt for that lease will push forward the expiration of the lease 
> held by the namenode. This causes failed lease recoveries not to be retried 
> for a long time. We have seen this go on for days.
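
To illustrate the failure mode described above, here is a minimal, simplified sketch (not the actual HDFS {{LeaseManager}} code; the class, field, and holder string are only illustrative): because leases are keyed by holder rather than by file, every file under recovery shares the single internal namenode holder, so renewing that lease for a new recovery attempt also pushes out the hard-limit expiration for every earlier, failed recovery.
{code}
// Simplified model only -- not the actual org.apache.hadoop.hdfs LeaseManager.
import java.util.HashMap;
import java.util.Map;

class ToyLeaseManager {
  static final long HARD_LIMIT_MS = 60L * 60 * 1000;     // one-hour hard limit
  static final String INTERNAL_HOLDER = "HDFS_NameNode"; // single shared holder

  // One lease per holder; each lease covers every path that holder is writing.
  private final Map<String, Long> lastRenewed = new HashMap<>();

  void reassignToInternalHolder(String path, long now) {
    // The path does not matter here: recovering any file renews the one
    // internal lease, so files whose earlier recovery failed no longer look
    // expired, and their retry is deferred until no other lease has been
    // reassigned for a full hour.
    lastRenewed.put(INTERNAL_HOLDER, now);
  }

  boolean expired(String holder, long now) {
    Long t = lastRenewed.get(holder);
    return t != null && now - t > HARD_LIMIT_MS;
  }
}
{code}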



