[ 
https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16199303#comment-16199303
 ] 

Chen Liang commented on HDFS-12626:
-----------------------------------

There are definitely different ways to handle this, but the simplest solution 
I can think of is a thread that periodically scans all the open key entries: 
any entry that has been there longer than some threshold, say X hours, is 
treated as a dead entry and removed. The tricky part is choosing X. If X is 
too small, the client might still be writing after X hours, so X should be 
longer than the time any single key write could take. I'm thinking of 
something like X = 24 hours, because I don't see a use case where a single 
key write would take that long (maybe I'm wrong). Also, since client crashes 
should be relatively rare, the number of dead entries shouldn't be large, so 
it should be fine to reclaim them only once a day.
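To make the idea concrete, here is a minimal sketch of such a sweeper. Note this is purely illustrative: the class name, the in-memory map, and the method names are hypothetical and do not reflect the actual KSM metadata structures, which would be backed by the key-value store rather than a Java map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical sketch of a periodic open-key sweeper. Entries older
 * than a configurable expiry (the "X hours" above) are treated as
 * dead and removed.
 */
public class OpenKeyCleanupService {
  // key name -> creation time in millis (stand-in for the real open key table)
  private final Map<String, Long> openKeys = new ConcurrentHashMap<>();
  private final long expiryMillis;
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public OpenKeyCleanupService(long expiryMillis) {
    this.expiryMillis = expiryMillis;
  }

  public void trackOpenKey(String key, long createdAtMillis) {
    openKeys.put(key, createdAtMillis);
  }

  /** Remove entries older than the expiry threshold; returns the count removed. */
  public int sweep(long nowMillis) {
    int removed = 0;
    for (Map.Entry<String, Long> e : openKeys.entrySet()) {
      if (nowMillis - e.getValue() > expiryMillis) {
        // remove(key, value) avoids racing with a concurrent re-open
        if (openKeys.remove(e.getKey(), e.getValue())) {
          removed++;
        }
      }
    }
    return removed;
  }

  /** Run the sweep once per day, matching the X = 24h suggestion. */
  public void start() {
    scheduler.scheduleAtFixedRate(
        () -> sweep(System.currentTimeMillis()), 24, 24, TimeUnit.HOURS);
  }
}
```

The same scheduled-executor pattern is already used elsewhere in HDFS for background housekeeping, so wiring this into the KSM should be straightforward once the expiry value is settled.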

Any thoughts? [~xyao] [~anu]

> Ozone : delete open key entries that will no longer be closed
> -------------------------------------------------------------
>
>                 Key: HDFS-12626
>                 URL: https://issues.apache.org/jira/browse/HDFS-12626
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Chen Liang
>            Assignee: Chen Liang
>
> HDFS-12543 introduced the notion of an "open key": when a key is opened, an 
> open key entry gets persisted, and only after the client calls close will 
> this entry be made visible. One issue is that if the client never calls 
> close (e.g. because it failed), that open key entry will never be deleted 
> from metadata. This JIRA tracks that issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
