[ https://issues.apache.org/jira/browse/HDFS-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16410549#comment-16410549 ]

Ajay Kumar commented on HDFS-13277:
-----------------------------------

[~bharatviswa] thanks for working on this. Patch 2 looks good. Some suggestions:
 * FsDatasetAsyncDiskService L346: Do we need this else-if check, since the
previous else branch already checks for equality?
 * Add javadoc for ReplicaTrashInfo and FsDatasetAsyncDiskService#replicaTrashInfoMap.
 * Rename ReplicaTrashInfo to ReplicaTrashCurDirInfo?
 * FsDatasetAsyncDiskService L102: rename replicaTrashSubDirMax to
replicaTrashSubDirMaxBlocks or replicaTrashSubDirMaxFiles? (See the sketch after
this list for the rollover logic these names refer to.)
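For reference, a minimal sketch (not taken from the patch) of the per-subdirectory rollover these naming suggestions refer to: track the current replica-trash subdirectory and how many files have been moved into it, and roll over to a new subdirectory once the configured maximum is reached. The class name ReplicaTrashCurDirInfo, the field replicaTrashSubDirMaxBlocks, and the "subdirN" layout are assumptions for illustration; per the issue description, the default for the max would come from blockInvalidateLimit.

{code:java}
import java.io.File;
import java.io.IOException;

// Illustrative sketch only; names and directory layout are assumed, not from the patch.
class ReplicaTrashCurDirInfo {
  private final File trashRoot;        // <volume>/replica-trash (assumed layout)
  private final int subDirMaxBlocks;   // e.g. defaulted to blockInvalidateLimit
  private int subDirIndex = 0;         // index of the current subdirectory
  private int filesInCurSubDir = 0;    // files moved into the current subdirectory

  ReplicaTrashCurDirInfo(File trashRoot, int subDirMaxBlocks) {
    this.trashRoot = trashRoot;
    this.subDirMaxBlocks = subDirMaxBlocks;
  }

  /** Returns the subdirectory the next deleted block file should be moved into. */
  synchronized File getCurrentSubDir() throws IOException {
    if (filesInCurSubDir >= subDirMaxBlocks) {
      subDirIndex++;          // current subdir is full, roll over to a new one
      filesInCurSubDir = 0;
    }
    File subDir = new File(trashRoot, "subdir" + subDirIndex);
    if (!subDir.exists() && !subDir.mkdirs()) {
      throw new IOException("Cannot create replica-trash subdir " + subDir);
    }
    filesInCurSubDir++;
    return subDir;
  }
}
{code}

Counting files per subdirectory (rather than bytes) matches the issue description: the limit caps the number of entries in each replica-trash subdirectory, which is why a blocks/files-oriented name for the config field seems clearer.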

> Improve move to account for usage (number of files) to limit trash dir size
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-13277
>                 URL: https://issues.apache.org/jira/browse/HDFS-13277
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>         Attachments: HDFS-13277-HDFS-12996.00.patch, 
> HDFS-13277-HDFS-12996.01.patch, HDFS-13277-HDFS-12996.02.patch
>
>
> Add a maximum-entries limit for trash subdirectories. This puts an upper limit 
> on the size of subdirectories in replica-trash. Set its default value to 
> blockInvalidateLimit.
>  


