[ https://issues.apache.org/jira/browse/HDFS-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14179032#comment-14179032 ]

Xiaoyu Yao commented on HDFS-6988:
----------------------------------

Thanks [~cmccabe] for the feedback. I'm picking up the comments from Arpit and 
you and have started working on them.
I would appreciate it if you could confirm the updated eviction threshold 
settings below.

1. Keep the dfs.datanode.ram.disk.low.watermark.percent key but change its 
type from int to float, with a default value of 0.1f for 
DFS_DATANODE_RAM_DISK_LOW_WATERMARK_PERCENT_DEFAULT.

2. Replace dfs.datanode.ram.disk.low.watermark.replicas with 
dfs.datanode.ram.disk.low.watermark.bytes. If unspecified, its default value 
will be Min(DFS_BLOCK_SIZE_DEFAULT (128 MB), 
DFS_DATANODE_RAM_DISK_LOW_WATERMARK_PERCENT_DEFAULT * totalRamDiskSize).

3. The final eviction threshold, expressed as free RAM disk space in bytes, 
will be the minimum of the thresholds from 1 and 2, as sketched below.
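
To make the combination concrete, here is a minimal sketch of how 1-3 could 
fit together. The class and method names are hypothetical (not from the 
patch), and it assumes the "percent" value is applied as a fraction (0.1f == 
10%), as implied by the formula in 2:

{code:java}
/**
 * Sketch only, not the actual HDFS-6988 patch. Illustrates the proposed
 * eviction threshold computation; all names here are hypothetical.
 */
public class EvictionThresholdSketch {
  static final long DFS_BLOCK_SIZE_DEFAULT = 128L * 1024 * 1024; // 128 MB
  static final float LOW_WATERMARK_PERCENT_DEFAULT = 0.1f;       // item 1

  /** Default for dfs.datanode.ram.disk.low.watermark.bytes (item 2). */
  static long defaultLowWatermarkBytes(long totalRamDiskSize) {
    return Math.min(DFS_BLOCK_SIZE_DEFAULT,
        (long) (LOW_WATERMARK_PERCENT_DEFAULT * (double) totalRamDiskSize));
  }

  /**
   * Final eviction threshold (item 3): the minimum of the percent-based
   * and byte-based free-space thresholds.
   */
  static long evictionThresholdBytes(float watermarkPercent,
                                     long watermarkBytes,
                                     long totalRamDiskSize) {
    long percentBytes = (long) (watermarkPercent * (double) totalRamDiskSize);
    return Math.min(percentBytes, watermarkBytes);
  }

  public static void main(String[] args) {
    long totalRamDisk = 4L * 1024 * 1024 * 1024; // e.g. a 4 GB RAM disk
    long bytesDefault = defaultLowWatermarkBytes(totalRamDisk);
    System.out.println("Default watermark bytes: " + bytesDefault);
    System.out.println("Eviction threshold: "
        + evictionThresholdBytes(LOW_WATERMARK_PERCENT_DEFAULT,
                                 bytesDefault, totalRamDisk));
  }
}
{code}

With a 4 GB RAM disk, the percent-based value is ~410 MB, so the 128 MB 
block-size default wins and eviction would kick in when free space drops 
below 128 MB.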


> Add configurable limit for percentage-based eviction threshold
> --------------------------------------------------------------
>
>                 Key: HDFS-6988
>                 URL: https://issues.apache.org/jira/browse/HDFS-6988
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: HDFS-6581
>            Reporter: Arpit Agarwal
>            Assignee: Xiaoyu Yao
>             Fix For: HDFS-6581
>
>         Attachments: HDFS-6988.01.patch, HDFS-6988.02.patch
>
>
> Per feedback from [~cmccabe] on HDFS-6930, we can make the eviction 
> thresholds configurable. The hard-coded thresholds may not be appropriate for 
> very large RAM disks.



