[ https://issues.apache.org/jira/browse/HDFS-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366196#comment-14366196 ]

Lei (Eddy) Xu commented on HDFS-7833:
-------------------------------------

Hi, [~cnauroth], thanks a lot for reporting this issue. I am trying to 
understand this bug. It seems to me that if we follow the same old logic, we 
would also need to update {{FsDatasetImpl#validVolsRequired}} when a 
directory is removed _intentionally_, but not when the {{FsVolumeImpl}} is 
removed due to {{checkDirs}}. As an alternative, since failed volumes are now 
kept in {{FsDatasetImpl#volumeFailureInfos}}, can we just let 
{{FsDatasetImpl#hasEnoughResource()}} become:

{code}
public boolean hasEnoughResource() {
  // The DataNode still has enough resources as long as the number of
  // failed volumes does not exceed the configured tolerance.
  return volumeFailureInfos.size() <= volFailuresTolerated;
}
{code}
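
To make the semantics concrete: with 4 configured data directories and 
{{dfs.datanode.failed.volumes.tolerated}} set to 1, {{validVolsRequired}} is 
computed once at startup as 4 - 1 = 3, so if the node is later reconfigured 
down to 2 volumes, the stale requirement of 3 can never be met. Below is a 
minimal, self-contained sketch of the idea (not the actual {{FsDatasetImpl}} 
code; the map type and method names here are simplified assumptions) showing 
why tracking failures directly avoids any stale derived value:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for FsDatasetImpl's volume-failure bookkeeping.
class VolumeFailureTracker {
  // Hypothetical analog of FsDatasetImpl#volumeFailureInfos: one entry
  // per failed storage location, added when a disk check detects a failure.
  private final Map<String, Long> volumeFailureInfos = new ConcurrentHashMap<>();
  private final int volFailuresTolerated;

  VolumeFailureTracker(int volFailuresTolerated) {
    this.volFailuresTolerated = volFailuresTolerated;
  }

  // Called when checkDirs() removes a volume because the disk failed.
  void recordFailure(String storageLocation) {
    volumeFailureInfos.put(storageLocation, System.currentTimeMillis());
  }

  // No derived validVolsRequired to keep in sync: the check depends only
  // on the failure count, so adding or removing healthy volumes through
  // reconfiguration cannot make it stale.
  boolean hasEnoughResource() {
    return volumeFailureInfos.size() <= volFailuresTolerated;
  }

  public static void main(String[] args) {
    VolumeFailureTracker tracker = new VolumeFailureTracker(1);
    tracker.recordFailure("/data/1");
    System.out.println(tracker.hasEnoughResource()); // true: 1 failure tolerated
    tracker.recordFailure("/data/2");
    System.out.println(tracker.hasEnoughResource()); // false: 2 failures > 1
  }
}
{code}

One caveat: a volume removed _intentionally_ via reconfiguration should not 
count against the tolerance, so that path would simply not record a failure.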

Does it make sense to you, [~cnauroth]?

> DataNode reconfiguration does not recalculate valid volumes required, based 
> on configured failed volumes tolerated.
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7833
>                 URL: https://issues.apache.org/jira/browse/HDFS-7833
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Chris Nauroth
>            Assignee: Lei (Eddy) Xu
>
> DataNode reconfiguration never recalculates 
> {{FsDatasetImpl#validVolsRequired}}.  This may cause incorrect behavior of 
> the {{dfs.datanode.failed.volumes.tolerated}} property if reconfiguration 
> causes the DataNode to run with a different total number of volumes.


