[ https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979771#comment-16979771 ]

Yang Yun commented on HDFS-14993:
---------------------------------

The check in makeInstance is for the StorageLocation, not for the volume; it
only checks the base directory. If the directories of a BlockPoolSlice are set
to read-only, the disk error can't be detected.

For example, remove the owner's execute permission from the finalized
directory as below; the DataNode can then be restarted without any error.

chmod u-x /tmp/hadoop-yang/dfs/data/current/BP-1775391891-127.0.1.1-1574316846324/current/finalized
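
A minimal standalone sketch of the difference (assuming
org.apache.hadoop.util.DiskChecker from hadoop-common and the paths from the
example above; the class name is just illustrative): a check against the base
directory still passes, while the same check against the finalized directory
would fail after the chmod.

import java.io.File;
import org.apache.hadoop.util.DiskChecker;

public class BaseDirCheckSketch {
  public static void main(String[] args) throws Exception {
    File base = new File("/tmp/hadoop-yang/dfs/data");
    File finalized = new File(base,
        "current/BP-1775391891-127.0.1.1-1574316846324/current/finalized");

    // Passes: the base directory still has rwx for the owner,
    // which is all the StorageLocation-level check looks at.
    DiskChecker.checkDir(base);

    // Throws DiskErrorException after the chmod above, because
    // checkDir verifies read/write/execute on the directory itself.
    DiskChecker.checkDir(finalized);
  }
}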

> checkDiskError doesn't work during datanode startup
> ---------------------------------------------------
>
>                 Key: HDFS-14993
>                 URL: https://issues.apache.org/jira/browse/HDFS-14993
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Major
>         Attachments: HDFS-14993.patch, HDFS-14993.patch
>
>
> The function checkDiskError() is called before addBlockPool(), so the 
> bpSlices list is still empty at that point and check() in FsVolumeImpl.java 
> does nothing:
> @Override
> public VolumeCheckResult check(VolumeCheckContext ignored)
>     throws DiskErrorException {
>   // TODO:FEDERATION valid synchronization
>   for (BlockPoolSlice s : bpSlices.values()) {
>     s.checkDirs();
>   }
>   return VolumeCheckResult.HEALTHY;
> }
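
To make the ordering concrete, here is a hypothetical variant of check() that
would still validate the volume before addBlockPool() has populated bpSlices.
This is an illustration only, not the attached HDFS-14993.patch; it assumes
FsVolumeImpl's currentDir field and org.apache.hadoop.util.DiskChecker.

@Override
public VolumeCheckResult check(VolumeCheckContext ignored)
    throws DiskErrorException {
  if (bpSlices.isEmpty()) {
    // Hypothetical fallback: before addBlockPool() runs, at least
    // verify the volume's current directory is usable, so a bad
    // disk is caught during startup instead of silently passing.
    DiskChecker.checkDir(currentDir);
  }
  for (BlockPoolSlice s : bpSlices.values()) {
    s.checkDirs();
  }
  return VolumeCheckResult.HEALTHY;
}

Note that even with such a fallback, block pool subdirectories like finalized
would only be covered once addBlockPool() has registered the block pool, which
is exactly the gap the comment above demonstrates.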


