[ https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025322#comment-17025322 ]

Ayush Saxena commented on HDFS-14993:
-------------------------------------

Committed to trunk.

Thanks [~hadoop_yangyun] for the contribution, and [~sodonnell] and [~weichiu] for
the reviews!

> checkDiskError doesn't work during datanode startup
> ---------------------------------------------------
>
>                 Key: HDFS-14993
>                 URL: https://issues.apache.org/jira/browse/HDFS-14993
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Major
>         Attachments: HDFS-14993.patch, HDFS-14993.patch, HDFS-14993.patch
>
>
> The function checkDiskError() is called before addBlockPool(), so the
> bpSlices list is still empty at that point and check() in FsVolumeImpl.java
> does nothing:
> @Override
> public VolumeCheckResult check(VolumeCheckContext ignored)
>     throws DiskErrorException {
>   // TODO:FEDERATION valid synchronization
>   for (BlockPoolSlice s : bpSlices.values()) {
>     s.checkDirs();
>   }
>   return VolumeCheckResult.HEALTHY;
> }
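
For illustration only, below is a minimal standalone Java sketch of the ordering
problem described in the issue. It is not the Hadoop source: Volume and
BlockPoolSlice here are simplified stand-ins for FsVolumeImpl and the real
BlockPoolSlice, and the startup order in main() mirrors the report (disk check
before addBlockPool), so the first check() returns HEALTHY without inspecting
anything.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class VolumeCheckSketch {

    enum VolumeCheckResult { HEALTHY, FAILED }

    /** Simplified stand-in for a BlockPoolSlice that can check its directories. */
    static class BlockPoolSlice {
        private final String dir;
        BlockPoolSlice(String dir) { this.dir = dir; }
        void checkDirs() {
            System.out.println("checking " + dir);
        }
    }

    /** Simplified stand-in for FsVolumeImpl. */
    static class Volume {
        private final Map<String, BlockPoolSlice> bpSlices = new ConcurrentHashMap<>();

        void addBlockPool(String bpid, String dir) {
            bpSlices.put(bpid, new BlockPoolSlice(dir));
        }

        VolumeCheckResult check() {
            // When bpSlices is empty this loop body never runs, so the volume
            // is reported HEALTHY without any real inspection.
            for (BlockPoolSlice s : bpSlices.values()) {
                s.checkDirs();
            }
            return VolumeCheckResult.HEALTHY;
        }
    }

    public static void main(String[] args) {
        Volume v = new Volume();

        // Startup order from the report: the disk check runs first...
        System.out.println("check before addBlockPool: " + v.check()); // HEALTHY, nothing checked

        // ...and the block pool is only registered afterwards.
        v.addBlockPool("BP-1", "/data/dn/current/BP-1");
        System.out.println("check after addBlockPool:  " + v.check()); // now actually checks the dir
    }
}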



