[
https://issues.apache.org/jira/browse/HDFS-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101973#comment-15101973
]
Colin Patrick McCabe commented on HDFS-7764:
--------------------------------------------
Thanks for working on this, [~rakeshr].
{code}
@@ -737,8 +739,7 @@ private void addDifference(LinkedList<ScanInfo> diffRecord,
        }
      } catch (Exception ex) {
        LOG.error("Error compiling report", ex);
-       // Propagate ex to DataBlockScanner to deal with
-       throw new RuntimeException(ex);
+       // Ignore this exception and continue scanning the other directories
      }
    }
{code}
Hmm. I think we should print the storageID of the volume that had a problem.
Also, I'm not sure the comment makes sense, since we're not "ignoring" the
exception; we are logging it. Maybe just comment "continue scanning the other
volumes"?
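A minimal, self-contained sketch of the handling suggested above (the {{scanAll}} helper and the storage-ID map keys here are illustrative, not the actual DirectoryScanner API): log which volume failed, including its storage ID, and keep scanning the remaining volumes instead of rethrowing.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;

public class ScanSketch {
  // Compile one report per volume, keyed by storage ID; a failure in one
  // volume is logged (with its storage ID) and does not abort the others.
  static List<String> scanAll(Map<String, Callable<String>> reportsByStorageId) {
    List<String> reports = new ArrayList<>();
    for (Map.Entry<String, Callable<String>> e : reportsByStorageId.entrySet()) {
      try {
        reports.add(e.getValue().call());
      } catch (Exception ex) {
        // Identify the failing volume, then continue scanning the other volumes
        System.err.println("Error compiling report for volume "
            + e.getKey() + ": " + ex);
      }
    }
    return reports;
  }

  public static void main(String[] args) {
    Map<String, Callable<String>> tasks = new LinkedHashMap<>();
    tasks.put("DS-1", () -> "report-1");
    tasks.put("DS-2", () -> { throw new RuntimeException("disk failure"); });
    tasks.put("DS-3", () -> "report-3");
    // The failing volume is skipped; the other two reports survive.
    System.out.println(scanAll(tasks));  // prints [report-1, report-3]
  }
}
```

With the original {{throw new RuntimeException(ex)}}, the first bad volume would have discarded the reports of every volume after it in the loop.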
> DirectoryScanner shouldn't abort the scan if one directory had an error
> -----------------------------------------------------------------------
>
> Key: HDFS-7764
> URL: https://issues.apache.org/jira/browse/HDFS-7764
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 2.7.0
> Reporter: Rakesh R
> Assignee: Rakesh R
> Attachments: HDFS-7764-01.patch, HDFS-7764-02.patch,
> HDFS-7764-03.patch, HDFS-7764.patch
>
>
> If there is an exception while preparing the ScanInfo for the blocks in a
> directory, DirectoryScanner immediately throws an exception and aborts the
> current scan cycle. The idea of this jira is to discuss and improve the
> exception handling mechanism.
> DirectoryScanner.java
> {code}
> for (Entry<Integer, Future<ScanInfoPerBlockPool>> report :
>     compilersInProgress.entrySet()) {
>   try {
>     dirReports[report.getKey()] = report.getValue().get();
>   } catch (Exception ex) {
>     LOG.error("Error compiling report", ex);
>     // Propagate ex to DataBlockScanner to deal with
>     throw new RuntimeException(ex);
>   }
> }
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)