[
https://issues.apache.org/jira/browse/HDFS-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Wei-Chiu Chuang resolved HDFS-14774.
------------------------------------
Resolution: Not A Problem
Thanks CR. I'm resolving it.
> RBF: Improve RouterWebhdfsMethods#chooseDatanode() error handling
> -----------------------------------------------------------------
>
> Key: HDFS-14774
> URL: https://issues.apache.org/jira/browse/HDFS-14774
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Wei-Chiu Chuang
> Assignee: CR Hota
> Priority: Minor
>
> HDFS-13972 added the following code:
> {code}
> try {
>   dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
> } catch (IOException e) {
>   LOG.error("Cannot get the datanodes from the RPC server", e);
> } finally {
>   // Reset ugi to remote user for remaining operations.
>   RouterRpcServer.resetCurrentUser();
> }
> HashSet<Node> excludes = new HashSet<Node>();
> if (excludeDatanodes != null) {
>   Collection<String> collection =
>       getTrimmedStringCollection(excludeDatanodes);
>   for (DatanodeInfo dn : dns) {
>     if (collection.contains(dn.getName())) {
>       excludes.add(dn);
>     }
>   }
> }
> {code}
> If {{rpcServer.getDatanodeReport()}} throws an exception, {{dns}} will be
> left null, and the loop over it below can then throw a NullPointerException.
> This doesn't look like the best way to handle the exception. Should the
> router retry upon exception? Does it perform retries automatically under the
> hood?
> [~crh] [~brahmareddy]
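> A minimal sketch of one possible hardening, not a committed fix: initialize
> {{dns}} to an empty array and rethrow the failure (or keep the empty array if
> a degraded answer is acceptable), so the later loop never dereferences null.
> Names mirror the snippet above; anything beyond that is assumed for
> illustration.
> {code}
> // Never null, even if the datanode report cannot be fetched.
> DatanodeInfo[] dns = new DatanodeInfo[0];
> try {
>   dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
> } catch (IOException e) {
>   // Surface the failure to the caller instead of silently continuing;
>   // alternatively, log and fall back to the empty array.
>   LOG.error("Cannot get the datanodes from the RPC server", e);
>   throw e;
> } finally {
>   // Reset ugi to remote user for remaining operations.
>   RouterRpcServer.resetCurrentUser();
> }
> HashSet<Node> excludes = new HashSet<>();
> if (excludeDatanodes != null) {
>   Collection<String> collection =
>       getTrimmedStringCollection(excludeDatanodes);
>   for (DatanodeInfo dn : dns) {  // safe: dns is never null here
>     if (collection.contains(dn.getName())) {
>       excludes.add(dn);
>     }
>   }
> }
> {code}
> Rethrowing lets the WebHDFS call fail visibly rather than proceed with an
> empty exclude set; swallowing the exception with an empty array preserves the
> current behavior but makes the fallback explicit.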
--
This message was sent by Atlassian Jira
(v8.3.2#803003)