Wei-Chiu Chuang created HDFS-14774:
--------------------------------------

             Summary: Improve RouterWebhdfsMethods#chooseDatanode() error handling
                 Key: HDFS-14774
                 URL: https://issues.apache.org/jira/browse/HDFS-14774
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Wei-Chiu Chuang


 HDFS-13972 added the following code:

{code}
try {
  dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
} catch (IOException e) {
  LOG.error("Cannot get the datanodes from the RPC server", e);
} finally {
  // Reset ugi to remote user for remaining operations.
  RouterRpcServer.resetCurrentUser();
}

HashSet<Node> excludes = new HashSet<Node>();
if (excludeDatanodes != null) {
  Collection<String> collection =
      getTrimmedStringCollection(excludeDatanodes);
  for (DatanodeInfo dn : dns) {
    if (collection.contains(dn.getName())) {
      excludes.add(dn);
    }
  }
}
{code}
If {{rpcServer.getDatanodeReport()}} throws an exception, {{dns}} stays null and the subsequent iteration over it can throw a NullPointerException. This doesn't look like the best way to handle the exception. Should the Router retry upon exception? Does it perform the retry automatically under the hood?
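
One possible direction, shown only as a minimal sketch (it assumes {{chooseDatanode()}} already declares {{throws IOException}}; the null check and message below are illustrative, not a proposed patch), is to fail fast after the report call instead of letting the later loop dereference a null array:

{code}
DatanodeInfo[] dns = null;
try {
  dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
} catch (IOException e) {
  LOG.error("Cannot get the datanodes from the RPC server", e);
} finally {
  // Reset ugi to remote user for remaining operations.
  RouterRpcServer.resetCurrentUser();
}

// Fail fast with a meaningful error instead of letting the code below
// dereference a null datanode report.
if (dns == null) {
  throw new IOException(
      "Cannot get a live datanode report from the Router RPC server");
}
{code}

Alternatively, the catch block could rethrow or wrap the exception directly; whether the Router should retry instead, or already retries under the hood, is the open question above.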


[~crh] [~brahmareddy]


