[ https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiao Chen updated HDFS-13511:
-----------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed
 Fix Version/s: 3.2.0
        Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Gabor for the contribution, and others for ideas/reviews.

> Provide specialized exception when block length cannot be obtained
> ------------------------------------------------------------------
>
>                 Key: HDFS-13511
>                 URL: https://issues.apache.org/jira/browse/HDFS-13511
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Gabor Bota
>            Priority: Major
>             Fix For: 3.2.0
>
>         Attachments: HDFS-13511.001.patch, HDFS-13511.002.patch, HDFS-13511.003.patch
>
>
> In a downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null &&
>     e.getMessage().toLowerCase()
>         .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following in DFSInputStream#readBlockLength:
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> A check based on string matching is brittle in production deployments.
> After discussing with [~ste...@apache.org], the better approach is to introduce a
> specialized IOException, e.g. CannotObtainBlockLengthException, so that a
> downstream project doesn't have to rely on string matching.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
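A minimal standalone sketch of the idea discussed above: a specialized IOException subclass that downstream code can detect with an `instanceof` check instead of brittle message matching. The class and method names here only mirror the proposal; the real class committed for 3.2.0 lives in the Hadoop source tree, and `BlockLengthDemo` is a hypothetical name for illustration.

```java
import java.io.IOException;

// Illustrative stand-in for the specialized exception proposed in HDFS-13511.
class CannotObtainBlockLengthException extends IOException {
    CannotObtainBlockLengthException(String locatedBlock) {
        super("Cannot obtain block length for " + locatedBlock);
    }
}

public class BlockLengthDemo {
    // Before: brittle string matching on a generic IOException message,
    // which breaks if the message wording ever changes.
    static boolean isBlockLengthFailureByMessage(IOException e) {
        return e.getMessage() != null
                && e.getMessage().toLowerCase()
                    .startsWith("cannot obtain block length for");
    }

    // After: a type check that is stable across message changes.
    static boolean isBlockLengthFailureByType(IOException e) {
        return e instanceof CannotObtainBlockLengthException;
    }

    public static void main(String[] args) {
        IOException e = new CannotObtainBlockLengthException("blk_123");
        System.out.println(isBlockLengthFailureByMessage(e)); // true
        System.out.println(isBlockLengthFailureByType(e));    // true
        System.out.println(
            isBlockLengthFailureByType(new IOException("other"))); // false
    }
}
```

Because the subclass still extends IOException, existing callers that catch IOException keep working; only callers that care about this specific failure need the narrower check.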