steveloughran commented on pull request #3222: URL: https://github.com/apache/hadoop/pull/3222#issuecomment-887428882
> if onReadFailure is called when wrappedStream==null is detected and that call fails, then onReadFailure will be called again in the `catch (IOException e)` block. My intention is to let the retry do that.

I'm thinking:

1. pull the `if (wrappedStream == null) onReadFailure()` check out of the try/catch, so the double call doesn't happen
2. change the exception handling to only close the wrapped stream, not try to reopen it:

```java
if (wrappedStream == null) {
  // trigger a re-open.
  reopen("failure recovery", getPos(), 1, false);
}
try {
  b = wrappedStream.read();
} catch (EOFException e) {
  return -1;
} catch (SocketTimeoutException e) {
  onReadFailure(e, 1, true);
  throw e;
} catch (IOException e) {
  onReadFailure(e, 1, false);
  throw e;
}
```

Critically, onReadFailure will stop trying to reopen the stream; instead it releases (or, on a socket exception, forcibly breaks) the stream. This also removes `throws IOE` from its signature.

```java
private void onReadFailure(IOException ioe, int length, boolean forceAbort) {
  if (LOG.isDebugEnabled()) {
    LOG.debug("Got exception while trying to read from stream {}, "
        + "client: {} object: {}, trying to recover: ",
        uri, client, object, ioe);
  } else {
    LOG.info("Got exception while trying to read from stream {}, "
        + "client: {} object: {}, trying to recover: " + ioe,
        uri, client, object);
  }
  streamStatistics.readException();
  closeStream("failure recovery", contentRangeFinish, forceAbort);  // HERE
}
```

So:

1. there's no attempt to reopen the stream inside the handler, so it cannot raise an IOE any more. The exception raised by the initial read() failure is the one handed to the S3ARetryPolicy.
2. the new reopen (note: there's one hidden in lazySeek) does the reconnect; if it fails, that exception is thrown to the retry policy on the second and later attempts.

Result:

* no risk of a reopen() exception overriding the initial failure
* the moved reopen() call has gone from being a special case only encountered after a double IOE to one taken on every single failure, so it gets better coverage.
* the initial read failure will encounter the initial brief retry delay before trying to reconnect. I don't see that slowing the operation down at all, as it was going to happen anyway.
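
To make the retry interplay concrete, here is a minimal, self-contained sketch of the control flow described above. It is not the real S3AInputStream: the class name, `connect()`, the simplified `closeStream()` and the caller-side retry loop are illustrative stand-ins for the actual Hadoop code and S3ARetryPolicy.

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;

public class ReadRetrySketch {

  /** Stand-in for the wrapped S3 object stream; null means "needs a reopen". */
  private InputStream wrappedStream;

  /** Hypothetical reconnect; in S3AInputStream this is reopen(...). */
  private void reopen(String reason) throws IOException {
    // If this fails, the IOException propagates straight to the caller,
    // i.e. to the retry policy, on the second and later attempts.
    wrappedStream = connect();
  }

  /** Placeholder for the real connection setup. */
  private InputStream connect() throws IOException {
    throw new IOException("placeholder: open the object stream here");
  }

  /** Failure handler: record the failure and close the stream. It never
   *  reopens and never throws, so it cannot mask the original exception. */
  private void onReadFailure(IOException ioe, boolean forceAbort) {
    // logging and statistics would go here
    closeStream("failure recovery", forceAbort);
  }

  /** Close (or, for forceAbort, break) the connection and null the stream
   *  so that the next read() triggers a reopen. */
  private void closeStream(String reason, boolean forceAbort) {
    wrappedStream = null;
  }

  public int read() throws IOException {
    if (wrappedStream == null) {
      // the reconnect happens here, on the attempt *after* a failure
      reopen("failure recovery");
    }
    try {
      return wrappedStream.read();
    } catch (EOFException e) {
      return -1;
    } catch (SocketTimeoutException e) {
      onReadFailure(e, true);   // timeouts break the connection
      throw e;
    } catch (IOException e) {
      onReadFailure(e, false);  // other IO errors just close it
      throw e;
    }
  }

  /** Very simplified caller-side retry loop standing in for S3ARetryPolicy:
   *  the exception it sees is always the original read()/reopen() failure. */
  public int readWithRetries(int attempts) throws IOException {
    IOException last = new IOException("no attempts made");
    for (int i = 0; i < attempts; i++) {
      try {
        return read();
      } catch (IOException e) {
        last = e;   // a real policy would sleep/back off before retrying
      }
    }
    throw last;
  }
}
```

The property the sketch illustrates is that a reopen() failure only ever surfaces on a retry attempt, so it can never replace the exception from the first read() failure.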