[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775617#comment-17775617 ]
ASF GitHub Bot commented on HADOOP-18883:
-----------------------------------------

saxenapranav commented on PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#issuecomment-1764046267

Hi @steveloughran @mehakmeet , requesting your kind review. Thank you so much.

> Expect-100 JDK bug resolution: prevent multiple server calls
> ------------------------------------------------------------
>
>                 Key: HADOOP-18883
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18883
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Pranav Saxena
>            Assignee: Pranav Saxena
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> This relates to the JDK bug [https://bugs.openjdk.org/browse/JDK-8314978].
>
> With the current implementation of HttpURLConnection, if the server rejects the
> "Expect: 100-continue" handshake, a 'java.net.ProtocolException' is thrown from
> the 'expect100Continue()' method.
> After the exception is thrown, calling any other method on the same instance
> (e.g. getHeaderField() or getHeaderFields()) will internally call
> getOutputStream(), which invokes writeRequests(), which makes the actual server
> call.
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the
> expect-100 error, we consume the exception and let the code go ahead. So
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered
> after getOutputStream() has failed, and these invocations lead to additional
> server calls.
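To make the failure mode described above concrete, here is a minimal sketch (not the ABFS code from this PR) of the problematic pattern, assuming a hypothetical ENDPOINT whose server rejects the "Expect: 100-continue" handshake; the class name, endpoint, payload, and the boolean guard are illustrative, while the JDK behavior (ProtocolException from expect100Continue(), header getters re-invoking getOutputStream()/writeRequests()) is as described in the issue.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.ProtocolException;
    import java.net.URL;

    public class Expect100Sketch {
      // Hypothetical endpoint; assumed to reject the expect-100 handshake.
      private static final String ENDPOINT = "https://example.invalid/upload";

      public static void main(String[] args) throws IOException {
        HttpURLConnection conn =
            (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Expect", "100-continue");

        boolean expect100Failed = false;
        try (OutputStream out = conn.getOutputStream()) {
          out.write(new byte[] {1, 2, 3});
        } catch (ProtocolException e) {
          // Server rejected the 100-continue handshake; the JDK throws here
          // from expect100Continue(). The caller consumes the exception and
          // continues, as AbfsHttpOperation does after sendRequest().
          expect100Failed = true;
        }

        // Problematic pattern: getHeaderField() / getHeaderFields() /
        // getHeaderFieldLong() internally call getOutputStream() ->
        // writeRequests(), so reading a header here after the consumed
        // exception would silently issue a second server call.
        if (!expect100Failed) {
          System.out.println("ETag: " + conn.getHeaderField("ETag"));
        } else {
          // Guarding on the failed handshake avoids the extra server call.
          System.out.println("Skipping header reads after expect-100 rejection.");
        }
      }
    }

The guard shown at the end is only one way to illustrate the idea of remembering that the expect-100 handshake failed before touching header-reading methods; the actual fix in PR #6022 should be taken from the patch itself.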