[ https://issues.apache.org/jira/browse/HADOOP-17687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17343031#comment-17343031 ]
Anoop Sam John edited comment on HADOOP-17687 at 5/12/21, 7:43 AM:
-------------------------------------------------------------------

From the driver side, the operation timeout is hardcoded to 90 seconds and is passed in the HTTP request. But the server does not honor this at all, because the timeout is capped at the server end at 30 seconds for the DELETE op. Also, in our driver we set the read timeout on the socket (on which we try to read the op response) to 30 seconds. So, by all means, 30 seconds is effectively the max time for a delete today.

> ABFS: delete call sets Socket timeout lesser than query timeout leading to failures
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-17687
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17687
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Prabhu Joseph
>            Priority: Minor
>
> The ABFS driver sets the socket timeout to 30 seconds and the query timeout to 90 seconds. When the delete path contains a huge number of dirs/files, the client fails with SocketTimeoutException before the actual query timeout is reached. The socket timeout has to be greater than the query timeout value. It would also be good to make this timeout configurable, to avoid failures when the delete call takes longer than the hardcoded value.
> {code}
> 21/03/26 09:24:00 DEBUG services.AbfsClient: First execution of REST operation - DeletePath
> .........
> 21/03/26 09:24:30 DEBUG services.AbfsClient: HttpRequestFailure: 0,,cid=bf4e4d0b,rid=,sent=0,recv=0,DELETE,https://prabhuAbfs.dfs.core.windows.net/general/output/_temporary?timeout=90&recursive=true
> java.net.SocketTimeoutException: Read timed out
>         at java.net.SocketInputStream.socketRead0(Native Method)
>         at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>         at java.net.SocketInputStream.read(SocketInputStream.java:171)
>         at java.net.SocketInputStream.read(SocketInputStream.java:141)
>         at org.wildfly.openssl.OpenSSLSocket.read(OpenSSLSocket.java:423)
>         at org.wildfly.openssl.OpenSSLInputStream.read(OpenSSLInputStream.java:41)
>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>         at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:743)
>         at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1593)
>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
>         at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>         at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:352)
>         at org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.processResponse(AbfsHttpOperation.java:303)
>         at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:192)
>         at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:134)
>         at org.apache.hadoop.fs.azurebfs.services.AbfsClient.deletePath(AbfsClient.java:462)
>         at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.delete(AzureBlobFileSystemStore.java:558)
>         at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.delete(AzureBlobFileSystem.java:339)
>         at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:121)
>         at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>         at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>         at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>         at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>         at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>         at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
>         at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
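The timeout mismatch discussed in this issue can be sketched in plain Java. This is a minimal illustration, not the actual ABFS driver code: the class `TimeoutSketch`, the method `openDelete`, and the 10-second headroom are hypothetical. It shows the fix the report asks for, sizing the socket read timeout from the query timeout carried in the request URL instead of hardcoding 30 seconds below it.

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutSketch {
    // Value from the report: the driver hardcodes a 90s operation timeout
    // in the request URL (timeout=90) but only a 30s socket read timeout.
    static final int QUERY_TIMEOUT_SECONDS = 90;

    public static HttpURLConnection openDelete(URL url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("DELETE");
        conn.setConnectTimeout(30_000);
        // Proposed behavior: derive the read timeout from the query timeout
        // (plus headroom, here a hypothetical 10s) so the client waits at
        // least as long as the server is allowed to work. A 30s read timeout
        // aborts a long recursive delete with SocketTimeoutException well
        // before the 90s server-side query timeout can expire.
        conn.setReadTimeout((QUERY_TIMEOUT_SECONDS + 10) * 1000);
        return conn;
    }
}
{code}

Note that only timeouts are configured here; no connection is opened, since {{URL.openConnection()}} is lazy and the timeouts take effect on the eventual read of the response.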