steveloughran commented on a change in pull request #1795: HADOOP-16792: Make S3 client request timeout configurable URL: https://github.com/apache/hadoop/pull/1795#discussion_r370319123
########## File path: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md ##########

```diff
@@ -983,6 +983,14 @@ options are covered in [Testing](./testing.md).
   <description>Select which version of the S3 SDK's List Objects API to use.
   Currently support 2 (default) and 1 (older API).</description>
 </property>
+
+<property>
+  <name>fs.s3a.connection.request.timeout</name>
+  <value>0</value>
+  <description>Controls timeout for S3 requests.
+  Any non-positive(0 or negative value) value disables the timeout.
```

Review comment:
   The description needs to indicate that it's now a time duration, and that playing with this number is dangerous. Proposed (here and in core-default):
   ```
   Time out on HTTP requests to the AWS service; 0 means no timeout.
   Measured in seconds; the usual time suffixes are all supported

   Important: this is the maximum duration of any AWS service call,
   including upload and copy operations. If non-zero, it must be
   larger than the time to upload multi-megabyte blocks to S3 from
   the client, and to rename many-GB files. Use with care.
   ```
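To illustrate the reviewer's point, a user enabling this timeout in `core-site.xml` might write something like the following. This is a sketch only: the `15m` value is illustrative, not a recommendation, and it assumes Hadoop's usual time-suffix parsing (`s`, `m`, `h`, etc.) applies to this property as the comment proposes.

```xml
<!-- Illustrative only: cap every S3 request at 15 minutes.
     If set non-zero, the value must exceed the longest expected
     upload, copy, or rename operation, or those calls will fail. -->
<property>
  <name>fs.s3a.connection.request.timeout</name>
  <value>15m</value>
</property>
```

The default of `0` (no timeout) stays the safe choice for workloads that upload large blocks or rename many-GB files, which is exactly why the review asks the description to warn about it.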