[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15550297#comment-15550297 ]
ASF GitHub Bot commented on HADOOP-13560:
-----------------------------------------

Github user cnauroth commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/130#discussion_r82090027

    --- Diff: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md ---
    @@ -1250,6 +1569,144 @@ can be used:

     Using the explicit endpoint for the region is recommended for speed and
     the ability to use the V4 signing API.

    +
    +## "Timeout waiting for connection from pool" when writing to S3A
    --- End diff --

    I tried an `mvn site` build, and it looks like the new troubleshooting
    sections still aren't nested correctly. I believe it should be `###`
    instead of `##`.

> S3ABlockOutputStream to support huge (many GB) file writes
> ----------------------------------------------------------
>
>                 Key: HADOOP-13560
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13560
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>        Attachments: HADOOP-13560-branch-2-001.patch,
>                      HADOOP-13560-branch-2-002.patch,
>                      HADOOP-13560-branch-2-003.patch,
>                      HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367]
> highlights that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy
> really works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for
> very large commit operations for committers using rename

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
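For context on the review comment: the `mvn site` build renders each markdown heading level into the page's table of contents, so a subsection added under an existing `##` section needs a `###` heading to nest beneath it. A minimal sketch of the intended structure, assuming the new troubleshooting entries sit under a parent `##` section in index.md (the parent heading name here is hypothetical):

```markdown
## Troubleshooting S3A

### "Timeout waiting for connection from pool" when writing to S3A

Symptoms, cause, and fix for the connection-pool timeout go here.
```

With `##` instead of `###`, the new entry would render as a sibling of the parent section rather than a subsection of it, which is the nesting problem the `mvn site` build exposed.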