[ https://issues.apache.org/jira/browse/HADOOP-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran resolved HADOOP-16900.
-------------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed

in trunk; rebuilding and retesting branch-3.3 with it too

> Very large files can be truncated when written through S3AFileSystem
> --------------------------------------------------------------------
>
>                 Key: HADOOP-16900
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16900
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.2.1
>            Reporter: Andrew Olson
>            Assignee: Mukund Thakur
>            Priority: Major
>              Labels: s3
>             Fix For: 3.4.0
>
>
> If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt
> truncation of the S3 object will occur, as the maximum number of parts in a
> multipart upload is 10,000 as specified by the S3 API, and there is an
> apparent bug where this failure is not fatal and the multipart upload is
> allowed to be marked as completed.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
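For reference, the truncation threshold described in the issue is simple arithmetic: S3 caps a multipart upload at 10,000 parts, so the largest object a single upload can carry is 10,000 * fs.s3a.multipart.size. A minimal sketch of that calculation follows; the 64 MB part size is an illustrative assumption, not a value taken from the issue, and the class name is hypothetical:

```java
// Hypothetical sketch of the HADOOP-16900 arithmetic: with the S3
// multipart-upload cap of 10,000 parts, any file larger than
// 10,000 * fs.s3a.multipart.size cannot be fully uploaded, and before
// the fix the overflow was silently dropped rather than failing.
public class S3aPartLimitSketch {
    // Hard limit on parts per multipart upload, per the S3 API.
    static final long MAX_PARTS = 10_000L;

    /** Largest object size (bytes) one multipart upload can hold at this part size. */
    static long maxUploadableBytes(long partSizeBytes) {
        return MAX_PARTS * partSizeBytes;
    }

    public static void main(String[] args) {
        // Illustrative part size only; read your cluster's actual
        // fs.s3a.multipart.size from its Hadoop configuration.
        long partSize = 64L * 1024 * 1024; // 64 MB
        System.out.println(maxUploadableBytes(partSize));
    }
}
```

At a 64 MB part size the threshold is 671,088,640,000 bytes (about 625 GB); files beyond that size were the ones at risk before the fix.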