[ https://issues.apache.org/jira/browse/HADOOP-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333182#comment-14333182 ]
Thomas Demoor commented on HADOOP-11183:
----------------------------------------

Addendum to my previous comment, to be clearer on flush(): the 5 MB minimum size applies to a partUpload (a part of an ongoing multi-part upload). Evidently, one can still write smaller objects through a single PUT. The reason I think it is safe to include this in 2.7 as unstable is that the code path is never touched when the config flag is set to false (the default). It is a drop-in replacement for S3AOutputStream, but is only used if the user opts in.

> Memory-based S3AOutputstream
> ----------------------------
>
>                 Key: HADOOP-11183
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11183
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Thomas Demoor
>            Assignee: Thomas Demoor
>         Attachments: HADOOP-11183-004.patch, HADOOP-11183-005.patch, HADOOP-11183-006.patch, HADOOP-11183.001.patch, HADOOP-11183.002.patch, HADOOP-11183.003.patch, design-comments.pdf
>
>
> Currently s3a buffers files on disk(s) before uploading. This JIRA investigates adding a memory-based upload implementation.
> The motivation is evidently performance: this would be beneficial for users with high network bandwidth to S3 (EC2?) or users that run Hadoop directly on an S3-compatible object store (FYI: my contributions are made on behalf of Amplidata).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
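
The opt-in behaviour described in the comment above could be configured roughly as follows. This is a hedged sketch: the property names `fs.s3a.fast.upload` and `fs.s3a.multipart.size` reflect the s3a configuration keys associated with this patch series, but the exact keys and defaults in a given Hadoop release should be checked against that release's documentation.

```xml
<!-- core-site.xml (sketch) -->
<configuration>
  <!-- Opt in to the memory-based output stream; false (default) keeps
       the existing disk-buffered S3AOutputStream code path untouched. -->
  <property>
    <name>fs.s3a.fast.upload</name>
    <value>true</value>
  </property>
  <!-- Size of each multi-part upload part. S3 requires a minimum of
       5 MB for every part except the last; objects smaller than this
       are written through a single PUT instead. -->
  <property>
    <name>fs.s3a.multipart.size</name>
    <value>5242880</value>
  </property>
</configuration>
```
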