[ https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780886#comment-16780886 ]

Justin Uang commented on HADOOP-16132:
--------------------------------------

Copying over the last comment from the GitHub ticket, since we will be 
continuing the conversation here:

[~ste...@apache.org]
{quote}BTW, one little side effect of breaking up the reads: every GET is its 
own HTTP request, so it gets billed separately, and for SSE-KMS it possibly 
means a separate call to AWS:KMS. Nobody quite knows about the latter; we do 
know that heavy random-seek IO on a single tree in a bucket can trigger more 
throttling than you'd expect.

Anyway, maybe for random IO the strategy would be to have a notion of aligned 
blocks, say 8 MB: the current block is cached as it is read in, so a backward 
seek can often be served from memory; the stream could be doing a readahead 
of, say, the next 2+ blocks in parallel and then store them in a ring of 
cached blocks, ready for when they are used.

you've got me thinking now...
{quote}
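
To make that concrete, here is a rough sketch of what that aligned-block ring 
cache could look like. Everything here (the BlockFetcher hook, the sizes, the 
eviction rule) is illustrative, not code from S3AFileSystem or the PR:
{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Aligned-block readahead sketch: reads are served from fixed 8 MB blocks,
 * the current block stays cached (so small backward seeks hit memory), and
 * the next couple of blocks are fetched in parallel ahead of use.
 */
public class AlignedBlockReadahead {

  /** Hypothetical fetch hook; a real version would issue a ranged S3 GET. */
  interface BlockFetcher {
    byte[] fetch(long blockIndex) throws Exception;
  }

  private static final int BLOCK_SIZE = 8 * 1024 * 1024; // 8 MB aligned blocks
  private static final int READAHEAD_BLOCKS = 2;         // prefetch depth

  private final BlockFetcher fetcher;
  private final ExecutorService pool =
      Executors.newFixedThreadPool(READAHEAD_BLOCKS + 1);
  // Ring of cached or in-flight blocks, keyed by block index.
  private final Map<Long, CompletableFuture<byte[]>> ring =
      new ConcurrentHashMap<>();

  public AlignedBlockReadahead(BlockFetcher fetcher) {
    this.fetcher = fetcher;
  }

  /** Read one byte at an absolute offset, triggering readahead as we go. */
  public int readByte(long position) {
    long blockIndex = position / BLOCK_SIZE;
    // Kick off fetches for the current block plus the readahead window.
    for (long i = blockIndex; i <= blockIndex + READAHEAD_BLOCKS; i++) {
      final long idx = i;
      ring.computeIfAbsent(idx, k -> CompletableFuture.supplyAsync(() -> {
        try {
          return fetcher.fetch(idx);
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      }, pool));
    }
    // Keep the previous block for cheap backward seeks; drop older ones.
    ring.keySet().removeIf(k -> k < blockIndex - 1);
    byte[] block = ring.get(blockIndex).join();
    return block[(int) (position % BLOCK_SIZE)] & 0xFF; // assumes in-range
  }
}
{code}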
 

> Support multipart download in S3AFileSystem
> -------------------------------------------
>
>                 Key: HADOOP-16132
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16132
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Justin Uang
>            Priority: Major
>         Attachments: HADOOP-16132.001.patch
>
>
> I noticed that I get 150MB/s when I use the AWS CLI:
> {code:bash}
> aws s3 cp s3://<bucket>/<key> - > /dev/null
> {code}
> vs 50MB/s when I use the S3AFileSystem:
> {code:bash}
> hadoop fs -cat s3://<bucket>/<key> > /dev/null
> {code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple of parts in parallel 
> using range requests, buffering them in memory so that it can reorder them 
> and expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do the same, and I am able to achieve 150MB/s 
> download speeds as well. It is mostly done, but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.
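
For anyone who wants the shape of that s3transfer-style logic in Java terms, 
here is a rough sketch: issue ranged GETs in parallel, then hand the parts 
back to the reader in order behind a single stream. The RangeFetcher hook and 
the eager submission of every range are illustrative assumptions, not the 
PR's actual code; a real implementation would also bound how far it buffers 
ahead of the reader:
{code:java}
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Parallel-range download sketch: fetch fixed-size ranges of an object
 * concurrently, then reassemble them in order behind one contiguous stream.
 */
public class ParallelRangeDownload {

  /** Hypothetical hook; a real version would issue a GET with a Range header. */
  interface RangeFetcher {
    byte[] fetchRange(long start, long endInclusive) throws Exception;
  }

  public static InputStream open(RangeFetcher fetcher, long objectSize,
                                 long partSize, int parallelism) {
    ExecutorService pool = Executors.newFixedThreadPool(parallelism);
    List<CompletableFuture<byte[]>> parts = new ArrayList<>();
    // Submit every range up front; the pool size bounds concurrency here,
    // but nothing bounds how many finished parts pile up in memory.
    for (long start = 0; start < objectSize; start += partSize) {
      final long s = start;
      final long end = Math.min(start + partSize, objectSize) - 1;
      parts.add(CompletableFuture.supplyAsync(() -> {
        try {
          return fetcher.fetchRange(s, end);
        } catch (Exception ex) {
          throw new RuntimeException(ex);
        }
      }, pool));
    }
    pool.shutdown(); // no new tasks; already-submitted ranges still complete
    // Hand parts to the reader in order; join() only waits if the next part
    // has not finished downloading yet.
    Enumeration<InputStream> inOrder = new Enumeration<InputStream>() {
      private int next = 0;
      public boolean hasMoreElements() { return next < parts.size(); }
      public InputStream nextElement() {
        return new ByteArrayInputStream(parts.get(next++).join());
      }
    };
    return new SequenceInputStream(inOrder);
  }
}
{code}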


