[ https://issues.apache.org/jira/browse/HADOOP-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14326020#comment-14326020 ]

Hudson commented on HADOOP-11570:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2059 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2059/])
HADOOP-11570. S3AInputStream.close() downloads the remaining bytes of the 
object from S3. (Dan Hecht via stevel). (stevel: rev 
826267f789df657c62f7f5909e5a0b1a7b102c34)
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> S3AInputStream.close() downloads the remaining bytes of the object from S3
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-11570
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11570
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.6.0
>            Reporter: Dan Hecht
>            Assignee: Dan Hecht
>             Fix For: 2.7.0
>
>         Attachments: HADOOP-11570-001.patch, HADOOP-11570-002.patch
>
>
> Currently, S3AInputStream.close() calls S3Object.close().  However, 
> S3Object.close() reads the remaining bytes of the S3 object, potentially 
> transferring a large amount of data from S3 only to discard it.  Instead, the 
> wrapped stream should be aborted so that the discarded bytes are never 
> transferred (unless the preceding read() finished at contentLength).  For 
> example, reading only the first byte of a 1 GB object and then closing the 
> stream results in the entire 1 GB being transferred from S3.
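
As a rough illustration of the close-vs-abort decision described above (this is
not the committed patch; the class and field names below, e.g. wrappedStream,
pos and contentLength, are hypothetical stand-ins for S3AInputStream state, and
the sketch assumes the AWS SDK v1 S3ObjectInputStream API, whose abort() method
discards the HTTP connection instead of draining it):

  import java.io.IOException;
  import com.amazonaws.services.s3.model.S3ObjectInputStream;

  // Illustrative sketch only -- not the committed HADOOP-11570 patch.
  class CloseVsAbortSketch {
    private S3ObjectInputStream wrappedStream; // stream from the S3 GET request
    private long pos;                          // bytes read so far
    private long contentLength;                // total size of the object
    private boolean closed;

    public synchronized void close() throws IOException {
      if (closed) {
        return;
      }
      closed = true;
      if (wrappedStream == null) {
        return;
      }
      if (pos == contentLength) {
        // The preceding read() already reached the end of the object, so a
        // normal close() drains nothing extra and the connection can be reused.
        wrappedStream.close();
      } else {
        // Mid-object: abort() drops the HTTP connection instead of
        // downloading (and discarding) the remaining bytes.
        wrappedStream.abort();
      }
    }
  }

Aborting mid-object sacrifices HTTP connection reuse for the next request, but
that cost is small compared with downloading and discarding the rest of a large
object.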



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
