[ https://issues.apache.org/jira/browse/HADOOP-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573824#comment-17573824 ]

Monthon Klongklaew commented on HADOOP-12020:
---------------------------------------------

I have added new tests for byte/heap buffer writes and addressed the issue in 
PR [#4669|https://github.com/apache/hadoop/pull/4669].

Also, the PR that fixes the S3 Select tests is ready for review here: 
[#4489|https://github.com/apache/hadoop/pull/4489]

> Support configuration of different S3 storage classes
> -----------------------------------------------------
>
>                 Key: HADOOP-12020
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12020
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.0
>         Environment: Hadoop on AWS
>            Reporter: Yann Landrin-Schweitzer
>            Assignee: Monthon Klongklaew
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.9
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Amazon S3 uses, by default, the STANDARD storage class for S3 objects.
> This offers, according to Amazon's material, 99.999999999% durability.
> For many applications, however, the 99.99% durability offered by the 
> REDUCED_REDUNDANCY storage class is amply sufficient and comes with a 
> significant cost saving.
> HDFS, when using the legacy s3n protocol or the newer s3a scheme, should 
> support overriding the default storage class of created S3 objects so that 
> users can take advantage of this cost benefit.
> This would require minor changes to the s3n and s3a drivers, using 
> a configuration property such as fs.s3n.storage.class to override the default 
> storage class when desirable. 
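> As a rough illustration (a sketch only: fs.s3n.storage.class is just the 
> property proposed above, and the bucket URI is an example), a client could 
> request the cheaper storage class through the Hadoop configuration before 
> creating the filesystem:
>       import java.net.URI;
>       import org.apache.hadoop.conf.Configuration;
>       import org.apache.hadoop.fs.FileSystem;
>
>       // Illustrative only: the property name is the one proposed in this issue;
>       // the value is the S3 REDUCED_REDUNDANCY storage class name.
>       Configuration conf = new Configuration();
>       conf.set("fs.s3n.storage.class", "REDUCED_REDUNDANCY");
>       FileSystem fs = FileSystem.get(URI.create("s3n://example-bucket/"), conf);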
> This override could be implemented in Jets3tNativeFileSystemStore with:
>       S3Object object = new S3Object(key);
>       ...
>       if (storageClass != null) {
>         object.setStorageClass(storageClass);
>       }
> It would take a more complex form in s3a, e.g. setting:
>       InitiateMultipartUploadRequest initiateMPURequest =
>           new InitiateMultipartUploadRequest(bucket, key, om);
>       if (storageClass != null) {
>         initiateMPURequest = initiateMPURequest.withStorageClass(storageClass);
>       }
> and similar statements in various places.
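> A minimal sketch of how s3a could thread a configured storage class through 
> to the AWS SDK requests (the fs.s3a.storage.class property name and the 
> helper class below are illustrative assumptions, not a committed 
> implementation):
>       import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
>       import com.amazonaws.services.s3.model.ObjectMetadata;
>       import com.amazonaws.services.s3.model.StorageClass;
>       import org.apache.hadoop.conf.Configuration;
>
>       class StorageClassSupport {
>         // Resolve the storage class from configuration; null means "use the S3 default".
>         // fs.s3a.storage.class is an assumed property name for this sketch.
>         static StorageClass resolveStorageClass(Configuration conf) {
>           String value = conf.getTrimmed("fs.s3a.storage.class", "");
>           return value.isEmpty() ? null : StorageClass.fromValue(value);
>         }
>
>         // Apply the configured storage class to a multipart upload request, if set.
>         static InitiateMultipartUploadRequest newMultipartUploadRequest(
>             Configuration conf, String bucket, String key, ObjectMetadata om) {
>           InitiateMultipartUploadRequest request =
>               new InitiateMultipartUploadRequest(bucket, key, om);
>           StorageClass storageClass = resolveStorageClass(conf);
>           if (storageClass != null) {
>             request.setStorageClass(storageClass);
>           }
>           return request;
>         }
>       }
> The same null check would apply wherever s3a builds a PutObjectRequest or 
> CopyObjectRequest.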


