[ 
https://issues.apache.org/jira/browse/HADOOP-18339?focusedWorklogId=796936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796936
 ]

ASF GitHub Bot logged work on HADOOP-18339:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/Aug/22 15:31
            Start Date: 01/Aug/22 15:31
    Worklog Time Spent: 10m 
      Work Description: monthonk opened a new pull request, #4669:
URL: https://github.com/apache/hadoop/pull/4669

   ### Description of PR
   
   HADOOP-18339. S3A storage class option only picked up when buffering writes 
to disk.
   
   The problem is that there are two `createPutObjectRequest` methods, one taking a 
source file and the other taking an input stream. Previously only the first one picked 
up the storage class option, because I thought the second one was used only for 
creating directory markers, which should not have any storage class.
   
   This is fixed by making both `createPutObjectRequest` methods pick up the storage 
class option and updating `newDirectoryMarkerRequest` to create its PUT request on its 
own, then adding a parameterized test to verify the behaviour.
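   
   As a minimal illustrative sketch of the idea (hypothetical class and field names, not 
the actual `RequestFactoryImpl` code, assuming the v1 AWS SDK API used on this branch): 
both the file-backed and the stream-backed PUT request builders attach the configured 
storage class when one is set.
   
   ```java
   import java.io.File;
   import java.io.InputStream;
   
   import com.amazonaws.services.s3.model.ObjectMetadata;
   import com.amazonaws.services.s3.model.PutObjectRequest;
   import com.amazonaws.services.s3.model.StorageClass;
   
   /** Sketch only: shows the shape of the fix, not the committed code. */
   class PutRequestSketch {
   
     private final String bucket;
   
     /** Parsed from the storage class option; null means "use the bucket default". */
     private final StorageClass storageClass;
   
     PutRequestSketch(String bucket, StorageClass storageClass) {
       this.bucket = bucket;
       this.storageClass = storageClass;
     }
   
     /** File-backed upload (fs.s3a.fast.upload.buffer=disk): worked before the fix. */
     PutObjectRequest newPutRequest(String key, File source) {
       return applyStorageClass(new PutObjectRequest(bucket, key, source));
     }
   
     /** Stream-backed upload (heap/bytebuffer buffering): the path that missed the option. */
     PutObjectRequest newPutRequest(String key, InputStream source, ObjectMetadata metadata) {
       return applyStorageClass(new PutObjectRequest(bucket, key, source, metadata));
     }
   
     /** Directory markers skip this helper, so they never carry a storage class. */
     private PutObjectRequest applyStorageClass(PutObjectRequest request) {
       if (storageClass != null) {
         request.setStorageClass(storageClass);
       }
       return request;
     }
   }
   ```
   
   Routing both builders through one helper keeps the option in a single place while 
letting `newDirectoryMarkerRequest` build its marker request without it, as described 
above.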
   
   ### How was this patch tested?
   
   Tested with a bucket in `eu-west-1` with `mvn -Dparallel-tests 
-DtestsThreadCount=16 clean verify`
   
   ```
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 1149, Failures: 0, Errors: 0, Skipped: 146
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 124, Failures: 0, Errors: 0, Skipped: 10
   ```
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
-------------------

            Worklog Id:     (was: 796936)
    Remaining Estimate: 0h
            Time Spent: 10m

> S3A storage class option only picked up when buffering writes to disk
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-18339
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18339
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.9
>            Reporter: Steve Loughran
>            Assignee: Monthon Klongklaew
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> when you switch s3a output stream buffering to heap or byte buffer, the 
> storage class option isn't added to the put request
> {code}
>   <property>
>     <name>fs.s3a.fast.upload.buffer</name>
>     <value>bytebuffer</value>
>   </property>
> {code}
> and the ITestS3AStorageClass tests fail.
> {code}
> java.lang.AssertionError: [Storage class of object 
> s3a://stevel-london/test/testCreateAndCopyObjectWithStorageClassGlacier/file1]
>  
> Expecting:
>  <null>
> to be equal to:
>  <"glacier">
> ignoring case considerations
>       at 
> org.apache.hadoop.fs.s3a.ITestS3AStorageClass.assertObjectHasStorageClass(ITestS3AStorageClass.java:215)
>       at 
> org.apache.hadoop.fs.s3a.ITestS3AStorageClass.testCreateAndCopyObjectWithStorageClassGlacier(ITestS3AStorageClass.java:129)
> {code}
> we noticed this in a code review; the request factory only sets the option 
> when the source is a file, not memory.
> proposed: parameterize the test suite on disk/byte buffer, then fix
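
A minimal sketch of what such a parameterization could look like (JUnit 4, with 
illustrative class names and configuration values; the committed integration test is 
`ITestS3AStorageClass` in hadoop-aws):

```java
// Illustrative only: runs each test case once per buffer type, so both the disk
// and the in-memory write paths are exercised.
import java.util.Arrays;
import java.util.Collection;

import org.apache.hadoop.conf.Configuration;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class StorageClassBufferingSketch {

  @Parameterized.Parameters(name = "buffer-{0}")
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][] {
        {"disk"},
        {"bytebuffer"},
    });
  }

  private final String bufferType;

  public StorageClassBufferingSketch(String bufferType) {
    this.bufferType = bufferType;
  }

  /** Build a configuration forcing the buffering mechanism under test. */
  private Configuration createConfiguration() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.fast.upload.buffer", bufferType);
    // Assumed option name for the storage class (added by HADOOP-12020);
    // "glacier" is only an example value.
    conf.set("fs.s3a.create.storage.class", "glacier");
    return conf;
  }

  @Test
  public void testConfigurationSelectsBufferType() {
    Configuration conf = createConfiguration();
    // In the real integration test, a filesystem created from this configuration writes
    // a file and the object's storage class is asserted via a HEAD request.
    assertEquals(bufferType, conf.get("fs.s3a.fast.upload.buffer"));
  }
}
```

Run under the same `mvn -Dparallel-tests -DtestsThreadCount=16 clean verify` invocation 
shown above, each test case then executes once per buffer type.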



