[ 
https://issues.apache.org/jira/browse/HADOOP-15224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18008165#comment-18008165
 ] 

ASF GitHub Bot commented on HADOOP-15224:
-----------------------------------------

raphaelazzolini commented on PR #7396:
URL: https://github.com/apache/hadoop/pull/7396#issuecomment-3090764761

   @steveloughran this works with Object Lock; I just ran the following test:
   
   * Created a new S3 bucket with Object Lock enabled
   * Executed the command below, expecting the first run (without the checksum
     config) to fail
   
   ```
   mvn clean verify -Dit.test=TestS3AEncryption,TestChecksumSupport -Dtest=none
   ```
   
   #### Results
   
   ```
   [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesArrayBlocks
   [ERROR] Tests run: 10, Failures: 0, Errors: 10, Skipped: 0, Time elapsed: 
3.64 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesArrayBlocks
   [ERROR] 
test_010_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesArrayBlocks)
  Time elapsed: 1.387 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSBadRequestException: PUT 0-byte object  on 
job-00/test: software.amazon.awssdk.services.s3.model.InvalidRequestException: 
Content-MD5 OR x-amz-checksum- HTTP header is required for Put Object requests 
with Object Lock parameters (Service: S3, Status Code: 400, Request ID: 
K7HJNJ5F2YPMDKPT, Extended Request ID: 
satzMHpF3kUamAsECmEAg9cYPLAUpAKVhdIX9kMWR7di5KycSDCvlxGKcnmIPD3HGCIg7WSOels=):InvalidRequest:
 Content-MD5 OR x-amz-checksum- HTTP header is required for Put Object requests 
with Object Lock parameters (Service: S3, Status Code: 400, Request ID: 
K7HJNJ5F2YPMDKPT, Extended Request ID: 
satzMHpF3kUamAsECmEAg9cYPLAUpAKVhdIX9kMWR7di5KycSDCvlxGKcnmIPD3HGCIg7WSOels=)
        at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:265)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
        at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
   (...)
   [INFO] Running org.apache.hadoop.fs.s3a.ITestS3AChecksum
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.259 
s - in org.apache.hadoop.fs.s3a.ITestS3AChecksum
   [INFO]
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   
ITestS3AHugeFilesArrayBlocks>AbstractSTestS3AHugeFiles.setup:102->S3AScaleTestBase.setup:85->AbstractS3ATestBase.setup:111->AbstractFSContractTestBase.setup:205->AbstractFSContractTestBase.mkdirs:363
 » AWSBadRequest
   (... 9 identical error entries omitted)
   [INFO]
   [ERROR] Tests run: 11, Failures: 0, Errors: 10, Skipped: 0
   ```
   
   ---
   
   Then, I added the checksum to the config and ran the same tests again.
   
   ```
     <property>
       <name>fs.s3a.create.checksum.algorithm</name>
       <value>CRC32C</value>
     </property>
   ```
   
   #### Results
   
   ```
   [INFO] -------------------------------------------------------
   [INFO]  T E S T S
   [INFO] -------------------------------------------------------
   [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesArrayBlocks
   [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
9.636 s - in org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesArrayBlocks
   [INFO] Running org.apache.hadoop.fs.s3a.ITestS3AChecksum
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.05 
s - in org.apache.hadoop.fs.s3a.ITestS3AChecksum
   [INFO]
   [INFO] Results:
   [INFO]
   [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0
   ```
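For reference, the checksum S3 validates for `CRC32C` is a plain CRC-32C digest of the object bytes, base64-encoded in the `x-amz-checksum-crc32c` header. The JDK ships a `CRC32C` implementation since Java 9, so the header value can be reproduced locally; a minimal sketch (class and method names here are illustrative, not S3A code):

```java
import java.util.Base64;
import java.util.zip.CRC32C;

public class Crc32cDemo {
    // Compute the CRC-32C of a byte array and encode it the way the
    // x-amz-checksum-crc32c header carries it: base64 of the 4-byte
    // big-endian checksum value.
    static String checksumHeaderValue(byte[] data) {
        CRC32C crc = new CRC32C();            // JDK 9+ Castagnoli CRC
        crc.update(data, 0, data.length);
        long value = crc.getValue();          // unsigned 32-bit result in a long
        byte[] bigEndian = new byte[] {
            (byte) (value >>> 24), (byte) (value >>> 16),
            (byte) (value >>> 8),  (byte) value
        };
        return Base64.getEncoder().encodeToString(bigEndian);
    }

    public static void main(String[] args) {
        // "123456789" is the standard CRC-32C check input; its checksum
        // is the well-known check value 0xE3069283.
        byte[] check = "123456789".getBytes();
        CRC32C crc = new CRC32C();
        crc.update(check, 0, check.length);
        System.out.println(Long.toHexString(crc.getValue()));
        System.out.println(checksumHeaderValue(check));
    }
}
```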




> S3A: Add option to set checksum on S3 objects
> ---------------------------------------------
>
>                 Key: HADOOP-15224
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15224
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0
>            Reporter: Steve Loughran
>            Assignee: Raphael Azzolini
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.5.0, 3.4.2
>
>
> The option fs.s3a.create.checksum.algorithm allows a checksum to be set on 
> file upload; it supports the following values:
>     'CRC32', 'CRC32C', 'SHA1', and 'SHA256'
> This can protect data against corruption even before the upload commences, 
> and enables support for buckets with S3 Object Lock enabled.
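Since the option takes a fixed set of algorithm names, a client could guard against typos before submitting a job. A hypothetical sketch of such a check (the class and helper below are illustrative, not part of S3A; whether S3A itself tolerates lower-case values is an assumption here):

```java
import java.util.Set;

public class ChecksumOptionCheck {
    // The values HADOOP-15224 documents for fs.s3a.create.checksum.algorithm.
    private static final Set<String> SUPPORTED =
        Set.of("CRC32", "CRC32C", "SHA1", "SHA256");

    // Hypothetical helper: accept the value case-insensitively and
    // ignore surrounding whitespace.
    static boolean isSupported(String value) {
        return value != null && SUPPORTED.contains(value.trim().toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(isSupported("crc32c")); // supported algorithm
        System.out.println(isSupported("MD5"));    // not in the documented set
    }
}
```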



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
