shameersss1 opened a new pull request, #7722:
URL: https://github.com/apache/hadoop/pull/7722

   
   ### Description of PR
   Pending MPUs (multipart uploads) are aborted by default for S3 Express store buckets. This leads to job failures in use cases where the directory needs to be purged before the final job commit, hence pending-MPU purging is now disabled for all bucket types.
   
   ### How was this patch tested?
   
   Tested against `us-east-1` with an S3 Express store bucket. The following tests were failing both with and without the change:
   
   ```
   ITestTreewalkProblems.testDistCp:317->lambda$testDistCp$3:318
   ITestTreewalkProblems.testDistCpNoIterator:340->lambda$testDistCpNoIterator$4:341 [Exit code of distcp -update -delete -direct
   ITestCustomSigner.testCustomSignerAndInitializer
   ITestS3AContractAnalyticsStreamVectoredRead.testVectoredReadAfterNormalRead
   ITestS3AEndpointRegion.testCentralEndpointAndNullRegionFipsWithCRUD:510 » AWSUnsupportedFeature
   ITestS3AEndpointRegion.testCentralEndpointAndNullRegionWithCRUD:501->assertOpsUsingNewFs:548 » UnknownHost
   ITestS3AEndpointRegion.testWithCrossRegionAccess:395 » UnknownHost getFileStat...
   ITestS3AEndpointRegion.testWithOutCrossRegionAccess:374->lambda$testWithOutCrossRegionAccess$2:376 » UnknownHost
   ITestConnectionTimeouts.testObjectUploadTimeouts:265 » AWSBadRequest Writing O...
   ITestS3APutIfMatchAndIfNoneMatch.testIfMatchTwoMultipartUploadsRaceConditionOneClosesFirst:551 » AWSS3IO
   ITestS3APutIfMatchAndIfNoneMatch.testIfNoneMatchConflictOnMultipartUpload:321->lambda$testIfNoneMatchConflictOnMultipartUpload$2:322->createFileWithFlags:176 » O
   ITestS3APutIfMatchAndIfNoneMatch.testIfNoneMatchMultipartUploadWithRaceCondition:349 » AWSS3IO
   ITestS3APutIfMatchAndIfNoneMatch.testIfNoneMatchTwoConcurrentMultipartUploads:372 » AWSS3
   ```
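   The failing tests above are integration tests in the hadoop-aws module; they are typically invoked as follows (a sketch based on the hadoop-aws testing guide; the thread count is illustrative, and the target endpoint/bucket must be declared in `auth-keys.xml` per that guide):

   ```
   # Run the S3A integration tests from the hadoop-aws module
   cd hadoop-tools/hadoop-aws
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8
   ```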
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

