steveloughran opened a new pull request, #5859:
URL: https://github.com/apache/hadoop/pull/5859

   
This change has all of PR #5689 *except* for changing the default value of 
marker retention from delete to keep.
   
   1. Leaves the default value of fs.s3a.directory.marker.retention at "delete".
   2. No longer prints a message when an S3A FS instance is instantiated with 
any option other than delete.
   3. Updates the directory marker documentation.
   
   Switching to marker retention improves performance on any S3 bucket, as there 
are no needless marker DELETE requests, leading to a reduction in write IOPS 
and the elimination of delays waiting for the DELETE calls to finish.
   
   There are *very* significant improvements on versioned buckets, where 
tombstone markers slow down LIST operations: the more tombstones there are, the 
worse query planning gets.
   
   Having versioning enabled on production stores is the foundation of any data 
protection strategy, so this has tangible benefits in production.
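   As a sketch of how the policy is selected (the property name and the 
`keep`/`delete` values come from the S3A documentation referenced above; the 
bucket name is a placeholder), marker retention can be enabled in 
`core-site.xml`:

   ```xml
   <!-- Retain directory markers instead of deleting them when files
        are created underneath; "delete" remains the default here. -->
   <property>
     <name>fs.s3a.directory.marker.retention</name>
     <value>keep</value>
   </property>
   ```

   A per-bucket override uses the usual `fs.s3a.bucket.BUCKETNAME.` prefix, so 
the policy can be kept at "delete" globally while enabling "keep" only on 
buckets accessed solely by compatible Hadoop versions.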
   
   Marker retention is *not* compatible with older Hadoop releases; specifically:
   - Hadoop branch 2 < 2.10.2
   - Any release of Hadoop 3.0.x and Hadoop 3.1.x
   - Hadoop 3.2.0 and 3.2.1
   - Hadoop 3.3.0
   
   Incompatible releases have no problems reading data in stores 
where markers are retained, but can get confused when deleting or renaming 
directories.
   
   Contributed by Steve Loughran
   
   Change-Id: Ic9a05357a4b1b1ff6dfecf8b0f30e1eeedb2fe75
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

