[
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831878#comment-17831878
]
ASF GitHub Bot commented on HADOOP-18679:
-----------------------------------------
steveloughran commented on PR #6494:
URL: https://github.com/apache/hadoop/pull/6494#issuecomment-2025591287
FYI i want to pull the rate limiter API of #6596 in here too; we'd have a
rate limiter in the s3a store which, if enabled, would limit the number of
deletes which can be issued against a bucket. Ideally it'd be at 3000 on S3
standard, and off for S3 Express and third-party stores, to reduce the load
this call can generate.
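To illustrate the idea being proposed, here is a minimal sketch of a per-second token-bucket limiter gating delete requests. The class and method names (`DeleteRateLimiter`, `tryAcquire`) are hypothetical and are not Hadoop's actual rate-limiting API; the 3000 figure is the cap mentioned above.

```java
/**
 * Hypothetical sketch: a simple per-second token bucket capping the number
 * of delete calls issued against a bucket. Illustrative only; the names
 * and semantics are not Hadoop's RateLimiting API.
 */
class DeleteRateLimiter {
    private final int permitsPerSecond;
    private long windowStart = System.nanoTime();
    private int used = 0;

    DeleteRateLimiter(int permitsPerSecond) {
        this.permitsPerSecond = permitsPerSecond;
    }

    /** Try to reserve {@code n} delete permits in the current one-second window. */
    synchronized boolean tryAcquire(int n) {
        long now = System.nanoTime();
        if (now - windowStart >= 1_000_000_000L) {
            // a new window has started; reset the budget
            windowStart = now;
            used = 0;
        }
        if (used + n > permitsPerSecond) {
            return false;   // over the cap: caller must back off or retry later
        }
        used += n;
        return true;
    }

    public static void main(String[] args) {
        DeleteRateLimiter limiter = new DeleteRateLimiter(3000);
        assert limiter.tryAcquire(2000);
        assert limiter.tryAcquire(1000);
        assert !limiter.tryAcquire(1);  // budget for this window is exhausted
        System.out.println("ok");
    }
}
```

A store-level limiter like this would be configured on (standard S3) or off (S3 Express, third-party stores) rather than applied unconditionally.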
> Add API for bulk/paged object deletion
> --------------------------------------
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.5
> Reporter: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> Iceberg and HBase could benefit from being able to give a list of individual
> files to delete: files which may be scattered around the bucket for better
> read performance.
> Add a new optional interface for an object store which allows a caller to
> submit a list of paths to files to delete, where the expectation is:
> * if a path is a file: delete it
> * if a path is a dir: the outcome is undefined
> For S3 that would let us build these into DeleteRequest objects and submit
> them without any probes first.
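A possible shape for the optional interface described above might look like the following sketch. The names (`BulkDelete`, `bulkDelete`, `pageSize`, `InMemoryBulkDelete`) are illustrative assumptions, not the final Hadoop API, and paths are plain strings here to keep the sketch self-contained.

```java
import java.io.IOException;
import java.util.*;

/**
 * Hypothetical sketch of the optional bulk-delete contract proposed in this
 * issue. Illustrative only; the real Hadoop API may differ.
 */
interface BulkDelete {
    /** Maximum number of paths accepted in a single page/request. */
    int pageSize();

    /**
     * Delete every path in the collection; entries are expected to be files.
     * The outcome for directories is undefined, matching the proposal.
     * @return paths which could not be deleted, mapped to an error string.
     */
    Map<String, String> bulkDelete(Collection<String> paths) throws IOException;
}

/** Toy in-memory implementation, purely for illustration. */
class InMemoryBulkDelete implements BulkDelete {
    private final Set<String> files = new HashSet<>();

    InMemoryBulkDelete(Collection<String> existing) {
        files.addAll(existing);
    }

    @Override
    public int pageSize() {
        return 1000;    // matches the S3 DeleteObjects per-request limit
    }

    @Override
    public Map<String, String> bulkDelete(Collection<String> paths) {
        Map<String, String> failures = new HashMap<>();
        for (String p : paths) {
            if (!files.remove(p)) {
                // report per-path failures rather than throwing, so one bad
                // entry does not abort the whole page (a toy design choice)
                failures.put(p, "not found");
            }
        }
        return failures;
    }
}
```

For S3, an implementation would batch the paths into `DeleteRequest` objects up to `pageSize()` keys at a time and issue them directly, with no existence probes first.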
--
This message was sent by Atlassian Jira
(v8.20.10#820010)