[ https://issues.apache.org/jira/browse/HADOOP-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344136#comment-16344136 ]
Steve Loughran commented on HADOOP-15191:
-----------------------------------------

The patch I'm working on now (bigger, passing tests) doesn't contain any attempt to recover from partially failed deletes. That's a more complex issue which needs to be implemented and tested more broadly, and is only relevant when you are mixing permissions down a tree. As S3A doesn't yet even handle delete(file) properly in that situation, this new operation isn't making things worse.

> Add Private/Unstable BulkDelete operations to supporting object stores for DistCP
> ----------------------------------------------------------------------------------
>
>                 Key: HADOOP-15191
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15191
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3, tools/distcp
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-15191-001.patch
>
> Large scale DistCP with the -delete option doesn't finish in a viable time because of the final CopyCommitter doing a one-by-one delete of all missing files. This isn't randomized (the list is sorted), and it's throttled by AWS.
> If bulk deletion of files were exposed as an API, DistCP would make 1/1000 of the REST calls, so would not get throttled.
> Proposed: add an initially private/unstable interface for stores, {{BulkDelete}}, which declares a page size and offers a {{bulkDelete(List<Path>)}} operation for the bulk deletion.
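To make the proposal concrete, here is a minimal sketch of what such an interface could look like. The {{BulkDelete}} name and the {{bulkDelete(List<Path>)}} signature are taken from the description above; {{getBulkDeletePageSize()}} is a hypothetical name for the page-size declaration, not a committed API.

{code:java}
package org.apache.hadoop.fs;

import java.io.IOException;
import java.util.List;

/**
 * Sketch of the proposed private/unstable bulk delete operation for
 * object stores. The interface name and bulkDelete(List&lt;Path&gt;)
 * signature come from the proposal; getBulkDeletePageSize() is an
 * illustrative name for the page-size declaration.
 */
public interface BulkDelete {

  /**
   * Maximum number of paths which may be passed to a single
   * bulkDelete() call; for S3 the multi-object DELETE API caps
   * a request at 1000 keys.
   */
  int getBulkDeletePageSize();

  /**
   * Delete a list of paths in as few store operations as possible.
   * The list must be no longer than the page size.
   * Recovery from partially failed deletes is out of scope here,
   * as noted in the comment above.
   * @param paths paths to delete
   * @throws IOException on a failure
   */
  void bulkDelete(List<Path> paths) throws IOException;
}
{code}

A caller such as the DistCP CopyCommitter would then split its sorted list of missing files into pages of at most {{getBulkDeletePageSize()}} entries and issue one {{bulkDelete()}} per page: one store request per page instead of one per file, which is where the 1/1000 reduction in REST calls comes from.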