[ https://issues.apache.org/jira/browse/HADOOP-17261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195722#comment-17195722 ]

Steve Loughran commented on HADOOP-17261:
-----------------------------------------

{code}
2020-09-14 20:45:22,207 [main] DEBUG shell.Command (Command.java:displayError(461)) - mv failure
java.nio.file.AccessDeniedException: rename s3a://stevel-london/src to s3a://stevel-london/dest on s3a://stevel-london/src: com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 005E717D632BA6AF; S3 Extended Request ID: o9P5GsHwIedAdCsXvpDN6JSmHi5DvV02tW234Es2eIcrGItcEMW+su2Qcy9aIEJ2VdlmtlBLUKo=), S3 Extended Request ID: o9P5GsHwIedAdCsXvpDN6JSmHi5DvV02tW234Es2eIcrGItcEMW+su2Qcy9aIEJ2VdlmtlBLUKo=:null: AccessDenied: src/file2: Access Denied
AccessDenied: src/file1: Access Denied
AccessDenied: src/file4: Access Denied
AccessDenied: src/file10: Access Denied
AccessDenied: src/file3: Access Denied

        at org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteSupport.translateDeleteException(MultiObjectDeleteSupport.java:101)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:304)
        at org.apache.hadoop.fs.s3a.s3guard.RenameTracker.convertToIOException(RenameTracker.java:267)
        at org.apache.hadoop.fs.s3a.s3guard.RenameTracker.deleteFailed(RenameTracker.java:198)
        at org.apache.hadoop.fs.s3a.impl.RenameOperation.removeSourceObjects(RenameOperation.java:695)
        at org.apache.hadoop.fs.s3a.impl.RenameOperation.completeActiveCopiesAndDeleteSources(RenameOperation.java:265)
        at org.apache.hadoop.fs.s3a.impl.RenameOperation.endOfLoopActions(RenameOperation.java:490)
        at org.apache.hadoop.fs.s3a.impl.RenameOperation.recursiveDirectoryRename(RenameOperation.java:465)
        at org.apache.hadoop.fs.s3a.impl.RenameOperation.execute(RenameOperation.java:303)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:1526)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:1376)
        at org.apache.hadoop.fs.shell.MoveCommands$Rename.processPath(MoveCommands.java:124)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:271)
        at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:266)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:237)
        at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
Caused by: com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 005E717D632BA6AF; S3 Extended Request ID: o9P5GsHwIedAdCsXvpDN6JSmHi5DvV02tW234Es2eIcrGItcEMW+su2Qcy9aIEJ2VdlmtlBLUKo=), S3 Extended Request ID: o9P5GsHwIedAdCsXvpDN6JSmHi5DvV02tW234Es2eIcrGItcEMW+su2Qcy9aIEJ2VdlmtlBLUKo=
        at com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:2262)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$11(S3AFileSystem.java:2162)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:2154)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeysS3(S3AFileSystem.java:2440)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:2541)
        at org.apache.hadoop.fs.s3a.S3AFileSystem$OperationCallbacksImpl.removeKeys(S3AFileSystem.java:1610)
        at org.apache.hadoop.fs.s3a.impl.RenameOperation.removeSourceObjects(RenameOperation.java:680)
        ... 21 more

{code}
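The failure mode can be illustrated with a minimal sketch. This is not the actual S3AFileSystem code; the class, record, and method names below are invented for illustration. The real code builds AWS SDK `DeleteObjectsRequest.KeyVersion` entries, where a null version means "delete the key" and a non-null version means "delete this specific object version" — the latter needing s3:DeleteObjectVersion rather than just s3:DeleteObject.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model only: the real logic lives in S3AFileSystem/RenameOperation
// and uses com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion.
public class VersionedDeleteSketch {

    // Mirrors KeyVersion: null version = plain delete, non-null = versioned delete.
    record KeyVersion(String key, String version) {}

    // IAM action a caller would need for each entry.
    static String requiredPermission(KeyVersion kv) {
        return kv.version() == null ? "s3:DeleteObject" : "s3:DeleteObjectVersion";
    }

    // Build the bulk-delete entries the way the report describes: if the
    // (S3Guard-supplied) file status carries a versionId, it is passed through.
    static List<KeyVersion> buildDeleteEntries(List<String> keys, List<String> versionIds) {
        List<KeyVersion> entries = new ArrayList<>();
        for (int i = 0; i < keys.size(); i++) {
            entries.add(new KeyVersion(keys.get(i), versionIds.get(i)));
        }
        return entries;
    }

    public static void main(String[] args) {
        // Unguarded listing: no versionIds, so plain deletes.
        List<KeyVersion> unguarded = buildDeleteEntries(
                List.of("src/file1", "src/file2"),
                java.util.Arrays.asList(null, null));
        // S3Guard listing: versionIds present, so versioned deletes.
        List<KeyVersion> guarded = buildDeleteEntries(
                List.of("src/file1", "src/file2"),
                List.of("v1abc", "v2def"));
        System.out.println(requiredPermission(unguarded.get(0))); // s3:DeleteObject
        System.out.println(requiredPermission(guarded.get(0)));   // s3:DeleteObjectVersion
    }
}
```

With this model, the same `mv` succeeds or fails depending only on whether the listing that fed the delete carried versionIds — which matches the report: unguarded paths are unaffected, S3Guard-backed paths hit AccessDenied.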

> s3a rename() now requires s3:deleteObjectVersion permission
> -----------------------------------------------------------
>
>                 Key: HADOOP-17261
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17261
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> With the directory marker change (HADOOP-13230) you need the 
> s3:DeleteObjectVersion permission in your role, else the operation will fail 
> during the bulk delete, *if S3Guard is in use*.
> Root cause:
> - if the fileStatus has a versionId, we pass that in to the delete KeyVersion pair
> - an unguarded listing doesn't pick up that versionId, so this is not an issue there
> - but if files in a directory were created such that S3Guard has their versionId 
> in its tables, that versionId is used in the request
> - the request then fails if the caller doesn't have the permission
> Although we document "you need s3:delete*", this is a regression: any IAM role 
> without the s3:DeleteObjectVersion permission will now have rename() fail during 
> the delete phase.
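As an interim workaround on affected roles, the policy can grant both delete actions on the bucket's objects. A minimal statement sketch (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
    "Resource": "arn:aws:s3:::EXAMPLE-BUCKET/*"
  }]
}
```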



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
