[ https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151246#comment-14151246 ]
Hadoop QA commented on HADOOP-10714:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12671719/HADOOP-10714-007.patch
  against trunk revision 400e1bb.

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 15 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws:

                  org.apache.hadoop.crypto.random.TestOsSecureRandom

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4822//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4822//console

This message is automatically generated.

> AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-10714
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10714
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.5.0
>            Reporter: David S. Wang
>            Assignee: Juan Yu
>            Priority: Critical
>              Labels: s3
>         Attachments: HADOOP-10714-007.patch, HADOOP-10714-1.patch, HADOOP-10714.001.patch, HADOOP-10714.002.patch, HADOOP-10714.003.patch, HADOOP-10714.004.patch, HADOOP-10714.005.patch, HADOOP-10714.006.patch
>
>
> In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() must be limited to 1000 entries or fewer. Otherwise we get a MalformedXML error similar to:
> com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: MalformedXML, AWS Error Message: The XML you provided was not well-formed or did not validate against our published schema, S3 Extended Request ID: DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
> 	at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
> 	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
> 	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
> 	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
> 	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
> 	at com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
> Note that this is mentioned in the AWS documentation:
> http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
> "The Multi-Object Delete request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3…"
> Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the problem.

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
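The fix described in the issue amounts to splitting the key list into chunks of at most 1000 before each deleteObjects() call. A minimal sketch of that batching logic, standalone and without the AWS SDK (the DeleteBatcher class, partition() helper, and MAX_ENTRIES constant are illustrative names, not the actual patch code):

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteBatcher {
    // S3 Multi-Object Delete accepts at most 1000 keys per request.
    static final int MAX_ENTRIES = 1000;

    // Split the full key list into sublists of at most maxEntries keys.
    // In S3AFileSystem, each sublist would back one
    // AmazonS3Client.deleteObjects(DeleteObjectsRequest) call, so no
    // single request ever exceeds the limit.
    static List<List<String>> partition(List<String> keys, int maxEntries) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += maxEntries) {
            batches.add(new ArrayList<>(
                keys.subList(i, Math.min(i + maxEntries, keys.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            keys.add("dir/file-" + i);
        }
        List<List<String>> batches = partition(keys, MAX_ENTRIES);
        // 2500 keys split as 1000 + 1000 + 500
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 500
    }
}
```

With this shape, a rename or delete of a directory holding tens of thousands of objects issues one deleteObjects() request per batch instead of a single oversized request that S3 rejects with MalformedXML.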