bogthe commented on pull request #2706:
URL: https://github.com/apache/hadoop/pull/2706#issuecomment-841812391
> Had merge conflicts so had to force push.
> Tests:
>
> ```
> [ERROR] Tests run: 1430, Failures: 1, Errors: 34, Skipped: 538
> ```
>
> Scale:
>
> ```
> [ERROR] Tests run: 151, Failures: 3, Errors: 21, Skipped: 29
> ```
>
> Most errors are multipart-upload related:
>
> ```
> com.amazonaws.SdkClientException: Invalid part size: part sizes for encrypted multipart uploads must be multiples of the cipher block size (16) with the exception of the last part.
> ```
>
> Simply adding 16 (the padding length) to the multipart upload block size won't work; the part sizes need to be multiples of 16, so CSE carries that restriction. Also note that the error message treats the last part as an exception, which makes me believe that multipart upload in CSE has to be sequential (or can we upload the starting parts in parallel and then upload the last part?). So, apart from the HEAD calls required while downloading/listing, this extra upload constraint could have performance impacts.
> @steveloughran

Hi @mehakmeet, regarding multipart uploads: the last part is always an exception with regular multipart uploads too! You can do parallel uploads and even upload the last part first, and it will still work (for regular multipart). My assumption is that the same functionality holds for multipart uploads with CSE enabled (except for the cipher-block-size restriction; but the minimum part size for regular multipart uploads is 5 MB = 5 * 1024 * 1024 bytes, which is still a multiple of 16 :D).
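To make the part-size arithmetic concrete, here is a minimal, hypothetical Java sketch (not code from this PR; the class and method names are made up) of how a requested part size could be rounded up to the nearest multiple of the 16-byte cipher block while respecting S3's 5 MB multipart minimum:

```java
// Hypothetical sketch, not the PR's implementation: round a configured
// multipart part size up to a multiple of the AES cipher block size so that
// CSE accepts every part (the last part is exempt from the rule).
public final class CsePartSize {

    // Cipher block size in bytes, as reported by the SdkClientException above.
    private static final long CIPHER_BLOCK_SIZE = 16;

    // Minimum part size S3 allows for regular multipart uploads: 5 MB.
    private static final long MIN_PART_SIZE = 5L * 1024 * 1024;

    /** Returns {@code requested} rounded up to a multiple of 16, never below 5 MB. */
    static long cseSafePartSize(long requested) {
        long size = Math.max(requested, MIN_PART_SIZE);
        long remainder = size % CIPHER_BLOCK_SIZE;
        return remainder == 0 ? size : size + (CIPHER_BLOCK_SIZE - remainder);
    }

    public static void main(String[] args) {
        // The 5 MB minimum is already a multiple of 16: 5242880 / 16 = 327680.
        System.out.println(cseSafePartSize(5L * 1024 * 1024));     // 5242880
        // An odd size such as 5 MB + 1 byte is rounded up to the next multiple.
        System.out.println(cseSafePartSize(5L * 1024 * 1024 + 1)); // 5242896
    }
}
```

Rounding up rather than down keeps the part size at or above both the configured value and S3's minimum; only the final part may have an arbitrary size, so it needs no adjustment.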