[ https://issues.apache.org/jira/browse/HADOOP-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570240#comment-16570240 ]
Ewan Higgs commented on HADOOP-15576:
-------------------------------------

{quote}legit regression; the rejection of 0-part MPUs is breaking a mock test.{quote}

This is checked in {{testCompleteEmptyUpload}}, which you've added. AIUI, S3 requires an MPU to have at least one part uploaded, even if that part is 0 bytes.

Is there anything outstanding here? Just the question of whether we want to go full protobuf to avoid Java serialization?

> S3A Multipart Uploader to work with S3Guard and encryption
> ----------------------------------------------------------
>
>                 Key: HADOOP-15576
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15576
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Blocker
>         Attachments: HADOOP-15576-005.patch, HADOOP-15576-007.patch, HADOOP-15576-008.patch, HADOOP-15576.001.patch, HADOOP-15576.002.patch, HADOOP-15576.003.patch, HADOOP-15576.004.patch
>
> The new MultipartUploader API of HDFS-13186 needs to work with S3Guard, with tests to demonstrate this:
> # Move from low-level calls of the S3A client to calls of WriteOperationHelper, adding any new methods needed there.
> # Tests: the tests of HDFS-13713.
> # Test execution, with -DS3Guard, -DAuth.
> There isn't an S3A version of {{AbstractSystemMultipartUploaderTest}}, and even if there were, it might not show that S3Guard was bypassed, because there are no checks that listFiles/listStatus shows the newly committed files.
> Similarly, because MPU requests are initiated in S3AMultipartUploader, encryption settings aren't picked up. Files being uploaded this way *are not being encrypted*.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
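The empty-upload semantics discussed above (complete-with-zero-parts is rejected, but one zero-byte part commits an empty object) can be sketched with a minimal in-memory mock in the spirit of {{testCompleteEmptyUpload}}. This is illustration only: {{MockS3}}, {{uploadPart}}, and {{complete}} are hypothetical names, not the S3 SDK or the Hadoop MultipartUploader API.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal in-memory sketch of S3's multipart-upload completion rule:
// CompleteMultipartUpload with zero parts is an error, but a single
// zero-byte part is enough to commit an empty object.
public class EmptyMpuSketch {
    static class MockS3 {
        private final List<byte[]> parts = new ArrayList<>();

        void uploadPart(byte[] data) {
            parts.add(data);
        }

        /** Mirrors S3: completing with no parts is rejected. */
        byte[] complete() {
            if (parts.isEmpty()) {
                throw new IllegalStateException(
                    "CompleteMultipartUpload requires at least one part");
            }
            int total = parts.stream().mapToInt(p -> p.length).sum();
            byte[] object = new byte[total];
            int off = 0;
            for (byte[] p : parts) {
                System.arraycopy(p, 0, object, off, p.length);
                off += p.length;
            }
            return object;
        }
    }

    public static void main(String[] args) {
        // Zero parts: rejected, as testCompleteEmptyUpload expects.
        MockS3 noParts = new MockS3();
        boolean rejected = false;
        try {
            noParts.complete();
        } catch (IllegalStateException e) {
            rejected = true;
        }
        System.out.println("zero parts rejected: " + rejected);

        // One zero-byte part: accepted, yields an empty object.
        MockS3 onePart = new MockS3();
        onePart.uploadPart(new byte[0]);
        System.out.println("empty object length: " + onePart.complete().length);
    }
}
```

Against real S3 the equivalent flow is initiate, upload one part with an empty body, then complete; only the zero-parts case fails.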