[ https://issues.apache.org/jira/browse/HADOOP-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16567945#comment-16567945 ]
Ewan Higgs commented on HADOOP-15576:
-------------------------------------

Thanks. The changes mostly look very good.

{quote}The rule "1+ handle must have been uploaded" is new, but it stops the MPU complete on S3 failing.{quote}

In the javadoc you say it must be >1; but it should be >=1. Or did you see this failing with only a single part being written?

{quote}S3A part handles marshall (header, len, etag); unmarshall validates header & extracts len and etag. Unit tests for this. Uses java DataInputStream, nothing fancy.{quote}

Should we bite the bullet and make this protobuf? Or is adding a dependency on protobuf and a protoc step to hadoop-aws over the top?

{quote}Big issue there: what would this mean for a distcp working this way? I'd propose: 0-byte files get treated as special, or at least there's a requirement for a 0-byte upload. Which, if supported, is something else to test for.{quote}

Agreed.

> S3A Multipart Uploader to work with S3Guard and encryption
> -----------------------------------------------------------
>
> Key: HADOOP-15576
> URL: https://issues.apache.org/jira/browse/HADOOP-15576
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2
> Reporter: Steve Loughran
> Assignee: Ewan Higgs
> Priority: Blocker
> Attachments: HADOOP-15576-005.patch, HADOOP-15576-007.patch, HADOOP-15576.001.patch, HADOOP-15576.002.patch, HADOOP-15576.003.patch, HADOOP-15576.004.patch
>
> The new Multipart Uploader API of HDFS-13186 needs to work with S3Guard, with the tests to demonstrate this:
> # move from low-level calls of the S3A client to calls of WriteOperationHelper, adding any new methods needed there.
> # Tests: the tests of HDFS-13713.
> # test execution, with -DS3Guard, -DAuth
> There isn't an S3A version of {{AbstractSystemMultipartUploaderTest}}, and even if there was, it might not show that S3Guard was bypassed, because there are no checks that listFiles/listStatus shows the newly committed files.
> Similarly, because MPU requests are initiated in S3AMultipartUploader, encryption settings aren't picked up. Files being uploaded this way *are not being encrypted*.
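On the >=1 point above: a minimal sketch of the kind of precondition I'd expect guarding complete(), assuming a plain runtime check. The class and method names here are hypothetical, not from the patch; the rule itself (at least one uploaded part handle, since a single-part MPU is legal on S3) is what the comment argues for.

```java
import java.util.Collections;
import java.util.Map;

public class PartCountCheck {

    // Hypothetical guard mirroring the rule "1+ handle must have been
    // uploaded": complete() needs >= 1 part handles, not > 1, because
    // a multipart upload completed with a single part is valid on S3.
    static void checkPartCount(Map<Integer, byte[]> handles) {
        if (handles == null || handles.isEmpty()) {
            throw new IllegalArgumentException(
                "at least one part handle must have been uploaded");
        }
    }

    public static void main(String[] args) {
        // A single-part upload must pass the check.
        checkPartCount(Collections.singletonMap(1, new byte[]{1}));
        System.out.println("single-part upload accepted");
    }
}
```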
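For reference, the (header, len, etag) marshalling described above can be sketched with plain DataOutputStream/DataInputStream along these lines. The header bytes and class name here are illustrative assumptions; the actual constants live in the patch.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PartHandleCodec {

    // Hypothetical header value; the real S3A handle header is whatever
    // the patch defines.
    private static final byte[] HEADER = "S3APart".getBytes(StandardCharsets.UTF_8);

    // Marshall (header, len, etag) — nothing fancy, just DataOutputStream.
    static byte[] marshall(long len, String etag) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.write(HEADER);
            out.writeLong(len);
            out.writeUTF(etag);
        }
        return bytes.toByteArray();
    }

    // Unmarshall: validate the header, then extract len and etag.
    // Returns {Long len, String etag} for brevity in this sketch.
    static Object[] unmarshall(byte[] data) throws IOException {
        try (DataInputStream in =
                 new DataInputStream(new ByteArrayInputStream(data))) {
            byte[] header = new byte[HEADER.length];
            in.readFully(header);
            if (!Arrays.equals(header, HEADER)) {
                throw new IOException("bad part handle header");
            }
            long len = in.readLong();
            String etag = in.readUTF();
            return new Object[] { len, etag };
        }
    }

    public static void main(String[] args) throws IOException {
        Object[] parsed = unmarshall(marshall(1024L, "\"etag-123\""));
        System.out.println(parsed[0] + " " + parsed[1]);
    }
}
```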
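If we did go the protobuf route, the handle payload would be something like the following message (a hypothetical schema, purely to weigh up the protoc-step question; nothing like this exists in the patch):

```
syntax = "proto3";

// Hypothetical wire format for an S3A part handle.
message S3APartHandle {
  int64 len = 1;
  string etag = 2;
}
```

The trade-off is exactly as raised above: schema evolution for free, versus a new dependency and a protoc step in the hadoop-aws build.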