[ 
https://issues.apache.org/jira/browse/HADOOP-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16560038#comment-16560038
 ] 

Steve Loughran commented on HADOOP-15576:
-----------------------------------------

bq. This is interesting and worthwhile to know what happens, but should the 
contract be consistent across all storage systems? e.g. S3 has one semantic; 
Azure may have another.

oh, it's going to be very much store-specific. It'd just be interesting to 
know, and we could add a new policy to the XML contract options to declare what 
is expected, so at least we've got it documented. 
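To make the contract-option idea concrete, such a per-store declaration could sit in the existing per-filesystem contract XML files alongside the other fs.contract.* keys. The key name below is hypothetical; the real name would be chosen when the policy is actually added to ContractOptions:

```xml
<configuration>
  <!-- Hypothetical contract option declaring the store's multipart-upload
       semantics; the actual key name and value set are not yet defined. -->
  <property>
    <name>fs.contract.supports-concurrent-multipart-uploads</name>
    <value>true</value>
  </property>
</configuration>
```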

bq. For the tests where you talk about unknown uploads, what do you mean here? 
The UploadHandle is an opaque handle constructed by the initialize function. So 
if we construct one ourselves (s.t. the MPU doesn't know about it), then are we 
testing malicious intent?

yes. If it's Java serialization then it needs to be looked at to make sure it 
defends against malicious stuff.

As it's a ByteBuffer-to-string in the S3A one, it should defend against empty 
strings. Other than that, well, the deserialized etag will just be rejected, 
won't it? 

What may be good is for the implementations to include some string id+version 
in the response, rather than just an etag. I did a lot of this in 
org.apache.hadoop.fs.s3a.commit.files.PersistentCommitData, but that data was 
JSON shared via the FS, so more vulnerable to maliciousness, manual attempts 
to break things, or simply mismatched versions. Here? Less of an issue. Just 
reject the empty array as the obvious failure point.
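A minimal sketch of that id+version idea: prepend a short header to the handle payload so deserialization can reject empty arrays and mismatched formats up front. The class name and header string here are hypothetical, not the actual S3AMultipartUploader code.

```java
import java.nio.charset.StandardCharsets;

/**
 * Sketch of a defensive UploadHandle payload: an implementation/version
 * header prepended to the store's upload ID, so deserialization can reject
 * empty or unrecognized handles instead of passing garbage to the store.
 */
public final class HandlePayload {
  // Hypothetical header identifying the implementation and format version.
  private static final String HEADER = "S3A-MPU/1:";

  /** Serialize an upload ID into a handle payload; empty IDs are rejected. */
  static byte[] toBytes(String uploadId) {
    if (uploadId == null || uploadId.isEmpty()) {
      throw new IllegalArgumentException("empty upload ID");
    }
    return (HEADER + uploadId).getBytes(StandardCharsets.UTF_8);
  }

  /** Deserialize, rejecting empty arrays and unknown versions/formats. */
  static String fromBytes(byte[] payload) {
    if (payload == null || payload.length == 0) {
      throw new IllegalArgumentException("empty handle");
    }
    String s = new String(payload, StandardCharsets.UTF_8);
    if (!s.startsWith(HEADER)) {
      throw new IllegalArgumentException("unknown handle version/format");
    }
    return s.substring(HEADER.length());
  }
}
```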

> S3A Multipart Uploader to work with S3Guard and encryption
> -----------------------------------------------------------
>
>                 Key: HADOOP-15576
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15576
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Blocker
>         Attachments: HADOOP-15576.001.patch, HADOOP-15576.002.patch, 
> HADOOP-15576.003.patch
>
>
> The new Multipart Uploader API of HDFS-13186 needs to work with S3Guard, with 
> the tests to demonstrate this
> # move from low-level calls of the S3A client to calls of WriteOperationHelper, 
> adding any new methods needed there.
> # Tests: the tests of HDFS-13713. 
> # test execution, with -DS3Guard, -DAuth
> There isn't an S3A version of {{AbstractSystemMultipartUploaderTest}}, and 
> even if there was, it might not show that S3Guard was bypassed, because 
> there are no checks that listFiles/listStatus shows the newly committed files.
> Similarly, because MPU requests are initiated in S3AMultipartUploader, 
> encryption settings aren't picked up. Files being uploaded this way *are not 
> being encrypted*.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
