[ 
https://issues.apache.org/jira/browse/HADOOP-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559857#comment-16559857
 ] 

Ewan Higgs commented on HADOOP-15576:
-------------------------------------

Patch 003:
- Added a test: a second complete of the same upload throws an IOException (sketched below).
- Added a test: aborting an unknown upload is a no-op.
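For concreteness, the first of those is roughly the following. This is only a 
sketch: {{MultipartUploaderFactory.get()}} and the exact 
{{initialize}}/{{putPart}}/{{complete}} signatures (in particular whether 
{{complete}} takes a map of part number to {{PartHandle}}) are my reading of 
the HDFS-13186 API and may not match what's literally in the patch.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.MultipartUploader;
import org.apache.hadoop.fs.MultipartUploaderFactory;
import org.apache.hadoop.fs.PartHandle;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.UploadHandle;
import org.apache.hadoop.test.LambdaTestUtils;

public class SecondCompleteSketchTest {

  @Test
  public void testSecondCompleteIsRejected() throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path dest = new Path("/tmp/testSecondCompleteIsRejected");
    MultipartUploader mpu = MultipartUploaderFactory.get(fs, conf);

    // Start an upload and push a single part.
    UploadHandle upload = mpu.initialize(dest);
    byte[] payload = "part 1".getBytes(StandardCharsets.UTF_8);
    PartHandle part = mpu.putPart(dest,
        new ByteArrayInputStream(payload), 1, upload, payload.length);

    Map<Integer, PartHandle> parts = new HashMap<>();
    parts.put(1, part);

    // The first complete commits the file.
    mpu.complete(dest, parts, upload);

    // A second complete of the already-finished upload must fail.
    LambdaTestUtils.intercept(IOException.class,
        () -> mpu.complete(dest, parts, upload));
  }
}
{code}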

{quote}
initiate two MPUs to the same dest. What happens? (interesting q; outcome may 
vary on FS)
{quote}
This is interesting and worth knowing, but should the contract be consistent 
across all storage systems? e.g. S3 has one semantic; Azure may have another.

For the tests where you talk about unknown uploads, what do you mean exactly? 
The UploadHandle is an opaque handle constructed by the initialize call, so if 
we construct one ourselves (such that the MPU doesn't know about it), are we 
testing malicious intent? That seems a bit odd, since the uploader is just 
wrapping calls to the underlying FS, so we'd only be exercising a subset of 
the underlying FS's behaviour.
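Concretely, the only way I can see to produce an "unknown" upload is to build 
a handle ourselves, along these lines ({{BBUploadHandle}} and the 
{{abort(Path, UploadHandle)}} signature are again assumptions from the 
HDFS-13186 API; the handle bytes are purely illustrative):

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.junit.Test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BBUploadHandle;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.MultipartUploader;
import org.apache.hadoop.fs.MultipartUploaderFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.UploadHandle;

public class AbortUnknownUploadSketchTest {

  @Test
  public void testAbortUnknownUploadIsNoOp() throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path dest = new Path("/tmp/testAbortUnknownUpload");
    MultipartUploader mpu = MultipartUploaderFactory.get(fs, conf);

    // A handle the store never issued: well-formed bytes, unknown upload id.
    UploadHandle unknown = BBUploadHandle.from(
        ByteBuffer.wrap("no-such-upload".getBytes(StandardCharsets.UTF_8)));

    // Per 003 the contract is that this simply returns rather than raising.
    mpu.abort(dest, unknown);
  }
}
{code}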

Anyway, I think 003 is the first patch worth considering committing.

> S3A  Multipart Uploader to work with S3Guard and encryption
> -----------------------------------------------------------
>
>                 Key: HADOOP-15576
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15576
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Blocker
>         Attachments: HADOOP-15576.001.patch, HADOOP-15576.002.patch, 
> HADOOP-15576.003.patch
>
>
> The new Multipart Uploader API of HDFS-13186 needs to work with S3Guard, with 
> the tests to demonstrate this
> # move from low-level calls of the S3A client to calls of WriteOperationHelper, 
> adding any new methods needed there.
> # Tests: the tests of HDFS-13713.
> # test execution, with -DS3Guard, -DAuth
> There isn't an S3A version of {{AbstractSystemMultipartUploaderTest}}, and 
> even if there were, it might not show that S3Guard was bypassed, because 
> there are no checks that listFiles/listStatus show the newly committed files.
> Similarly, because MPU requests are initiated in S3AMultipartUploader, 
> encryption settings aren't picked up. Files being uploaded this way *are not 
> being encrypted*.


