[ https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16624835#comment-16624835 ]
Steve Loughran commented on HDFS-13713:
---------------------------------------

BTW, the test run is taking 100s for me: too long.

{code}
[INFO] Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.906 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
{code}

I'm going to strip it down:
* downgrade the single-part upload test to uploading a few tens of bytes.
* remove the reverse test, leaving only reverse non-contiguous.
* have the test of abort upload fewer blocks first.

> Add specification of Multipart Upload API to FS specification, with contract tests
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-13713
>                 URL: https://issues.apache.org/jira/browse/HDFS-13713
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: fs, test
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Blocker
>        Attachments: HADOOP-13713-004.patch, HADOOP-13713-004.patch, HADOOP-13713-005.patch, HDFS-13713.001.patch, HDFS-13713.002.patch, HDFS-13713.003.patch, multipartuploader.md
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file.
> * add an FS model with the notion of a function mapping (uploadID -> Upload), and the operations (list, commit, abort). The [TLA+ model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf] of HADOOP-13786 shows how to do this.
> * contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests for all FSs which support the new API.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
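The FS model described in the issue — a function mapping (uploadID -> Upload) plus commit and abort operations — can be sketched as a small in-memory state machine. This is purely illustrative: the class and method names below are assumptions for the sketch, not the actual Hadoop MultipartUploader interface, but it captures the properties the contract tests exercise (out-of-order part upload, commit concatenating parts in part-number order, and abort invalidating the upload ID).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

/**
 * Toy in-memory model of the multipart-upload state machine:
 * a map (uploadID -> Upload), with putPart, commit, and abort.
 * Illustrative only; not the real Hadoop API.
 */
public class MultipartModel {
    /** An in-progress upload: destination path plus (part number -> bytes). */
    static final class Upload {
        final String path;
        final SortedMap<Integer, byte[]> parts = new TreeMap<>();
        Upload(String path) { this.path = path; }
    }

    private final Map<String, Upload> active = new HashMap<>();
    private final Map<String, byte[]> committed = new HashMap<>();
    private int nextId = 0;

    /** Create an empty upload for a destination path; return its ID. */
    public String initialize(String path) {
        String id = "upload-" + (nextId++);
        active.put(id, new Upload(path));
        return id;
    }

    /** Add a part; parts may arrive in any order (the "reverse" test case). */
    public void putPart(String id, int partNumber, byte[] data) {
        Upload u = active.get(id);
        if (u == null) throw new IllegalStateException("unknown upload: " + id);
        u.parts.put(partNumber, data);
    }

    /** Concatenate parts in part-number order, materialize the file, drop the upload. */
    public byte[] commit(String id) {
        Upload u = active.remove(id);
        if (u == null) throw new IllegalStateException("unknown upload: " + id);
        int len = 0;
        for (byte[] part : u.parts.values()) len += part.length;
        byte[] out = new byte[len];
        int off = 0;
        for (byte[] part : u.parts.values()) {
            System.arraycopy(part, 0, out, off, part.length);
            off += part.length;
        }
        committed.put(u.path, out);
        return out;
    }

    /** Discard the upload; any later commit of the same ID must fail. */
    public void abort(String id) {
        if (active.remove(id) == null) {
            throw new IllegalStateException("unknown upload: " + id);
        }
    }
}
```

A contract test built against a model like this checks not just the happy path but the invalid transitions too: committing an aborted or unknown upload ID should fail, which is exactly the kind of negative case the issue asks the spec to pin down.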