[ https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603599#comment-16603599 ]
Steve Loughran commented on HDFS-13713:
---------------------------------------

OK.

h3. parent.isDirectory()

* Caller gets to do any mkdirs, so they can avoid calling it on every single upload of many files; the MPU impl just makes sure that it is there.

h3. commit:

* dest path is not a dir.
* Skip checking the parent, at least on S3.

I think that balances out efficiency with making the best of the state of a store.

> Add specification of Multipart Upload API to FS specification, with contract tests
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-13713
>                 URL: https://issues.apache.org/jira/browse/HDFS-13713
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: fs, test
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Ewan Higgs
>            Priority: Blocker
>         Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch, multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file:
> * add an FS model with the notion of a function mapping (uploadID -> Upload), and the operations (list, commit, abort). The [TLA+ model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf] of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * Implementations of the contract tests for all FSs which support the new API.
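The precondition split in the comment above (initiate: verify only that the parent directory exists, since callers do their own mkdirs; commit: verify only that the destination is not a directory, skipping the parent check) could be sketched as below. This is an illustrative sketch using `java.nio.file` against a local filesystem, not the actual Hadoop `MultipartUploader` API; the names `checkInitiateUpload` and `checkCommit` are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MpuPreconditions {

  /**
   * initiate-time check: the MPU impl only verifies that the parent
   * directory exists; any mkdirs is the caller's responsibility, so
   * this stays cheap when uploading many files to one directory.
   */
  static void checkInitiateUpload(Path dest) throws IOException {
    Path parent = dest.getParent();
    if (parent == null || !Files.isDirectory(parent)) {
      throw new IOException("Parent of " + dest + " is not a directory");
    }
  }

  /**
   * commit-time check: only verify the destination is not a directory;
   * the parent check is skipped, which matters on stores like S3 where
   * probing for a "directory" is expensive.
   */
  static void checkCommit(Path dest) throws IOException {
    if (Files.isDirectory(dest)) {
      throw new IOException("Destination " + dest + " is a directory");
    }
  }
}
```

Under this split, an invalid commit (destination is an existing directory) fails fast at `checkCommit`, while the common case pays for no extra metadata probes.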