[ https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17736500#comment-17736500 ]
ASF GitHub Bot commented on HADOOP-18706:
-----------------------------------------

cbevard1 opened a new pull request, #5771:
URL: https://github.com/apache/hadoop/pull/5771

### Description of PR

This PR improves the ability to recover partial S3A uploads.

Changed handleSyncableInvocation() to call flush() after warning that the Syncable API isn't supported. This mirrors the downgradeSyncable behavior of BufferedIOStatisticsOutputStream and RawLocalFileSystem (a short sketch of this downgrade pattern appears at the end of this message).

Changed the DiskBlock temporary file names to include the S3 key, so that partial uploads can be recovered.

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> Improve S3ABlockOutputStream recovery
> -------------------------------------
>
>                 Key: HADOOP-18706
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18706
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Chris Bevard
>            Assignee: Chris Bevard
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> If an application crashes during an S3ABlockOutputStream upload, it's possible to complete the upload if fast.upload.buffer is set to disk, by uploading the s3ablock file with putObject as the final part of the multipart upload. If the application has multiple uploads running in parallel, though, and they're on the same part number when the application fails, then there is no way to determine which file belongs to which object, and recovery of either upload is impossible.
> If the temporary file name for disk buffering included the S3 key, then every partial upload would be recoverable.
> h3. Important disclaimer
> This change does not directly add the Syncable semantics required by applications that expect {{Syncable.hsync()}} to return only after all pending data has been durably written to the destination path. S3 is not a filesystem, and this change does not make it so.
> What it does do is assist anyone trying to implement a post-crash recovery process which:
> # interrogates S3 to identify pending uploads to a specific path and get a list of uploaded blocks yet to be committed
> # scans the local fs.s3a.buffer.dir directories to identify in-progress-write blocks for the same target destination, i.e. those which were being uploaded, those queued for upload, and the single "new data being written to" block for an output stream
> # uploads all those pending blocks
> # generates a new POST to complete the multipart upload with all the blocks in the correct order
> All this patch does is ensure the buffered block filenames include the final path and block ID, to aid in identifying which blocks need to be uploaded and in what order. A sketch of such a recovery flow follows below.
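To make the four recovery steps above concrete, here is a minimal sketch of what such a post-crash tool could look like. It is not part of the patch: the method name `recoverPendingUpload` and the assumption that the caller has already mapped buffered block files to their destination key (which this patch's filenames make possible) are hypothetical, and the AWS SDK for Java v1 is used for brevity.

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class S3aUploadRecovery {

  /**
   * Completes one interrupted multipart upload. {@code leftoverBlocks} are the
   * buffered block files found under fs.s3a.buffer.dir for this destination,
   * already sorted into block-ID order (with this patch, key and block ID are
   * recoverable from the filename; the lookup itself is left to the caller).
   */
  static void recoverPendingUpload(AmazonS3 s3, String bucket, String key,
                                   List<File> leftoverBlocks) {
    // Step 1: find the pending upload(s) for this key and the parts
    // already committed to S3. (Pagination omitted for brevity.)
    MultipartUploadListing pending = s3.listMultipartUploads(
        new ListMultipartUploadsRequest(bucket).withPrefix(key));

    for (MultipartUpload upload : pending.getMultipartUploads()) {
      String uploadId = upload.getUploadId();
      PartListing committed = s3.listParts(
          new ListPartsRequest(bucket, key, uploadId));

      List<PartETag> etags = new ArrayList<>();
      int lastPart = 0;
      for (PartSummary part : committed.getParts()) {
        etags.add(new PartETag(part.getPartNumber(), part.getETag()));
        lastPart = Math.max(lastPart, part.getPartNumber());
      }

      // Steps 2-3: upload the locally buffered blocks as the remaining parts.
      for (File block : leftoverBlocks) {
        UploadPartResult result = s3.uploadPart(new UploadPartRequest()
            .withBucketName(bucket)
            .withKey(key)
            .withUploadId(uploadId)
            .withPartNumber(++lastPart)
            .withFile(block)
            .withPartSize(block.length()));
        etags.add(result.getPartETag());
      }

      // Step 4: the final POST that stitches all parts together in order.
      s3.completeMultipartUpload(
          new CompleteMultipartUploadRequest(bucket, key, uploadId, etags));
    }
  }
}
```

Error handling and truncated-listing pagination are omitted; a real tool would also have to verify that the local block ordering matches the part numbers already committed.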
> h2. Warning
> Causes HADOOP-18744 - always include the relevant fix when backporting.
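Returning to the flush() downgrade mentioned in the PR description: the following is a minimal, illustrative sketch of the "downgrade syncable" pattern, not the actual S3ABlockOutputStream code. The class name and wrapping structure are invented for illustration; only the org.apache.hadoop.fs.Syncable interface is real.

```java
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.fs.Syncable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Illustrative only. Shows the downgrade pattern: warn that hsync()/hflush()
 * cannot provide durability on this store, then fall back to flush() so
 * buffered bytes still move to the wrapped stream.
 */
class DowngradedSyncableStream extends OutputStream implements Syncable {

  private static final Logger LOG =
      LoggerFactory.getLogger(DowngradedSyncableStream.class);

  private final OutputStream inner;

  DowngradedSyncableStream(OutputStream inner) {
    this.inner = inner;
  }

  @Override
  public void hflush() throws IOException {
    downgrade("hflush()");
  }

  @Override
  public void hsync() throws IOException {
    downgrade("hsync()");
  }

  private void downgrade(String op) throws IOException {
    // The behavior the PR describes: after the warning, flush() is invoked
    // rather than doing nothing, mirroring BufferedIOStatisticsOutputStream
    // and RawLocalFileSystem. flush() is best-effort, not a durability fence.
    LOG.warn("Syncable API is not supported; downgrading {} to flush()", op);
    flush();
  }

  @Override
  public void write(int b) throws IOException {
    inner.write(b);
  }

  @Override
  public void flush() throws IOException {
    inner.flush();
  }

  @Override
  public void close() throws IOException {
    inner.close();
  }
}
```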