steveloughran opened a new pull request, #6825:
URL: https://github.com/apache/hadoop/pull/6825

   Improve task commit resilience everywhere
   and add an option to reduce delete IO requests on
   job cleanup (relevant for ABFS and HDFS).
   
   Task Commit Resilience
   ----------------------
   
   Task manifest saving is re-attempted on failure; the number of attempts 
made is configurable with the option:
   
     mapreduce.manifest.committer.manifest.save.attempts
   
   * The default is 5.
   * The minimum is 1; lower values are ignored.
   * A retry policy adds 500ms of sleep per attempt.
   * Move from classic rename() to commitFile() to rename the file, after 
calling getFileStatus() to get its length and, where available, its etag. This 
still becomes a rename() on gcs/hdfs, but on abfs it reaches the 
ResilientCommitByRename callbacks, which report the outcome to the 
caller; this is then logged at WARN.
   * New statistic task_stage_save_summary_file distinguishes this from other 
save operations (job success/report file). It is only saved to the manifest 
on task commit retries, and provides statistics on all previous unsuccessful 
attempts to save the manifests.
   * Test changes match the codepath changes, including improvements in 
fault injection.
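
   As an illustrative sketch (not taken from the patch itself), the retry 
count above could be tuned in a job or cluster configuration file like this; 
the value 7 is purely an example:

   ```xml
   <!-- Illustrative configuration fragment only: raises the manifest
        save retry count from its default of 5. -->
   <property>
     <name>mapreduce.manifest.committer.manifest.save.attempts</name>
     <value>7</value>
   </property>
   ```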
   
   Directory size for deletion
   ---------------------------
   
   New option
   
     mapreduce.manifest.committer.cleanup.parallel.delete.base.first
   
   This makes an initial attempt to delete the base directory, falling back 
to parallel deletes only if that delete times out.
   
   This option is disabled by default; consider enabling it for abfs to reduce 
IO load. Consult the documentation for more details.
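
   A hedged sketch of how the option might be enabled, again as a plain 
configuration fragment (the property name is the one introduced above; the 
placement in core-site.xml vs. job config is an assumption):

   ```xml
   <!-- Illustrative: attempt a single base-directory delete first,
        falling back to parallel deletes on timeout. -->
   <property>
     <name>mapreduce.manifest.committer.cleanup.parallel.delete.base.first</name>
     <value>true</value>
   </property>
   ```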
   
   Success file printing
   ---------------------
   
   The command to print a JSON _SUCCESS file from this committer or any S3A 
committer can now be invoked from the mapred command:
   
     mapred successfile <path to file>
   
   Contributed by Steve Loughran
   
   
   
   ### How was this patch tested?
   
   Yetus runs; if those are happy, I will validate against abfs.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

