[ 
https://issues.apache.org/jira/browse/SPARK-23977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17437259#comment-17437259
 ] 

Steve Loughran commented on SPARK-23977:
----------------------------------------

[~gumartinm] can I draw your attention to Apache Iceberg?

Meanwhile, MAPREDUCE-7341 adds a high performance committer targeting abfs and 
gcs; all tree scanning is done in task commit, which is atomic, and job commit 
is aggressively parallelised and optimized for stores whose listStatusIterator 
calls are incremental with prefetching: we can start processing the first page 
of task manifests found in a listing while the second page is still being 
retrieved. Also in there: rate limiting and IOStatistics collection.
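
As a rough sketch of the incremental-listing idea (illustrative only, not the 
MAPREDUCE-7341 code; the ".manifest" filename filter is made up for the example):

{code:scala}
// Sketch: process listing entries as each page arrives, rather than
// materialising the whole directory listing before doing any work.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, Path}

def forEachTaskManifest(dir: Path, conf: Configuration)(handle: FileStatus => Unit): Unit = {
  val fs = dir.getFileSystem(conf)
  // listStatusIterator() fetches listing pages lazily; on stores with
  // prefetching the next page is retrieved while this one is processed.
  val it = fs.listStatusIterator(dir)
  while (it.hasNext) {
    val status = it.next()
    // ".manifest" suffix is illustrative, not the real naming scheme
    if (status.isFile && status.getPath.getName.endsWith(".manifest")) {
      handle(status)   // e.g. load and commit the files named in this manifest
    }
  }
}
{code}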

In HADOOP-17833 I will pick up some of that work, including incremental loading 
and rate limiting. If we can keep reads and writes below the S3 IOPS limits, we 
should be able to avoid situations where we have to start sleeping and 
retrying.
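
A minimal sketch of the rate limiting idea (Guava's RateLimiter here is purely 
for illustration, not necessarily what the Hadoop code uses; completeUpload is 
a hypothetical helper):

{code:scala}
// Sketch: cap the rate of store requests issued during commit so the client
// stays below the store's IOPS limits instead of hitting 503s and retrying.
import com.google.common.util.concurrent.RateLimiter

class ThrottledStoreOps(requestsPerSecond: Double) {
  private val limiter = RateLimiter.create(requestsPerSecond)

  // Wrap every store operation; acquire() blocks until a permit is available.
  def throttled[T](op: => T): T = {
    limiter.acquire()
    op
  }
}

// Hypothetical usage: throttle the POSTs that complete pending uploads.
// val ops = new ThrottledStoreOps(100.0)
// pendingUploads.foreach(u => ops.throttled(completeUpload(u)))
{code}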

HADOOP-17981 is my homework this week: emergency work to deal with a rare but 
current failure in abfs under heavy load.


> Add commit protocol binding to Hadoop 3.1 PathOutputCommitter mechanism
> -----------------------------------------------------------------------
>
>                 Key: SPARK-23977
>                 URL: https://issues.apache.org/jira/browse/SPARK-23977
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>             Fix For: 3.0.0
>
>
> Hadoop 3.1 adds a mechanism for job-specific and store-specific committers 
> (MAPREDUCE-6823, MAPREDUCE-6956), and one key implementation: the S3A 
> committers of HADOOP-13786.
> These committers deliver high-performance output of MR and Spark jobs to S3, 
> and offer the key semantics which Spark depends on: no visible output until 
> job commit, and a failure of a task at any stage, including partway through 
> task commit, can be handled by executing and committing another task attempt. 
> In contrast, the FileOutputFormat commit algorithms on S3 have issues:
> * Awful performance because files are copied by rename
> * FileOutputFormat v1: weak task commit failure recovery semantics, as the 
> v1 expectation that "directory renames are atomic" doesn't hold.
> * S3 metadata eventual consistency can cause rename to miss files or fail 
> entirely (SPARK-15849)
> Note also that the FileOutputFormat "v2" commit algorithm doesn't offer any 
> of these commit semantics w.r.t. observability of or recovery from task 
> commit failure, on any filesystem.
> The S3A committers address these by way of uploading all data to the 
> destination through multipart uploads, uploads which are only completed in 
> job commit.
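> As a purely illustrative sketch of why this gives the desired semantics 
> (AWS SDK for Java v1 classes, not the committer code itself; bucket, key and 
> file paths are hypothetical): the data is uploaded in parts but remains 
> invisible until the final complete call, which the committers defer to job 
> commit.
> {code:scala}
> // Illustration only: a multipart upload is invisible in the bucket until
> // completeMultipartUpload() is called, so task output can be uploaded
> // eagerly while the completion is deferred to job commit.
> import java.io.File
> import java.util.Collections
> import com.amazonaws.services.s3.AmazonS3ClientBuilder
> import com.amazonaws.services.s3.model._
>
> val s3 = AmazonS3ClientBuilder.defaultClient()
> val (bucket, key) = ("example-bucket", "output/part-00000")   // hypothetical
>
> val uploadId = s3.initiateMultipartUpload(
>   new InitiateMultipartUploadRequest(bucket, key)).getUploadId
>
> val data = new File("/tmp/part-00000")                        // hypothetical
> val etag = s3.uploadPart(new UploadPartRequest()
>   .withBucketName(bucket).withKey(key).withUploadId(uploadId)
>   .withPartNumber(1).withFile(data)
>   .withPartSize(data.length()).withLastPart(true)).getPartETag
>
> // Nothing is visible at s3://example-bucket/output/part-00000 yet.
> // Only this call, made in job commit, makes the object appear:
> s3.completeMultipartUpload(
>   new CompleteMultipartUploadRequest(bucket, key, uploadId,
>     Collections.singletonList(etag)))
> {code}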
> The new {{PathOutputCommitter}} factory mechanism allows applications to work 
> with the S3A committers and any others, by adding a plugin mechanism into the 
> MRv2 FileOutputFormat class, where job config and filesystem configuration 
> options can dynamically choose the output committer.
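> A minimal sketch of what that selection looks like from the job configuration 
> side (property and class names as documented for the Hadoop 3.1+ S3A 
> committers; treat them as illustrative):
> {code:scala}
> import org.apache.hadoop.conf.Configuration
>
> val conf = new Configuration()
> // Tell FileOutputFormat which PathOutputCommitterFactory to use for s3a:// paths
> conf.set("mapreduce.outputcommitter.factory.scheme.s3a",
>   "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")
> // Pick one of the S3A committers: "directory", "partitioned" or "magic"
> conf.set("fs.s3a.committer.name", "directory")
> {code}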
> Spark can use these with some binding classes to 
> # Add a subclass of {{HadoopMapReduceCommitProtocol}} which uses the MRv2 
> classes and {{PathOutputCommitterFactory}} to create the committers.
> # Add a {{BindingParquetOutputCommitter extends ParquetOutputCommitter}}
> to wire up Parquet output even when code requires the committer to be a 
> subclass of {{ParquetOutputCommitter}}
> This patch builds on SPARK-23807 for setting up the dependencies.
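> A hedged sketch of how a Spark job might wire the binding classes in, assuming 
> the cloud-integration module and its class/property names end up as below 
> (adjust for the actual release):
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> val spark = SparkSession.builder()
>   .appName("s3a-committer-demo")
>   // route committer creation through the PathOutputCommitterFactory mechanism
>   .config("spark.sql.sources.commitProtocolClass",
>     "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
>   // keep Parquet happy: a ParquetOutputCommitter subclass that delegates
>   .config("spark.sql.parquet.output.committer.class",
>     "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
>   .getOrCreate()
>
> // Writes to s3a:// now go through the committer chosen by the factory config.
> spark.range(1000).write.parquet("s3a://example-bucket/output")  // hypothetical path
> {code}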



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
