[ https://issues.apache.org/jira/browse/SPARK-38445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517653#comment-17517653 ]

Martin Andersson edited comment on SPARK-38445 at 4/5/22 7:04 PM:
------------------------------------------------------------------

{quote}not supported unless you provide the PR for a new committer.
{quote}
The impression I've gotten after posting this (mostly from 
[https://stackoverflow.com/questions/71393013/are-hadoop-committers-used-in-spark-structured-streaming])
 is that committers are not used at all for Spark (Structured) Streaming.

The documentation here is *really* lacking. Checkpointing is barely described, 
and it's not clear whether checkpointing uses Hadoop file committers or is a 
completely separate mechanism. If you want people to contribute to an open 
source project, it should at least be visible and somewhat obvious when a 
feature is lacking, as in this case.

EDIT: I now realize you're the kind soul who replied to my question on 
stackoverflow a while back. Thank you.
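For reference, the batch-side setup being discussed looks roughly like the following. This is only a sketch of the configuration described in the Spark cloud-integration and Hadoop S3A committer docs for the "directory" committer; whether any of it takes effect for a streaming file sink is exactly the open question here.

{code}
# spark-defaults.conf sketch (property names taken from the Spark
# cloud-integration page and the Hadoop S3A committer docs; untested)

# Select the S3A "directory" (staging) committer; "magic" is the
# alternative recommended on recent Hadoop versions.
spark.hadoop.fs.s3a.committer.name  directory
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a  org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory

# Bind Spark SQL's commit protocol to Hadoop's PathOutputCommitter
# machinery (requires the spark-hadoop-cloud module on the classpath).
spark.sql.sources.commitProtocolClass  org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
spark.sql.parquet.output.committer.class  org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
{code}

As far as I understand, the streaming file sink instead tracks completed files in its checkpoint/_spark_metadata log, which would be a separate mechanism from these committers, but the docs don't spell that out.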



> Are hadoop committers used in Structured Streaming?
> ---------------------------------------------------
>
>                 Key: SPARK-38445
>                 URL: https://issues.apache.org/jira/browse/SPARK-38445
>             Project: Spark
>          Issue Type: Question
>          Components: Spark Core
>    Affects Versions: 3.2.1
>            Reporter: Martin Andersson
>            Priority: Major
>              Labels: structured-streaming
>
> At the company I work at, we're using Spark Structured Streaming to sink 
> messages from Kafka to HDFS. We're in the late stages of migrating this 
> component to sink messages to AWS S3 instead, and in connection with that we 
> hit a couple of issues regarding Hadoop committers.
> I've come to understand that the default "file" committer (documented 
> [here|https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/committers.html#Switching_to_an_S3A_Committer])
>  is unsafe to use with S3, which is why [this page in the Spark 
> documentation|https://spark.apache.org/docs/3.2.1/cloud-integration.html] 
> recommends using the "directory" (i.e. staging) committer, and later 
> versions of Hadoop also recommend the "magic" committer.
> However, it's not clear whether Spark Structured Streaming even uses 
> committers. There's no "_SUCCESS" file in the destination (as there is for 
> normal Spark jobs), and documentation regarding committers used in streaming 
> is non-existent.
> Can anyone please shed some light on this?


