[ https://issues.apache.org/jira/browse/SPARK-33230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17219705#comment-17219705 ]
Apache Spark commented on SPARK-33230:
--------------------------------------

User 'steveloughran' has created a pull request for this issue:
https://github.com/apache/spark/pull/30141

> FileOutputWriter to set jobConf "spark.sql.sources.writeJobUUID" to description.uuid
> ------------------------------------------------------------------------------------
>
>                 Key: SPARK-33230
>                 URL: https://issues.apache.org/jira/browse/SPARK-33230
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.7, 3.0.1
>            Reporter: Steve Loughran
>            Priority: Minor
>
> The Hadoop S3A staging committer has problems when more than one Spark SQL query is launched simultaneously, because it uses the job ID for its path in the cluster filesystem to pass commit information from tasks to the job committer.
> If two queries are launched in the same second, their job IDs conflict: the output of job 1 then includes whatever job 2 files have been written so far, and job 2 fails with a FileNotFoundException.
> Proposed:
> job conf to set {{"spark.sql.sources.writeJobUUID"}} to the value of {{WriteJobDescription.uuid}}
> That was the property name which used to serve this purpose; any committers already written against this property will pick it up without needing any changes.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
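The failure mode described above can be sketched in a few lines. This is a minimal, illustrative Python simulation, not Spark or Hadoop code: the path formats and function names are assumptions chosen to mirror the idea that a second-resolution job ID yields colliding staging paths for two concurrent queries, while a per-query UUID (as `spark.sql.sources.writeJobUUID` would carry) makes each path unique.

```python
import uuid


def staging_path_by_jobid(timestamp_s: int, job_index: int = 0) -> str:
    # Hypothetical path layout: the staging directory is derived from a
    # second-resolution timestamp plus a job counter, so two queries
    # launched in the same second land on the same path.
    return f"/user/spark/.staging/job_{timestamp_s}_{job_index:04d}"


def staging_path_by_uuid(write_job_uuid: str) -> str:
    # Hypothetical fix: key the staging directory on a per-query UUID
    # (the value a committer would read from spark.sql.sources.writeJobUUID),
    # so concurrent queries never share a path.
    return f"/user/spark/.staging/{write_job_uuid}"


if __name__ == "__main__":
    t = 1603400000  # two "queries" launched within the same second

    # Job-ID-based paths collide: job 1's commit would sweep up job 2's
    # in-progress files, and job 2 later fails to find its own.
    assert staging_path_by_jobid(t) == staging_path_by_jobid(t)

    # UUID-based paths are distinct per query.
    u1 = staging_path_by_uuid(str(uuid.uuid4()))
    u2 = staging_path_by_uuid(str(uuid.uuid4()))
    assert u1 != u2
    print("jobid paths collide; uuid paths are unique")
```

Because the proposal reuses the property name that older committers already read, those committers pick up the unique UUID without code changes.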