[ https://issues.apache.org/jira/browse/SPARK-33230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran updated SPARK-33230:
-----------------------------------
    Summary: FileOutputWriter jobs have duplicate JobIDs if launched in same second  (was: FileOutputWriter to set jobConf "spark.sql.sources.writeJobUUID" to description.uuid)

> FileOutputWriter jobs have duplicate JobIDs if launched in same second
> ----------------------------------------------------------------------
>
>                 Key: SPARK-33230
>                 URL: https://issues.apache.org/jira/browse/SPARK-33230
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.7, 3.0.1
>            Reporter: Steve Loughran
>            Priority: Major
>
> The Hadoop S3A staging committer has problems when more than one Spark SQL
> query is launched simultaneously, because it uses the job ID for its path in
> the cluster filesystem, where it passes commit information from the tasks to
> the job committer.
> If two queries are launched in the same second, their job IDs collide: the
> output of job 1 includes all of the job 2 files written so far, and job 2
> then fails with a FileNotFoundException.
> Proposed: have the job conf set {{"spark.sql.sources.writeJobUUID"}} to the
> value of {{WriteJobDescription.uuid}}.
> That is the property name which previously served this purpose; any
> committers already written against this property will pick it up without
> needing any changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
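The collision described above can be sketched as follows. This is a hypothetical illustration, not the actual Spark or Hadoop code: it assumes a job ID derived from a second-resolution timestamp (the illustrative "yyyyMMddHHmmss" format and the `timestampJobId`/`uuidJobPath` helpers are inventions for this sketch), and shows how a per-query UUID in the staging path keeps two same-second jobs apart.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.UUID;

public class JobIdCollision {

    // Assumption for illustration: the job ID embeds a timestamp with only
    // second-level resolution, so two jobs launched in the same second get
    // the same ID. (Format string is illustrative, not the exact Spark code.)
    static String timestampJobId(Date launchTime) {
        return "job_"
                + new SimpleDateFormat("yyyyMMddHHmmss", Locale.US).format(launchTime)
                + "_0000";
    }

    // With the proposed fix, a committer can read the per-query UUID from
    // "spark.sql.sources.writeJobUUID" and mix it into its staging path in
    // the cluster filesystem, so same-second jobs no longer share a path.
    // (Path layout here is hypothetical.)
    static String stagingPath(String jobId, String writeJobUUID) {
        return "/cluster-fs/staging/" + writeJobUUID + "/" + jobId;
    }

    public static void main(String[] args) {
        // Two "queries" launched within the same second: IDs collide.
        Date launch1 = new Date(1603500000000L);
        Date launch2 = new Date(1603500000500L); // 500 ms later, same second
        String id1 = timestampJobId(launch1);
        String id2 = timestampJobId(launch2);
        System.out.println("IDs collide: " + id1.equals(id2));

        // Each query carries its own WriteJobDescription.uuid, so the
        // staging paths stay distinct even though the job IDs are equal.
        String path1 = stagingPath(id1, UUID.randomUUID().toString());
        String path2 = stagingPath(id2, UUID.randomUUID().toString());
        System.out.println("Paths collide: " + path1.equals(path2));
    }
}
```

Without the UUID, both jobs would write commit data under the same timestamp-derived path, which is exactly why job 1 can pick up job 2's files and job 2 then fails when its files have been moved out from under it.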