[ https://issues.apache.org/jira/browse/SPARK-35279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17339227#comment-17339227 ]
Apache Spark commented on SPARK-35279:
--------------------------------------

User 'SaurabhChawla100' has created a pull request for this issue:
https://github.com/apache/spark/pull/32437

> _SUCCESS file not written when using partitionOverwriteMode=dynamic
> -------------------------------------------------------------------
>
>                 Key: SPARK-35279
>                 URL: https://issues.apache.org/jira/browse/SPARK-35279
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.1.1
>            Reporter: Goran Kljajic
>            Priority: Minor
>
> Steps to reproduce:
>
> {code:java}
> case class A(a: String, b: String)
> val df = List(A("a", "b")).toDF
> spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
> val writer = df.write.mode(SaveMode.Overwrite).partitionBy("a")
> writer.parquet("s3a://some_bucket/test/")
> {code}
> When spark.sql.sources.partitionOverwriteMode is set to dynamic, the
> output directory is written without a _SUCCESS marker file.
> (I have checked Hadoop versions from 3.1.4 to 3.2.2 and they all behave
> the same, so the issue is with Spark, not Hadoop.)
> This works correctly in Spark 3.0.2.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
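Downstream jobs often gate on the _SUCCESS marker, which is why its absence matters. A minimal sketch of such a check, with no Spark dependency: a local temp directory stands in (hypothetically) for the s3a://some_bucket/test/ output path in the report above.

```scala
// Sketch: verify whether a job's output directory contains the _SUCCESS
// marker that the Hadoop commit protocol normally writes on commitJob.
// The temp directory below is a hypothetical stand-in for the real
// s3a:// output path; no Spark or Hadoop libraries are required.
import java.nio.file.{Files, Path}

object SuccessMarkerCheck {
  // True if the output directory contains a _SUCCESS marker file.
  def hasSuccessMarker(outputDir: Path): Boolean =
    Files.exists(outputDir.resolve("_SUCCESS"))

  def main(args: Array[String]): Unit = {
    val out = Files.createTempDirectory("spark-35279-demo")

    // Freshly written output with no marker, as in the reported bug.
    println(s"before commit: ${hasSuccessMarker(out)}")

    // Simulate a committer that did write the marker.
    Files.createFile(out.resolve("_SUCCESS"))
    println(s"after commit:  ${hasSuccessMarker(out)}")
  }
}
```

On a cluster path the same check would go through the Hadoop FileSystem API rather than java.nio, but the logic is identical: consumers treat the directory as complete only when _SUCCESS is present, which is exactly what the dynamic overwrite mode above fails to provide.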