Hi folks,
We have been following this doc
<https://spark.apache.org/docs/3.5.1/cloud-integration.html#hadoop-s3a-committers>
for writing data from a Spark job to S3. However, it fails when writing to
dynamic partitions. Any suggestions on which configuration should be used to
avoid the cost of renames in S3?
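For reference, here is a minimal sketch of the kind of setup that doc describes,
with dynamic partition overwrite enabled. The committer choice ("magic"), the
bucket, output path, and partition column are illustrative placeholders, not
necessarily what we actually run:

```scala
import org.apache.spark.sql.SparkSession

// Sketch of the S3A committer wiring from the Spark cloud-integration doc,
// combined with dynamic partition overwrite. Paths and names are hypothetical.
val spark = SparkSession.builder()
  .appName("s3a-committer-dynamic-partitions")
  // Route S3A output through the S3A committer factory.
  .config("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a",
          "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")
  // Spark-side bindings for path output committers.
  .config("spark.sql.sources.commitProtocolClass",
          "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
  .config("spark.sql.parquet.output.committer.class",
          "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
  // Committer that avoids the rename-based commit ("magic" here is an assumption;
  // "directory" or "partitioned" could be used instead).
  .config("spark.hadoop.fs.s3a.committer.name", "magic")
  // Dynamic partition overwrite -- the mode that fails in our case.
  .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
  .getOrCreate()

// Illustrative write into a date-partitioned table (path and column are placeholders).
val df = spark.range(100).selectExpr("id", "cast(id % 10 as string) as dt")
df.write
  .mode("overwrite")
  .partitionBy("dt")
  .parquet("s3a://my-bucket/output/table")
```

It is the combination of the S3A committer protocol and
spark.sql.sources.partitionOverwriteMode=dynamic that we are unsure how to
configure without falling back to renames.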

Thanks
Nikhil
