Anusha Buchireddygari created SPARK-26828:
---------------------------------------------

             Summary: Coalesce to reduce partitions before writing to Hive is not working
                 Key: SPARK-26828
                 URL: https://issues.apache.org/jira/browse/SPARK-26828
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.3.0
            Reporter: Anusha Buchireddygari

final_store.coalesce(5).write.mode("overwrite").insertInto("database.tablename", overwrite=True) is not merging partitions. I've set

    .config("spark.default.parallelism", "2000") \
    .config("spark.sql.shuffle.partitions", "2000") \

repartition does work, but the insert then takes 20-25 minutes.
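For context, a minimal sketch of the two approaches side by side (the source table name and session setup are placeholders, not from the report; this assumes a live SparkSession with Hive support, so it is not runnable standalone). coalesce is a narrow transformation that avoids a shuffle, while repartition inserts a full shuffle stage, which is consistent with the 20-25 minute cost the reporter observes:

```python
# Hedged sketch of the reported scenario, not the reporter's actual job.
# Assumes a Hive metastore and an existing target table "database.tablename".
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("coalesce-before-insert-sketch")       # placeholder app name
    .config("spark.default.parallelism", "2000")    # settings from the report
    .config("spark.sql.shuffle.partitions", "2000")
    .enableHiveSupport()
    .getOrCreate()
)

final_store = spark.table("database.source_table")  # hypothetical source table

# Reported as not merging partitions: coalesce(5) is a narrow transformation,
# so it avoids a shuffle but can also reduce the parallelism of upstream stages.
final_store.coalesce(5).write.mode("overwrite") \
    .insertInto("database.tablename", overwrite=True)

# Reported as working but slow: repartition(5) forces a full shuffle of the
# data into exactly 5 partitions before the write.
final_store.repartition(5).write.mode("overwrite") \
    .insertInto("database.tablename", overwrite=True)
```

The trade-off is inherent: repartition pays for a shuffle to guarantee the output partition count, while coalesce merely merges existing partitions without moving data between executors.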