lamber-ken edited a comment on issue #1552: URL: https://github.com/apache/incubator-hudi/issues/1552#issuecomment-617976874
hi @harshi2506, based on the above analysis, please try increasing the upsert parallelism (`hoodie.upsert.shuffle.parallelism`) and the number of Spark executor instances, for example:

```
export SPARK_HOME=/work/BigData/install/spark/spark-2.4.4-bin-hadoop2.7
${SPARK_HOME}/bin/spark-shell \
  --master yarn \
  --driver-memory 6G \
  --num-executors 10 \
  --executor-cores 5 \
  --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating,org.apache.spark:spark-avro_2.11:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```

```scala
import org.apache.spark.sql.functions._

val tableName = "hudi_mor_table"
val basePath = "file:///tmp/hudi_mor_tablen"

val hudiOptions = Map[String, String](
  "hoodie.upsert.shuffle.parallelism" -> "200",
  "hoodie.datasource.write.recordkey.field" -> "id",
  "hoodie.datasource.write.partitionpath.field" -> "key",
  "hoodie.table.name" -> tableName,
  "hoodie.datasource.write.precombine.field" -> "timestamp",
  "hoodie.memory.merge.max.size" -> "2004857600000"
)

val inputDF = spark.range(1, 300).
  withColumn("key", $"id").
  withColumn("data", lit("data")).
  withColumn("timestamp", current_timestamp()).
  withColumn("dt", date_format($"timestamp", "yyyy-MM-dd"))

inputDF.write.format("org.apache.hudi").
  options(hudiOptions).
  mode("Append").
  save(basePath)

spark.read.format("org.apache.hudi").load(basePath + "/*/*").show()
```