Hi there,

As far as I know, when *spark.sql.adaptive.enabled* is set to true, the number of post-shuffle partitions should change with the map output size. But in my application there is a stage reading 900GB of shuffled files with only 200 partitions (the default value of *spark.sql.shuffle.partitions*), and I verified that the number of post-shuffle partitions is always the same as the value of *spark.sql.shuffle.partitions*. Additionally, I left *spark.sql.adaptive.shuffle.targetPostShuffleInputSize* at its default value. Have I made a mistake somewhere, or is this the expected behavior?
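For reference, here is a minimal Scala sketch of the configuration described above (the app name is illustrative; both shuffle-related settings mentioned are left at their defaults, as in my application):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("adaptive-shuffle-test") // illustrative name
  // Enable adaptive execution, which I expect to coalesce
  // post-shuffle partitions based on map output size.
  .config("spark.sql.adaptive.enabled", "true")
  // spark.sql.shuffle.partitions is left at its default (200).
  // spark.sql.adaptive.shuffle.targetPostShuffleInputSize is
  // also left at its default.
  .getOrCreate()
```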
Thanks