Hi all,

We are running Spark version 3.1.1 in our production environment. We have 
noticed that, after executing 'insert overwrite ... select', the resulting 
data is sometimes inconsistent, with some rows duplicated and others lost. 
The issue is intermittent and seems to be more likely to occur on large 
tables with tens of millions of rows.
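
For reference, here is a minimal sketch of the kind of statement we run (the 
table and column names are made up for illustration):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("insert-overwrite-example")
      .enableHiveSupport()
      .getOrCreate()

    // Rebuild the target table from a shuffled aggregation over a large
    // source table (tens of millions of rows in our case).
    spark.sql(
      """
        |INSERT OVERWRITE TABLE dw.target_table
        |SELECT user_id, count(*) AS cnt
        |FROM dw.source_table
        |GROUP BY user_id
      """.stripMargin)
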
We compared the execution plans for two runs of the same SQL and found them 
identical. In the run that produced correct results, the amount of data 
written and read during the shuffle stage matched. In the run where the 
problem occurred, the shuffle write and shuffle read sizes differed. Please 
see the attached screenshots of the shuffle write/read sizes.


Normal SQL: (see attached screenshot)
SQL with issues: (see attached screenshot)
Could this problem be caused by a bug in version 3.1.1, specifically 
SPARK-34534 ('New protocol FetchShuffleBlocks in OneForOneBlockFetcher lead 
to data loss or correctness'), or by something else? What could be the root 
cause of this problem?
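
One diagnostic experiment we are considering (just a sketch, assuming 
spark.shuffle.useOldFetchProtocol behaves as documented) is to re-run the 
same job with the legacy OpenBlocks fetch protocol, so that the 
FetchShuffleBlocks path described in SPARK-34534 is not exercised, and then 
compare the shuffle write/read sizes again:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("insert-overwrite-old-fetch-protocol")
      // Assumption: this core conf must be set before the application starts
      // (e.g. here or via --conf at submit time); it is not a runtime SQL conf.
      .config("spark.shuffle.useOldFetchProtocol", "true")
      .enableHiveSupport()
      .getOrCreate()

    // Re-run the same INSERT OVERWRITE ... SELECT statement and compare the
    // shuffle write/read sizes in the Spark UI with the earlier runs.

If the mismatch disappears with the old protocol, that would at least point 
at the new fetch path rather than the query itself.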
Thanks.


FengZhou


