[ 
https://issues.apache.org/jira/browse/HUDI-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KnightChess reassigned HUDI-4348:
---------------------------------

    Assignee: KnightChess

> merge into can cause data quality issues in concurrent scenarios
> ----------------------------------------------------------------
>
>                 Key: HUDI-4348
>                 URL: https://issues.apache.org/jira/browse/HUDI-4348
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: spark-sql
>            Reporter: KnightChess
>            Assignee: KnightChess
>            Priority: Major
>
> A Hudi table holds 15 billion records and receives about 30 million 
> updates every day; after the merge, roughly 1000 records differ from 
> the corresponding Hive table.
>  
> When I set `executor-cores 1` and `spark.task.cpus 1` there is no 
> problem, but when the task parallelism within each executor exceeds 1, 
> the data quality issue appears.
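
The single-task-per-executor workaround described above corresponds to launch settings like the following. This is only a sketch of the reporter's configuration; the application jar name and other job arguments are hypothetical placeholders, not from the report:

```shell
# Workaround from the report: limit each executor to one concurrent task,
# under which the MERGE INTO data-quality issue does not reproduce.
spark-submit \
  --executor-cores 1 \
  --conf spark.task.cpus=1 \
  your-merge-into-job.jar   # hypothetical application jar
```

With these settings each executor runs at most one task at a time; raising either executor cores or task parallelism above one is what triggers the reported divergence.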



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
