Alexey Kudinkin created HUDI-4992:
-------------------------------------

             Summary: Spark Row-writing Bulk Insert produces incorrect Bloom 
Filter metadata
                 Key: HUDI-4992
                 URL: https://issues.apache.org/jira/browse/HUDI-4992
             Project: Apache Hudi
          Issue Type: Bug
    Affects Versions: 0.12.0
            Reporter: Alexey Kudinkin
            Assignee: Alexey Kudinkin
             Fix For: 0.12.1


While troubleshooting a duplicates issue with Abhishek Modi from Notion, we found that 
the min/max record key stats are currently being persisted incorrectly into the 
Parquet footer metadata, leading to duplicate records being produced in their 
pipeline after the initial bulk insert.
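To illustrate the failure mode (a simplified sketch, not Hudi's actual code): the Bloom filter index first prunes candidate files by the min/max record key range stored in each file's Parquet footer. If the persisted max key is wrong, the file that actually contains a given key can be pruned away, so the incoming record is tagged as a new insert and written again. All file names and key values below are hypothetical.

```python
# Hypothetical sketch of key-range pruning over per-file min/max record key
# stats (as stored in Parquet footer metadata). Not Hudi's actual code.

def candidate_files(file_key_ranges, record_key):
    """Return files whose [min_key, max_key] range may contain record_key."""
    return [f for f, (min_key, max_key) in file_key_ranges.items()
            if min_key <= record_key <= max_key]

# Correct footer stats: f1 really holds keys "key_001".."key_120".
correct = {"f1.parquet": ("key_001", "key_120")}

# Incorrect stats, as in this bug: the max key is under-reported.
broken = {"f1.parquet": ("key_001", "key_099")}

print(candidate_files(correct, "key_105"))  # ["f1.parquet"]: file is checked
print(candidate_files(broken, "key_105"))   # []: key treated as a new insert,
                                            # so a duplicate record is written
```

With the broken stats, "key_105" falls outside the recorded range, the file is never consulted, and the subsequent upsert produces a duplicate instead of an update.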



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
