[ 
https://issues.apache.org/jira/browse/HUDI-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Chang updated HUDI-8819:
------------------------------
    Affects Version/s: 1.0.0

> Hudi 1.0's backward writer's UPDATE/DELETE would corrupt older versioned Hudi 
> table
> -----------------------------------------------------------------------------------
>
>                 Key: HUDI-8819
>                 URL: https://issues.apache.org/jira/browse/HUDI-8819
>             Project: Apache Hudi
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>            Reporter: Shawn Chang
>            Priority: Major
>
> Reproduction:
>  # Create a table with Hudi 0.14 + Spark 3.5.0 containing some rows
>  # Use Hudi 1.0.0 + Spark 3.5.3 as the writer, and set 
> .option("hoodie.write.table.version", 6) to enable the backward writer
>  # After updating some rows, read with Hudi 1.0.0 + Spark 3.5.3: 
> spark.read.format("hudi").load(tablePath)
>  # The read results from Hudi 1.0.0 + Spark 3.5.3 contain only the updated 
> rows
>  # The same happens with DELETE: if we delete some rows with Hudi 1.0.0 + 
> Spark 3.5.3, the Spark reader sees only the delete blocks, which contain 
> zero rows
>  # An older-versioned Hudi reader (e.g. Athena) can still see the correct results 
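>
> A minimal sketch of the step-2 backward write (assumptions: a spark-shell with the Hudi 1.0.0 bundle on the classpath, a DataFrame {{updates}} holding the changed rows, and the same {{tablePath}}; the table name and operation below are placeholders, and only hoodie.write.table.version comes from this report):
> {code:scala}
> // Sketch only: "updates" and "tablePath" are assumed to exist;
> // the table name is a placeholder
> updates.write.format("hudi")
>   .option("hoodie.table.name", "test_table")
>   .option("hoodie.datasource.write.operation", "upsert")
>   .option("hoodie.write.table.version", 6) // write in the table-version-6 (0.x) format
>   .mode("append")
>   .save(tablePath)
> {code}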



--
This message was sent by Atlassian Jira
(v8.20.10#820010)