[
https://issues.apache.org/jira/browse/HUDI-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davis Zhang updated HUDI-8819:
------------------------------
Priority: Major (was: Blocker)
> Hudi 1.0's backward writer's UPDATE/DELETE corrupts older-versioned Hudi tables
> -------------------------------------------------------------------------------
>
> Key: HUDI-8819
> URL: https://issues.apache.org/jira/browse/HUDI-8819
> Project: Apache Hudi
> Issue Type: Sub-task
> Affects Versions: 1.0.0
> Reporter: Shawn Chang
> Assignee: Davis Zhang
> Priority: Major
> Fix For: 1.0.1
>
> Time Spent: 7h
> Remaining Estimate: 0h
>
> Reproduction:
> 1. Create a table with Hudi 0.14 + Spark 3.5.0 containing some rows.
> 2. Use Hudi 1.0.0 + Spark 3.5.3 as the writer, and set .option("hoodie.write.table.version", 6) to enable the backward writer.
> 3. After updating some rows, read with Hudi 1.0.0 + Spark 3.5.3: spark.read.format("hudi").load(tablePath)
> 4. The read results from Hudi 1.0.0 + Spark 3.5.3 contain only the updated rows.
> 5. The same happens with DELETE: if we delete some rows with Hudi 1.0.0 + Spark 3.5.3, the Spark reader sees only the delete blocks, which contain zero rows.
> 6. An older-versioned Hudi reader (Athena) still sees the correct results.
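The reproduction steps can be sketched in PySpark as below. This is a hedged sketch, not code from the ticket: the table path, the DataFrames, and the upsert operation option are placeholder assumptions; only hoodie.write.table.version=6 comes from the report.

```python
# Sketch of the reproduction, assuming PySpark with the Hudi 1.0.0 bundle on
# the classpath. TABLE_PATH and the DataFrames are hypothetical placeholders.

TABLE_PATH = "/tmp/hudi_backward_writer_test"  # hypothetical path

# Writer options for step 2: Hudi 1.0.0 writing table version 6 (the 0.14.x
# format), i.e. the "backward writer" path that triggers the corruption.
BACKWARD_WRITER_OPTS = {
    "hoodie.write.table.version": "6",              # from the ticket
    "hoodie.datasource.write.operation": "upsert",  # assumed operation
}

def update_rows(updates_df):
    """Step 2/3: upsert some rows with the backward writer enabled."""
    (updates_df.write.format("hudi")
        .options(**BACKWARD_WRITER_OPTS)
        .mode("append")
        .save(TABLE_PATH))

def read_back(spark):
    """Step 3/4: read with Hudi 1.0.0 + Spark 3.5.3. Per the ticket, this
    returns only the updated rows instead of the full table."""
    return spark.read.format("hudi").load(TABLE_PATH)
```

An older reader such as Athena pointed at the same TABLE_PATH still returns the full, correct result set, which is what isolates the bug to the 1.0.0 backward writer's log-block format rather than the table data itself.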
--
This message was sent by Atlassian Jira
(v8.20.10#820010)