RussellSpitzer commented on a change in pull request #3432:
URL: https://github.com/apache/iceberg/pull/3432#discussion_r741490740
##########
File path: site/docs/cow-and-mor.md
##########
@@ -0,0 +1,195 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements. See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License. You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Copy-on-Write and Merge-on-Read
+
+This page explains the concepts of copy-on-write and merge-on-read in the context of Iceberg, to give readers more clarity around the design of Iceberg's table spec.
+
+## Introduction
+
+In Iceberg, copy-on-write and merge-on-read are different ways to handle row-level update and delete operations. Here are their definitions:
+
+- **copy-on-write (CoW)**: an update/delete directly rewrites the affected data files in their entirety.
+- **merge-on-read (MoR)**: update/delete information is encoded in the form of delete files. The table reader applies all delete information at read time, and a compaction process asynchronously merges delete files into data files.
+
+CoW is more efficient for reading data, while MoR is more efficient for writing data.
+Users can use **both** CoW and MoR against the same Iceberg table, depending on the situation.
+A common example is a time-partitioned table whose newer, more frequently updated partitions are maintained with MoR through a CDC streaming pipeline,
+while older partitions are maintained with CoW through less frequent GDPR updates from batch ETL jobs.
+
+## Copy-on-write
+
+As the definition states, given a user's update/delete request, the CoW write process searches for all affected data files and rewrites them.
+Spark supports CoW `DELETE`, `UPDATE` and `MERGE` operations through Spark extensions. More details can be found on the [Spark Writes](../spark-writes) page.
+
+## Merge-on-read
+
+The next few sections provide more details about the Iceberg MoR design.
+
+### Row-Level Delete File Spec
+
+As documented on the [Spec](../spec/#row-level-deletes) page, Iceberg supports 2 different types of row-level delete files: **position deletes** and **equality deletes**.
+If you are unfamiliar with these concepts, please read the related sections of the spec for more information before proceeding.
+
+Also note that because row-level delete files are valid Iceberg data files, each file must define the partition it belongs to.
+If the file belongs to `Unpartitioned` (the partition spec has no partition field), the delete file is called a **global delete**.
+Otherwise, it is called a **partition delete**.
+
+### MoR Update as Delete + Insert
+
+In Iceberg, an update is modeled as a delete plus an insert within the same transaction, so there is no concept of an "update file".
+During a MoR write transaction, new data files and delete files are committed with the same sequence number.

Review comment:
The concept of sequence number is used here for the first time in this doc.
Probably needs an explanation. For this section, I would elaborate at the beginning with something like:

> An update in merge-on-read mode consists of two sets of files: deletes and inserts. Delete files are created to mark all existing data rows that have been updated as deleted in their original data files. Insert files are normal Iceberg data files containing the new, updated rows. On read, the delete files cause the original records to be skipped, so only the new, updated rows appear. When a delete file is created, it is associated with a "sequence number" that increases with every operation. A delete file can only apply to data files written with an earlier sequence number than its own, which prevents a delete file from modifying future data.
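To make the sequence-number rule concrete, the doc could also include a small illustration. Here is a minimal sketch under simplified assumptions: the `DataFile`/`DeleteFile` classes and the `applies_to` helper are hypothetical rather than Iceberg's actual API, and the strictly-earlier comparison models the rule as stated above (the spec additionally allows position deletes to apply at the same sequence number):

```python
# Hypothetical sketch, not Iceberg's real API: decide which data files a
# delete file applies to, using only sequence numbers.
from dataclasses import dataclass


@dataclass
class DataFile:
    path: str
    sequence_number: int


@dataclass
class DeleteFile:
    path: str
    sequence_number: int


def applies_to(delete: DeleteFile, data: DataFile) -> bool:
    # A delete file only affects data files committed with a strictly
    # earlier sequence number, so it can never modify future data.
    return data.sequence_number < delete.sequence_number


# An update commits its delete file and its new data file with the SAME
# sequence number, so the delete cannot suppress the freshly written rows.
old_data = DataFile("data-00001.parquet", sequence_number=1)
update_delete = DeleteFile("delete-00001.parquet", sequence_number=2)
update_insert = DataFile("data-00002.parquet", sequence_number=2)

assert applies_to(update_delete, old_data)            # old rows are masked
assert not applies_to(update_delete, update_insert)   # new rows survive
```

The last assertion is also why the "committed with the same sequence number" detail in this section matters: an update's own delete files cannot suppress the rows the update just inserted.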
