Hi iceberg-dev,

I tried v2 row-level deletion by committing equality delete files after *upgradeToFormatVersion(2)*, and it worked well. I know that the Spark actions to compact delete files and data files (<https://github.com/apache/iceberg/milestone/4>, etc.) are in progress. I currently use the Java API for updates, queries, and maintenance ops. I am not using Flink at the moment, and I will definitely pick up the Spark actions once they are complete. Deletions can be scheduled in batches (e.g., weekly) to control the volume of delete files. If I start using the v2 format now, I want to get a sense of the risk of losing data at some point due to v2 spec/API changes. I realize this is not an easy question; any input is appreciated.
-- Huadong