danny0405 commented on code in PR #11514:
URL: https://github.com/apache/hudi/pull/11514#discussion_r1675305938


##########
rfc/rfc-78/rfc-78.md:
##########
@@ -0,0 +1,339 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+# RFC-78: Bridge release for 1.x
+
+## Proposers
+
+- @nsivabalan
+- @vbalaji
+
+## Approvers
+ - @yihua
+ - @codope
+
+## Status
+
+JIRA: https://issues.apache.org/jira/browse/HUDI-7882
+
+> Please keep the status updated in `rfc/README.md`.
+
+## Abstract
+
+[Hudi 1.x](https://github.com/apache/hudi/blob/ae1ee05ab8c2bd732e57bee11c8748926b05ec4b/rfc/rfc-69/rfc-69.md) is a powerful
+re-imagination of the transactional database layer in Hudi to power continued innovation across the community in the coming
+years. It introduces many differentiating features for Apache Hudi. Feel free to check out the
+[release page](https://hudi.apache.org/releases/release-1.0.0-beta1) for more info. We had beta1 and beta2 releases, which were meant for
+interested developers/users to try out some of the advanced features. But as we work towards 1.0 GA, we are proposing
+a bridge release (0.16.0) for a smoother migration for existing Hudi users.
+
+## Objectives
+The goal is to give users a smooth migration experience from 0.x to 1.0. We plan to have a 0.16.0 bridge release, asking everyone to first migrate to 0.16.0 before they can upgrade to 1.x.
+
+A typical organization might have a medallion architecture deployed to run 1000s of Hudi pipelines, i.e. bronze, silver and gold layers.
+For this layout of pipelines, here is how a typical migration might look (without a bridge release):
+
+a. Existing pipelines are on 0.15.x (bronze, silver, gold).
+b. Migrate gold pipelines to 1.x.
+- We must strictly migrate only gold to 1.x first, because a 0.15.0 reader may not be able to read 1.x Hudi tables. So, if we migrate any of the silver pipelines to 1.x before migrating the entire gold layer, we might end up in a situation
+where a 0.15.0 reader (gold) ends up reading a 1.x table (silver). This might lead to failures. So, we have to follow a certain order in which we migrate pipelines.
+c. Once all of gold is migrated to 1.x, we can move all of silver to 1.x.
+d. Once all of gold and silver pipelines are migrated to 1.x, we can finally move all of bronze to 1.x.
+
+In the end, we would have migrated all existing Hudi pipelines from 0.15.0 to 1.x.
+But as you can see, the migration has to proceed in a coordinated order, and in a very large organization we may not have good control over downstream consumers.
+Hence, coordinating and orchestrating the entire migration workflow might be challenging.
+
+To ease the migration workflow for 1.x, we are introducing 0.16.0 as a bridge release.
+
+Here are the objectives with this bridge release:
+
+- A 1.x reader should be able to read 0.14.x to 0.16.x tables without any loss in functionality and with no data inconsistencies.
+- 0.16.x should be able to read 1.x tables with some limitations. For features ported over from 0.x, no loss in functionality should be guaranteed.
+But for new features that were introduced in 1.x, we may not be able to support all of them. We will call out which new features may not work with the 0.16.x reader.
+- In this case, we explicitly request users not to turn on these features until all readers are completely migrated to 1.x, so as not to break any readers.
+
+Connecting back to our example above, let's see how the migration might look for an existing user.
+
+a. Existing pipelines are on 0.15.x (bronze, silver, gold).
+b. Migrate pipelines to 0.16.0 (in any order; we do not have any constraints around which pipeline should be migrated first).
+c. Ensure all pipelines are on 0.16.0 (both readers and writers).
+d. Start migrating pipelines in a rolling fashion to 1.x. At this juncture, we could have a few pipelines on 1.x and a few pipelines on 0.16.0, but since 0.16.x
+can read 1.x tables, we should be ok here. Just do not enable new features like non-blocking concurrency control yet.
+e. Migrate all of the 0.16.0 pipelines to the 1.x version.
+f. Once all readers and writers are on 1.x, we are good to enable any new features (like NBCC) on 1.x tables.
+
+As you can see, the company/org-wide coordination to migrate gold before migrating silver or bronze is relaxed with the bridge release. The only requirement to keep a tab on
+is to ensure all pipelines are migrated completely to 0.16.x before starting to migrate to 1.x.
+
+So, here are the objectives of this RFC with the bridge release:
+- A 1.x reader should be able to read 0.14.x to 0.16.x tables without any loss in functionality and with no data inconsistencies.
+- 0.16.x should be able to read 1.x tables with some limitations. For features ported over from 0.x, no loss in functionality should be guaranteed.
+  But for new features that are being introduced in 1.x, we may not be able to support all of them. We will call out which new features may not work with the 0.16.x reader.
+- Document steps for a rolling upgrade from 0.16.x to 1.x with minimal downtime.
+- Document the downgrade from 1.x to 0.16.x, with callouts on any functionality loss.
+
+### Considerations when choosing the migration strategy
+- While the migration is happening, we want to allow readers to continue reading data. This means we cannot employ a stop-the-world strategy while migrating.
+None of the actions we perform as part of the table upgrade should have the side effect of breaking snapshot isolation for readers.
+- Also, users should have migrated to 0.16.x before upgrading to 1.x. We do not want to add read support in 1.x for very old versions of Hudi (e.g. 0.7.0).
+- So, in an effort to bring everyone to the latest Hudi versions, the 1.x reader will have full read capabilities for 0.16.x, but for older Hudi versions, the 1.x reader may not have full read support.
+The recommended guideline is to upgrade all readers and writers to 0.16.x, and then slowly start upgrading to 1.x (readers followed by writers).
+
+Before we dive in further, let's understand the format changes:
+
+## Format changes
+### Table properties
+- Payload class ➝ payload type (see the illustrative snippet after this list).
+- `hoodie.record.merge.mode` is introduced in 1.x.
+- New metadata partitions could be added (optionally enabled).
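+
+As an illustration of the payload class ➝ merge mode direction, a table config diff might look like the following. Note this is a hypothetical sketch: `hoodie.record.merge.mode` is named in this RFC, but the other keys/values shown are examples and may not match the actual table config.
+
+```
+# 0.x hoodie.properties (illustrative)
+hoodie.compaction.payload.class=org.apache.hudi.common.model.DefaultHoodieRecordPayload
+
+# 1.x hoodie.properties (illustrative)
+hoodie.record.merge.mode=EVENT_TIME_ORDERING
+```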
+
+### MDT changes
+- New MDT partitions are available in 1.x, and the MDT schema is upgraded.
+- The RLI schema is upgraded to hold row positions.
+
+### Timeline
+- [Storage changes] Completed write commits have completion times in the file name (timeline commit files); see the illustrative file names after this list.
+- [Storage changes] Completed and inflight write commits are in Avro format; they were JSON in 0.x.
+- We are switching the action type for pending clustering from "replacecommit" to "cluster".
+- [Storage changes] Archived timeline ➝ LSM timeline. There is no separate archived timeline in 1.x.
+- [In-memory changes] HoodieInstant changes due to the presence of a completion time for completed HoodieInstants.
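+
+For illustration, completed commit files on the timeline change roughly as follows (hypothetical timestamps; the exact naming follows the 1.x timeline format):
+
+```
+0.x:  20240101120000.commit                    (JSON content; request time only)
+1.x:  20240101120000_20240101120130.commit     (Avro content; request time + completion time)
+```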
+
+### Filegroup/FileSlice changes
+- Log file names contain the delta commit time instead of the base instant time.
+- Log appends are disabled in 1.x. In other words, each log block is always appended to a new log file.
+- The file slice determination logic for log files changed. In 0.x, we have the base instant time in log file names, so the mapping is straightforward.
+In 1.x, we find the completion time for a log file and, within the given HoodieFileGroup, pick the base instant time (parsed from the base files)
+with the highest value that is less than the completion time of the log file of interest (see the sketch after this list).
+- Log file ordering within a file slice: in 0.x, we use base instant time ➝ log file version ➝ write token to order the different log files; in 1.x, we will be using the completion time to order them.
+- In 0.x, a rollback appends a new rollback block (a new log file), while in 1.x, a rollback will remove the partially failed log files.
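+
+A minimal, self-contained Java sketch of the 1.x-style slice assignment described above. The class and method names are hypothetical simplifications for illustration, not actual Hudi internals:
+
+```java
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.TreeSet;
+
+public class FileSliceAssignmentSketch {
+
+  // 1.x-style assignment: pick the greatest base instant time that is
+  // strictly less than the log file's completion time (null if none exists).
+  static String assignLogFileToSlice(NavigableSet<String> baseInstantTimes,
+                                     String logCompletionTime) {
+    // Instant times are fixed-width timestamps, so string order == time order;
+    // lower() returns the greatest element strictly less than the argument.
+    return baseInstantTimes.lower(logCompletionTime);
+  }
+
+  public static void main(String[] args) {
+    // Hypothetical base instants of one file group (parsed from base file names).
+    NavigableSet<String> baseInstants = new TreeSet<>(List.of("0010", "0020", "0030"));
+
+    // A log file completed at 0025 lands in the slice of base instant 0020
+    // under 1.x slicing, even if it started before 0020 was written.
+    System.out.println(assignLogFileToSlice(baseInstants, "0025")); // -> 0020
+  }
+}
+```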
+
+### Log format changes
+- We have added a new header type, IS_PARTIAL, in 1.x.
+
+## Changes to be ported over to 0.16.x to support reading 1.x tables
+
+### What will be supported
+- For features introduced in 0.x, and tables written in 1.x, the 0.16.0 reader should be able to provide consistent reads without any breakage.
+
+### What will not be supported
+- A 0.16 writer cannot write to a table that has been upgraded to or created with 1.x without first downgrading the table to 0.16. This might be obvious, but we are calling it out nevertheless.
+- For new features introduced in 1.x, we may or may not have full support in the 0.16.x reader.
+
+| 1.x features written by a 1.x writer | 1.x reader | 0.16.x reader |
+|--------------------------------------|------------|---------------|
+| Deletion vector | Supported | Falls back to key-based merges, giving up on the perf optimization |
+| Partial merges/updates | Supported | Fails with a clear error message stating that partial merges are not supported |
+| Functional indexes | Supported | Not supported; the perf optimization may not kick in |
+| Secondary indexes | Supported | Not supported; the perf optimization may not kick in |
+| NBCC or completion-time-based log file ordering in a file slice | Supported | Not supported; log file version and write token based ordering will be used |
+
+
+### Timeline
+- Commit instants with completion times should be readable. The HoodieInstant parsing logic that handles completion times should be ported over (see the parsing sketch after this list).
+- Commit metadata in Avro instead of JSON should be readable.
+   - More details on this under the Implementation section.
+- Pending clustering commits using the "cluster" action should be readable by the 0.16.0 reader.
+- HoodieDefaultTimeline should be able to support both the 0.x timeline and the 1.x timeline.
+   - More details on this under the Implementation section.
+- Should we port the LSM timeline reader to 0.16.x as well? Our goal here is to support snapshot, time travel and incremental queries for 1.x tables. Strictly speaking, we can only serve all three of these query types over uncleaned
+instants, i.e. over instant ranges where the cleaner has not been executed. So, if we guarantee that the 1.x active timeline will definitely contain all the uncleaned instants, we could get away without even porting the
+LSM timeline reader logic to 0.16.0.
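+
+A minimal Java sketch of what the ported file-name parsing could look like, assuming a completed instant file is named either `<requestTime>.<action>` (0.x style) or `<requestTime>_<completionTime>.<action>` (1.x style, per the format changes above). The type and method names are hypothetical:
+
+```java
+import java.util.Optional;
+
+// Hypothetical parsed view of a completed timeline file name.
+record ParsedInstant(String requestTime, Optional<String> completionTime, String action) {
+
+  // Accepts both 0.x ("<ts>.commit") and 1.x ("<ts>_<completionTs>.commit") shapes.
+  static ParsedInstant parse(String fileName) {
+    int dot = fileName.indexOf('.');
+    String timePart = fileName.substring(0, dot);
+    String action = fileName.substring(dot + 1);
+    int sep = timePart.indexOf('_');
+    if (sep < 0) {
+      // 0.x style: no completion time in the file name.
+      return new ParsedInstant(timePart, Optional.empty(), action);
+    }
+    return new ParsedInstant(timePart.substring(0, sep),
+        Optional.of(timePart.substring(sep + 1)), action);
+  }
+
+  public static void main(String[] args) {
+    System.out.println(parse("20240101120000.commit"));                // 0.x style
+    System.out.println(parse("20240101120000_20240101120130.commit")); // 1.x style
+  }
+}
+```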
+
+#### Completion-time-based reads in FileGroup/FileSlice grouping and log block reads
+- As of this write-up, we are not supporting completion-time-based log file ordering in 0.16. We are punting on it, as called out earlier.
+- What's the impact without this support?
+    - For users in single writer mode or with OCC in 1.x, the 0.16.0 reader not supporting completion-time-based reads (log file ordering) should still be ok. The 1.x reader and the 0.16.0 reader should have the same behavior.
+    - Only if someone has NBCC writes, with log files written in an order different from the log file versions, might the 0.16.0 reader run into data consistency issues. This is acceptable since we are calling out that 0.16.0 is a bridge release and recommending that users migrate all readers to 1.x fully before starting to enable any new features on 1.x tables.
+    - Example scenario: say we have lf1_10_25 and lf2_15_20 (format: "logfile[index]_[starttime]_[completiontime]") in a file slice. The 1.x reader will order and read lf2 followed by lf1. Without this support, the 0.16.0 reader might read lf1 followed by lf2 (see the comparator sketch after this list). To re-iterate, this only impacts users who have enabled NBCC with multiple writers writing log files in a different order. Even if they were using OCC, one of the writers would be expected to fail (on the writer side), since the data overlaps between the two writers in the 1.x writer.
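+
+A minimal Java sketch of the two orderings in the example above, using the same hypothetical "logfile[index]_[starttime]_[completiontime]" encoding (not Hudi's actual log file naming):
+
+```java
+import java.util.Comparator;
+import java.util.List;
+
+public class LogOrderingSketch {
+  public static void main(String[] args) {
+    // (name, startTime, completionTime) per the example in the text.
+    record LogFile(String name, int start, int completion) {}
+    List<LogFile> files = List.of(new LogFile("lf1", 10, 25), new LogFile("lf2", 15, 20));
+
+    // 0.16.x-style ordering: by start (instant) time / log file version.
+    System.out.println(files.stream()
+        .sorted(Comparator.comparingInt(LogFile::start))
+        .map(LogFile::name).toList()); // [lf1, lf2]
+
+    // 1.x-style ordering: by completion time.
+    System.out.println(files.stream()
+        .sorted(Comparator.comparingInt(LogFile::completion))
+        .map(LogFile::name).toList()); // [lf2, lf1]
+  }
+}
+```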
+
+### FileSystemView
+- Support ignoring partially failed log files in the FSV. In 0.16.0, from the FSV standpoint, all log files (including partially failed ones) are valid, and we let the log record reader ignore the partially failed log files. But
+  in 1.x, log files could be rolled back (deleted) by a concurrent rollback. So, the FSV should ensure it ignores the uncommitted log files.
+- We don't need the completion time logic ported over, either for file slice determination or for log file ordering.\
+    - So, here is how file slice determination will happen with the 0.16.0 reader:\
+      a. Read base files and assign them to their respective file groups. \
+      b. Read log files. Parse the instant time from the log file name (it could refer to the base instant time for a file written in 0.16.x, or to the delta commit time in case of a 1.x writer). Find the largest (<=) base instant time in the corresponding file group
+      and assign the log file to it. \
+      c. Log files within a file slice are ordered based on log version and write tokens.\
+      The same logic will be used whether we are reading a 0.16.x table or a 1.x table. The only difference in how a 1.x reader will behave in comparison to a 0.16.x reader while reading a 1.x table is when NBCC is involved with multi-writers. But as of this writing,

Review Comment:
   In summary, there is no additional change needed for file slicing; if we just port the 1.x file slicing into 0.16.x, it should work fine.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
