wombatu-kun commented on code in PR #11559:
URL: https://github.com/apache/hudi/pull/11559#discussion_r1685073063


##########
rfc/rfc-80/rfc-80.md:
##########
@@ -0,0 +1,169 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+# RFC-80: Support column families for wide tables
+
+## Proposers
+
+- @xiarixiaoyao
+- @wombatu-kun
+
+## Approvers
+- @vinothchandar
+- @danny0405
+
+## Status
+
+JIRA: https://issues.apache.org/jira/browse/HUDI-
+
+## Abstract
+
+In stream processing there are many scenarios where a table needs to be widened. Today, real-time wide tables are mostly assembled through multi-layer Flink joins;
+a Flink join caches a large amount of data in its state backend, so as the data set grows, the pressure on the Flink task's state backend grows with it, and the job may even become unavailable.
+With multi-layer joins this problem is all the more pronounced.  
+So, the main gains of clustering columns for wide tables are:  
+Write performance:
+- Writing is similar to ordinary bucket writing, but adds column-cluster splitting and sorting, so full-data writes are about 10% slower than native bucket writes.
+- However, when only some columns among a large number of columns are updated, writing is much faster than on a non-column-clustered table.  
+
+Read performance:  
+Since the data is already sorted at write time, the SortMerge method can be applied directly to merge it; compared with native bucket reading, read performance improves considerably and memory consumption drops significantly.
+
+## Background
+Currently, Hudi organizes data at fileGroup granularity. This proposal splits the fileGroup further into column clusters, introducing the columnFamily concept.  
+Hudi files are then organized according to the following rules:  
+data in a partition is hash-distributed into buckets; the files in each bucket are split by columnFamily; the colFamily files of a bucket together form a complete fileGroup; when there is only one columnFamily, this degenerates into the native Hudi bucket table. An illustrative table declaration is sketched after the figure below.
+
+![table](table.png)
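+
+For illustration, such a table might be declared as follows. This is a hypothetical sketch only: the property name `hoodie.columnFamily.colFamily` and the semicolon-separated value layout are illustrative assumptions, not final syntax.
+
+```sql
+-- Hypothetical sketch: a MOR bucketed table with two column families.
+-- The column-family property name and value format are assumptions, not final syntax.
+CREATE TABLE wide_orders (
+  id BIGINT,
+  order_info STRING,
+  user_info STRING,
+  payment_info STRING
+) USING hudi
+TBLPROPERTIES (
+  'type' = 'mor',                          -- only MOR bucketed tables support column families
+  'primaryKey' = 'id',
+  'hoodie.index.type' = 'BUCKET',
+  'hoodie.bucket.index.num.buckets' = '4',
+  -- two column families: {order_info, user_info} and {payment_info}
+  'hoodie.columnFamily.colFamily' = 'order_info,user_info;payment_info'
+);
+```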
+
+After splitting the fileGroup by columnFamily, the naming rules for base files and log files change: we add a cfName suffix to every file name so that Hudi itself can distinguish the column families. The suffix is compatible with Hudi's original naming scheme and causes no conflicts. Illustrative names are sketched after the figure below.
+
+![filenames](filenames.png)
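+
+Purely for illustration (the placeholder fields and the exact position of the suffix are assumptions; the figure above is authoritative), the names take roughly the following shape:
+
+```
+<fileId>_<writeToken>_<instantTime>_<cfName>.parquet          base file with cfName suffix
+.<fileId>_<instantTime>.log.<version>_<writeToken>_<cfName>   log file with cfName suffix
+```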
+
+## Implementation
+This feature should be implemented for both Spark and Flink, so that a table written this way by Flink can also be read by Spark.
+
+### Constraints and Restrictions
+1. The overall design relies on the non-blocking concurrent writing feature of Hudi 1.0.  
+2. Older Hudi versions can neither read nor write column family tables.  
+3. Only MOR bucketed tables support setting column families.  
+4. Column families do not support repartitioning or renaming.  
+5. Schema evolution does not take effect on column family tables.  

Review Comment:
   To support schema evolution we would have to assign new columns to families implicitly, but I don't think that is possible.  
   Not supporting schema evolution does not mean users cannot add/delete columns in their table. They just need to do it explicitly, by calling `ALTER TABLE table_name ADD COLUMN (new_column ...) SET TBLPROPERTIES ('hoodie.columnFamily.colFamily'='a,b,new_column;a')`
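
   Written out as two standard Spark SQL statements, a hypothetical sketch reusing the property value above (whether both steps can be combined into one statement is an open question):

   ```sql
   -- Hypothetical: add the column, then explicitly assign it to a family
   -- by rewriting the column-family table property.
   ALTER TABLE table_name ADD COLUMNS (new_column STRING);
   ALTER TABLE table_name SET TBLPROPERTIES (
     'hoodie.columnFamily.colFamily' = 'a,b,new_column;a'
   );
   ```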


