RussellSpitzer commented on code in PR #46831:
URL: https://github.com/apache/spark/pull/46831#discussion_r1683560156


##########
common/variant/shredding.md:
##########
@@ -0,0 +1,244 @@
+# Shredding Overview
+
+The Spark Variant type is designed to store and process semi-structured data 
efficiently, even with heterogeneous values. Query engines encode each variant 
value in a self-describing format, and store it as a group containing **value** 
and **metadata** binary fields in Parquet. Since data is often partially 
homogeneous, it can be beneficial to extract certain fields into separate 
Parquet columns to further improve performance. We refer to this process as 
"shredding". Each Parquet file remains fully self-describing, with no 
additional metadata required to read or fully reconstruct the Variant data from 
the file. Combining shredding with a binary residual provides the flexibility 
to represent complex, evolving data with an unbounded number of unique fields 
while limiting the size of file schemas, and retaining the performance benefits 
of a columnar format.
+
+This document focuses on the shredding semantics, the Parquet representation, implications for readers and writers, and the reconstruction of Variant values. 
For now, it does not discuss which fields to shred, user-facing API changes, or 
any engine-specific considerations like how to use shredded columns. The 
approach builds on top of the generic Spark Variant representation, and 
leverages the existing Parquet specification for maximum compatibility with the 
open-source ecosystem.
+
+At a high level, we replace the **value** and **metadata** of the Variant 
Parquet group with one or more fields called **object**, **array**, 
**typed_value** and **untyped_value**. These represent a fixed schema suitable 
for constructing the full Variant value for each row.
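+
+As a non-normative illustration, the shape of a shredded group can also be spelled out programmatically. The sketch below uses `pyarrow` to describe the nested type for a Variant column whose values are expected to be longs; the choice of `int64` for **typed_value** is an assumption made for the example, not something this document prescribes.
+
+```python
+import pyarrow as pa
+
+# Hypothetical sketch: the shredded layout for a Variant column expected to
+# hold long values. "untyped_value" carries the Variant-encoded residual for
+# rows whose value does not match the shredded type, while "typed_value"
+# holds the shredded scalar as a plain column.
+shredded_variant = pa.struct([
+    pa.field("untyped_value", pa.binary()),
+    pa.field("typed_value", pa.int64()),
+])
+```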
+
+Shredding lets Spark (or any other query engine) reap the full benefits of 
Parquet's columnar representation, such as more compact data encoding, min/max 
statistics for data skipping, and I/O and CPU savings from pruning unnecessary 
fields not accessed by a query (including the non-shredded Variant binary data).
+Without shredding, any query that accesses a Variant column must fetch all 
bytes of the full binary buffer. With shredding, we can achieve performance 
nearly equivalent to that of a relational (scalar) data model.
+
+For example, `select variant_get(variant_col, '$.field1.inner_field2', 'string') from tbl` only needs to access `inner_field2`, and the file scan could avoid fetching the rest of the Variant value if this field was shredded into a separate column in the Parquet schema. Similarly, for the query `select * from tbl where variant_get(variant_col, '$.id', 'integer') = 123`, the scan could first decode the shredded `id` column, and only fetch/decode the full Variant value for rows that pass the filter.
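+
+To make the read path concrete, here is a rough, non-normative Python sketch of the per-row logic an engine might apply for the second query. The row layout and the `decode_variant` helper are assumptions made for this example, not Spark or Parquet APIs.
+
+```python
+# Non-normative sketch: evaluating
+# variant_get(variant_col, '$.id', 'integer') when "id" is shredded. Rows are
+# modeled as nested dicts mirroring the shredded Parquet structure.
+
+def decode_variant(buf: bytes):
+    """Placeholder for a real Variant binary decoder (not an actual API)."""
+    raise NotImplementedError
+
+def read_id(row: dict):
+    field = row["variant_col"]["object"]["id"]
+    if field.get("typed_value") is not None:
+        # Fast path: the value was shredded into a plain integer column, so
+        # the scan never touches the binary Variant bytes for this row.
+        return field["typed_value"]
+    if field.get("untyped_value") is not None:
+        # Fallback: the stored value did not match the shredded type; decode
+        # the Variant-encoded residual instead.
+        return decode_variant(field["untyped_value"])
+    return None  # the field is missing from this row's object
+```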
+
+# Parquet Example
+
+Consider the following Parquet schema together with how Variant values might 
be mapped to it. Notice that we represent each shredded field in **object** as 
a group of two fields, **typed_value** and **untyped_value**. We extract all 
homogeneous data items of a certain path into **typed_value**, and set aside 
incompatible data items in **untyped_value**. Intuitively, incompatibilities 
within the same path may occur because we store the shredding schema per 
Parquet file, and each file can contain several row groups. Selecting a type 
for each field that is acceptable for all rows would be impractical because it 
would require buffering the contents of an entire file before writing.
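+
+The write-time decision can be sketched per value. The following non-normative Python snippet assumes a field shredded as int64 and an `encode_variant` placeholder; neither is a real Spark API.
+
+```python
+# Non-normative sketch of a writer's per-row decision for one shredded field.
+# The shredding schema is fixed for the whole Parquet file, so values that do
+# not match the chosen type are set aside in untyped_value instead of forcing
+# a different file schema.
+
+def encode_variant(value) -> bytes:
+    """Placeholder for a real Variant binary encoder (not an actual API)."""
+    raise NotImplementedError
+
+def shred_long_field(value):
+    """Return (typed_value, untyped_value) for a field shredded as int64."""
+    if isinstance(value, int) and not isinstance(value, bool):
+        return value, None               # matches the shredded type
+    return None, encode_variant(value)   # incompatible: keep as binary
+```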
+
+Typically, **untyped_value** is expected to exist as an optional field at every 
level, alongside one of **object**, **array** or **typed_value**. If the 
actual Variant value contains a type that does not match the provided schema, 
it is stored in **untyped_value**. An **untyped_value** may also be populated 
when an object can only be partially represented: any object fields that are 
present in the shredding schema must be written to those columns, and any 
fields not covered by the schema are written to **untyped_value**.
+
+```
+optional group variant_col {
+  optional binary untyped_value;
+  optional group object {
+    optional group a {
+      optional binary untyped_value;
+      optional int64 typed_value;
+    }
+    optional group b {
+      optional binary untyped_value;
+      optional group object {
+        optional group c {
+          optional binary untyped_value;
+          optional binary typed_value (STRING);
+        }
+      }
+    }
+  }
+}
+```
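+
+Before looking at how concrete values map onto this schema, here is a rough, non-normative sketch of how a reader might reconstruct the full Variant value for one row. Rows are again modeled as nested dicts, and `decode_variant` is a placeholder stub rather than a real API; the key point is that a partially shredded object merges its shredded fields with the binary residual.
+
+```python
+def decode_variant(buf: bytes):
+    """Placeholder for a real Variant binary decoder (not an actual API)."""
+    raise NotImplementedError
+
+def reconstruct(col: dict):
+    # A partially represented object stores some fields in "object" and the
+    # remaining fields in "untyped_value", so the two are merged rather than
+    # treated as alternatives.
+    result = {}
+    if col.get("untyped_value") is not None:
+        residual = decode_variant(col["untyped_value"])
+        if not isinstance(residual, dict):
+            return residual              # a plain value: nothing to merge
+        result.update(residual)
+    if col.get("object") is not None:
+        for name, field in col["object"].items():
+            value = reconstruct(field)
+            if value is not None:        # skip fields absent from this row
+                result[name] = value
+    if result:
+        return result
+    return col.get("typed_value")
+```
+
+The table below then illustrates how individual Variant values might be mapped onto these columns.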
+
+| Variant Value | Top-level untyped_value | b.untyped_value | Non-null in a | Non-null in b.c |
+|---------------|--------------------------|---------------|---------------|

Review Comment:
   Unbalanced table - https://github.com/apache/spark/pull/47407


