Zouxxyy commented on code in PR #461:
URL: https://github.com/apache/parquet-format/pull/461#discussion_r1914049795


##########
VariantEncoding.md:
##########
@@ -39,13 +39,42 @@ Another motivation for the representation is that (aside from metadata) each nes
 For example, in a Variant containing an Array of Variant values, the representation of an inner Variant value, when paired with the metadata of the full variant, is itself a valid Variant.
 
 This document describes the Variant Binary Encoding scheme.
-[VariantShredding.md](VariantShredding.md) describes the details of the Variant shredding scheme.
+Variant fields can also be _shredded_.
+Shredding refers to extracting some elements of the variant into separate columns for more efficient extraction/filter pushdown.
+The [Variant Shredding specification](VariantShredding.md) describes the details of shredding Variant values as typed Parquet columns.
+
+## Variant in Parquet
 
-# Variant in Parquet
 A Variant value in Parquet is represented by a group with 2 fields, named `value` and `metadata`.
-Both fields `value` and `metadata` are of type `binary`, and cannot be `null`.
 
-# Metadata encoding
+* The Variant group must be annotated with the `VARIANT` logical type.
+* Both fields `value` and `metadata` must be of type `binary` (called `BYTE_ARRAY` in the Parquet thrift definition).
+* The `metadata` field is `required` and must be a valid Variant metadata, as defined below.
+* The `value` field must be annotated as `required` for unshredded Variant values, or `optional` if parts of the value are [shredded](VariantShredding.md) as typed Parquet columns.
+* When present, the `value` field must be a valid Variant value, as defined below.
+
+This is the expected unshredded representation in Parquet:
+
+```
+optional group variant_name (VARIANT) {
+  required binary metadata;

Review Comment:
   > Hi @Zouxxyy, in the Spark shredding PRs I've been working on, I put metadata first. I didn't see much benefit to changing the order in the existing non-shredded code, but I don't feel too strongly about it either way. The spec is pretty clear that readers should identify the appropriate columns based on field names, not field order, and I think things could become quite fragile if they did rely on field order.
   
   Thank you, but future users or developers may find it strange if the actual implementation differs from the spec. I think it's better to adhere to the spec, as long as it doesn't affect performance, before the official release of Spark 4.0. If you don't mind, I can work on this and raise a PR to Spark. WDYT?
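
   The point above about identifying columns by field name rather than field order can be sketched as follows. This is a minimal illustration, not code from the PR or from Spark; the helper name and the `(name, type)` tuple representation of a group's children are assumptions for the sketch:

   ```python
   def resolve_variant_fields(group_fields):
       """Locate the 'metadata' and 'value' children of a Variant group
       by name, so a reader does not depend on field order.

       group_fields: list of (name, physical_type) tuples for the group's
       children (a simplified stand-in for a real schema API)."""
       by_name = {name: (name, type_) for name, type_ in group_fields}
       metadata = by_name.get("metadata")
       # 'value' may be optional/absent when parts of the value are shredded.
       value = by_name.get("value")
       if metadata is None:
           raise ValueError("Variant group is missing required 'metadata' field")
       return metadata, value

   # Either field order resolves identically:
   resolve_variant_fields([("metadata", "binary"), ("value", "binary")])
   resolve_variant_fields([("value", "binary"), ("metadata", "binary")])
   ```

   A reader written this way is unaffected by whether a writer emits `metadata` or `value` first, which is why the spec's field ordering in the schema example is illustrative rather than normative.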



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

