dtenedor commented on code in PR #36672:
URL: https://github.com/apache/spark/pull/36672#discussion_r886234949


##########
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedParquetRecordReader.java:
##########
@@ -270,13 +271,40 @@ private void initBatch(
         vectors[i + partitionIdx].setIsConstant();
       }
     }
+
+    // For Parquet tables whose columns have associated DEFAULT values, this reader must return
+    // those values instead of NULL when the corresponding columns are not present in storage (i.e.
+    // belong to the 'missingColumns' field in this class).
+    ColumnVector[] finalColumns = new ColumnVector[sparkSchema.fields().length];
+    for (int i = 0; i < columnVectors.length; i++) {
+      Object defaultValue = sparkRequestedSchema.existenceDefaultValues()[i];
+      if (defaultValue == null) {
+        finalColumns[i] = vectors[i];
+      } else {
+        WritableColumnVector writable;
+        if (memMode == MemoryMode.OFF_HEAP) {
+          writable = new OffHeapColumnVector(capacity, vectors[i].dataType());

Review Comment:
   Sure, I added a comment explaining this (`appendObjects` delegates to other existing methods like `appendFloats`), and I changed the code to reuse the existing ColumnVector instead of creating a new one.
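
   To make the delegation concrete, here is a minimal, self-contained sketch of the pattern described above. It assumes a hypothetical `SketchColumnVector` type rather than Spark's actual `WritableColumnVector` API: a generic `appendObjects(count, value)` entry point repeats a missing column's DEFAULT value for every row in the batch by delegating to existing type-specific appenders such as `appendFloats`.

   ```java
   // Hypothetical sketch only; the names below are illustrative, not Spark's real classes.
   import java.util.Arrays;

   final class SketchColumnVector {
     private final float[] floatData;
     private final int[] intData;
     private int numRows = 0;

     SketchColumnVector(int capacity) {
       floatData = new float[capacity];
       intData = new int[capacity];
     }

     // Existing type-specific appenders (stand-ins for methods like appendFloats/appendInts).
     void appendFloats(int count, float value) {
       Arrays.fill(floatData, numRows, numRows + count, value);
       numRows += count;
     }

     void appendInts(int count, int value) {
       Arrays.fill(intData, numRows, numRows + count, value);
       numRows += count;
     }

     // Generic entry point: repeats a boxed DEFAULT value `count` times by delegating to
     // the appender that matches the value's runtime type, instead of duplicating fill logic.
     void appendObjects(int count, Object value) {
       if (value instanceof Float) {
         appendFloats(count, (Float) value);
       } else if (value instanceof Integer) {
         appendInts(count, (Integer) value);
       } else {
         throw new UnsupportedOperationException(
             "No appender for DEFAULT value of type " + value.getClass().getName());
       }
     }

     int numRows() { return numRows; }
   }

   class AppendObjectsSketch {
     public static void main(String[] args) {
       // A column missing from the Parquet file gets its DEFAULT value for every row
       // of the batch, instead of a vector full of NULLs.
       int capacity = 4096;                  // rows per batch
       Object columnDefault = 3.14f;         // hypothetical DEFAULT for the missing column
       SketchColumnVector vector = new SketchColumnVector(capacity);
       vector.appendObjects(capacity, columnDefault);
       System.out.println("rows filled with DEFAULT: " + vector.numRows());
     }
   }
   ```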


