rdblue commented on code in PR #15328:
URL: https://github.com/apache/iceberg/pull/15328#discussion_r2819443502
##########
spark/v4.1/spark/src/jmh/java/org/apache/iceberg/spark/data/parquet/SparkParquetWritersNestedDataBenchmark.java:
##########
@@ -121,10 +126,28 @@ public void writeUsingSparkWriter() throws IOException {
.set("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS")
.set("spark.sql.caseSensitive", "false")
.set("spark.sql.parquet.fieldId.write.enabled", "false")
+ .set("spark.sql.parquet.variant.annotateLogicalType.enabled",
"false")
.schema(SCHEMA)
.build()) {
writer.addAll(rows);
}
}
+
+ @Benchmark
+ @Threads(1)
+ public void writeUsingRegistryWriter() throws IOException {
+ try (DataWriter<InternalRow> writer =
+ FormatModelRegistry.dataWriteBuilder(
+ FileFormat.PARQUET,
+ InternalRow.class,
+ EncryptedFiles.plainAsEncryptedOutput(Files.localOutput(dataFile)))
+ .schema(SCHEMA)
+ .engineSchema(SparkSchemaUtil.convert(SCHEMA))
Review Comment:
What's the downside to fixing this now? It seems like this will lead to adding
code that we then risk never removing. Since we have an easy way to default the
write schema, it seems like that's what we should do.
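
To make the suggestion concrete, here is a minimal sketch of how a write builder could default the engine schema from the Iceberg schema when the caller never sets one. The class and field names below are hypothetical and not the PR's actual code; `SparkSchemaUtil.convert(Schema)` is the existing Iceberg-to-Spark schema conversion.

```java
import org.apache.iceberg.Schema;
import org.apache.iceberg.spark.SparkSchemaUtil;
import org.apache.spark.sql.types.StructType;

// Hypothetical builder fragment: if no engine schema is supplied, derive
// it from the Iceberg schema instead of requiring callers to pass both.
class SketchDataWriteBuilder {
  private Schema schema;           // Iceberg write schema, required
  private StructType engineSchema; // Spark engine schema, optional

  SketchDataWriteBuilder schema(Schema newSchema) {
    this.schema = newSchema;
    return this;
  }

  SketchDataWriteBuilder engineSchema(StructType newEngineSchema) {
    this.engineSchema = newEngineSchema;
    return this;
  }

  // SparkSchemaUtil.convert(Schema) returns the matching Spark StructType,
  // so a missing engine schema can be derived rather than required.
  private StructType engineSchemaOrDefault() {
    return engineSchema != null ? engineSchema : SparkSchemaUtil.convert(schema);
  }
}
```

With a default like this, the benchmark's explicit `.engineSchema(SparkSchemaUtil.convert(SCHEMA))` call would be unnecessary.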