szehon-ho commented on code in PR #5376:
URL: https://github.com/apache/iceberg/pull/5376#discussion_r1026578079
##########
spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/data/TestHelpers.java:
##########
@@ -817,4 +824,93 @@ public static Set<String> reachableManifestPaths(Table table) {
         .map(ManifestFile::path)
         .collect(Collectors.toSet());
   }
+
+  public static GenericData.Record asMetadataRecordWithMetrics(
+      Table dataTable, GenericData.Record file) {
+    return asMetadataRecordWithMetrics(dataTable, file, FileContent.DATA);
+  }
+
+  public static GenericData.Record asMetadataRecordWithMetrics(
+      Table dataTable, GenericData.Record file, FileContent content) {
+
+    Table filesTable =
+        MetadataTableUtils.createMetadataTableInstance(dataTable, MetadataTableType.FILES);
+
+    GenericData.Record record =
+        new GenericData.Record(AvroSchemaUtil.convert(filesTable.schema(), "dummy"));
+    boolean isPartitioned = Partitioning.partitionType(dataTable).fields().size() != 0;
+    int filesFields = isPartitioned ? 17 : 16;
+    for (int i = 0; i < filesFields; i++) {
+      if (i == 0) {
+        record.put(0, content.id());
+      } else if (i == 3) {
+        record.put(3, 0); // spec id
+      } else {
+        record.put(i, file.get(i));
+      }
+    }
+    record.put(
+        isPartitioned ? 17 : 16,
+        expectedReadableMetrics(
Review Comment:
Yeah, I think that would be nice; changing these tests is definitely the most
painful part of this (and related changes).
But the GenericRecord here is an Avro class that doesn't have any select
methods. It has a get(), but that returns a single field rather than a
projected record. We'd maybe have to build a struct by calling get() on all
15 non-derived DataFile fields, not sure if that's cleaner?
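
For illustration only (not part of the PR), a rough sketch of what that
projection could look like: copy the first N fields of the Avro record into
a smaller, hand-built record. The class/method names here are hypothetical,
and it assumes the Avro 1.9+ Schema.Field(String, Schema, String, Object)
constructor.

import java.util.ArrayList;
import java.util.List;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;

public class ProjectionSketch {

  // Copy the first numFields fields of an Avro record into a new, narrower record.
  // GenericData.Record has no built-in projection, so the target schema is rebuilt by hand.
  public static GenericData.Record project(GenericData.Record source, int numFields) {
    List<Schema.Field> sourceFields = source.getSchema().getFields();
    List<Schema.Field> projectedFields = new ArrayList<>();
    for (int i = 0; i < numFields; i++) {
      Schema.Field field = sourceFields.get(i);
      // Schema.Field objects cannot be reused across schemas, so make fresh copies
      projectedFields.add(
          new Schema.Field(field.name(), field.schema(), field.doc(), field.defaultVal()));
    }

    Schema projectedSchema =
        Schema.createRecord(
            source.getSchema().getName() + "_projected", null, null, false, projectedFields);

    GenericData.Record projected = new GenericData.Record(projectedSchema);
    for (int i = 0; i < numFields; i++) {
      projected.put(i, source.get(i));
    }
    return projected;
  }
}

Something like project(file, 15) would cover the 15 non-derived DataFile
fields; whether that ends up cleaner than putting the fields one by one is
exactly the open question above.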
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]