yihua commented on code in PR #18478:
URL: https://github.com/apache/hudi/pull/18478#discussion_r3046519819
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieMergedReadHandle.java:
##########
@@ -56,26 +56,24 @@ public class HoodieMergedReadHandle<T, I, K, O> extends HoodieReadHandle<T, I, K
   protected final Schema baseFileReaderSchema;
   private final Option<FileSlice> fileSliceOpt;
 
-  public HoodieMergedReadHandle(HoodieWriteConfig config,
-                                Option<String> instantTime,
-                                HoodieTable<T, I, K, O> hoodieTable,
-                                Pair<String, String> partitionPathFileIDPair) {
-    this(config, instantTime, hoodieTable, partitionPathFileIDPair, Option.empty());
+  public HoodieMergedReadHandle(HoodieWriteConfig config, Option<String> instantTime,
+                                HoodieTable<T, I, K, O> hoodieTable, Pair<String, String> partitionPathFileIDPair,
+                                Schema baseFileReaderSchema, boolean hasTimestampFields) {
+    this(config, instantTime, hoodieTable, partitionPathFileIDPair, baseFileReaderSchema, hasTimestampFields, Option.empty());
   }
 
-  public HoodieMergedReadHandle(HoodieWriteConfig config,
-                                Option<String> instantTime,
-                                HoodieTable<T, I, K, O> hoodieTable,
-                                Pair<String, String> partitionPathFileIDPair,
+  public HoodieMergedReadHandle(HoodieWriteConfig config, Option<String> instantTime,
+                                HoodieTable<T, I, K, O> hoodieTable, Pair<String, String> partitionPathFileIDPair,
+                                Schema baseFileReaderSchema, boolean hasTimestampFields,
                                 Option<FileSlice> fileSliceOption) {
     super(config, instantTime, hoodieTable, partitionPathFileIDPair);
     Schema orignalReaderSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(config.getSchema()),
         config.allowOperationMetadataField());
     // config.getSchema is not canonicalized, while config.getWriteSchema is canonicalized. So, we have to use the canonicalized schema to read the existing data.
-    baseFileReaderSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(config.getWriteSchema()),
-        config.allowOperationMetadataField());
+    this.baseFileReaderSchema = baseFileReaderSchema;
     fileSliceOpt = fileSliceOption.isPresent() ? fileSliceOption : getLatestFileSlice();
     // Repair reader schema.
     // Assume writer schema should be correct. If not, no repair happens.
-    readerSchema = AvroSchemaUtils.getRepairedSchema(orignalReaderSchema, baseFileReaderSchema);
+    readerSchema = hasTimestampFields ? AvroSchemaUtils.getRepairedSchema(orignalReaderSchema, this.baseFileReaderSchema) : orignalReaderSchema;
Review Comment:
🤖 nit: the comment above this line should be updated to clarify that the repair is now conditional: it only happens when `hasTimestampFields` is true, rather than always.
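For example (the wording here is only a suggestion):
```java
// Repair the reader schema only when the table has timestamp fields;
// otherwise keep the original reader schema as-is.
// Assume the writer schema is correct. If not, no repair happens.
```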
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/index/HoodieIndexUtils.java:
##########
@@ -244,8 +245,10 @@ private static <R> HoodieData<HoodieRecord<R>> getExistingRecords(
         .filterCompletedInstants()
         .lastInstant()
         .map(HoodieInstant::getTimestamp);
+    Schema baseFileReaderSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(config.getWriteSchema()),
+        config.allowOperationMetadataField());
Review Comment:
🤖 nit: this two-line schema setup is duplicated verbatim in HoodieBackedTableMetadataWriter.java (lines 590–591) and TestHoodieMergedReadHandle.java (lines 204–205); could you extract it into a static helper method to reduce the duplication?
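A minimal sketch of the kind of helper meant here (the method name and its placement are only suggestions, not existing API):
```java
// Hypothetical helper; name and placement are suggestions only.
// Builds the base-file reader schema from the canonicalized write schema,
// adding Hudi metadata fields (and the operation field when enabled).
public static Schema createBaseFileReaderSchema(HoodieWriteConfig config) {
  return HoodieAvroUtils.addMetadataFields(
      new Schema.Parser().parse(config.getWriteSchema()),
      config.allowOperationMetadataField());
}
```
Each of the three call sites would then collapse to a single call.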
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]