yihua commented on code in PR #10957:
URL: https://github.com/apache/hudi/pull/10957#discussion_r1569772713
##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/HoodieBaseFileGroupRecordBuffer.java:
##########
@@ -242,7 +252,44 @@ protected Pair<ClosableIterator<T>, Schema> getRecordsIterator(HoodieDataBlock d
     } else {
       blockRecordsIterator = dataBlock.getEngineRecordIterator(readerContext);
     }
-    return Pair.of(blockRecordsIterator, dataBlock.getSchema());
+    Option<Pair<Function<T, T>, Schema>> schemaEvolutionTransformerOpt =

Review Comment:
   To clarify: do we put the common schema evolution logic in the file group reader or in the record buffer classes? If so, the Spark parquet reader would not have to handle schema evolution itself, and the common schema-on-read logic would live in the file group reader or the record buffer classes.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
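The pattern the diff introduces — returning a record iterator together with an optional per-record transform for schema evolution — can be sketched roughly as below. This is a minimal illustration, not Hudi's actual implementation: `TransformingIterator` is a hypothetical class, and `java.util.Optional`/`java.util.function.Function` stand in for Hudi's `Option` and transformer pair.

```java
import java.util.Iterator;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Hypothetical sketch: wrap a record iterator so an optional transform
// (e.g. a schema-evolution projection) is applied lazily to each record.
public class TransformingIterator<T> implements Iterator<T> {
  private final Iterator<T> inner;
  private final Function<T, T> transform;

  public TransformingIterator(Iterator<T> inner, Optional<Function<T, T>> transformOpt) {
    this.inner = inner;
    // When no schema evolution is needed, fall back to the identity function,
    // mirroring the empty-Option case in the diff above.
    this.transform = transformOpt.orElse(Function.identity());
  }

  @Override
  public boolean hasNext() {
    return inner.hasNext();
  }

  @Override
  public T next() {
    return transform.apply(inner.next());
  }

  public static void main(String[] args) {
    Iterator<String> it = new TransformingIterator<>(
        List.of("a", "b").iterator(), Optional.of(String::toUpperCase));
    StringBuilder sb = new StringBuilder();
    while (it.hasNext()) {
      sb.append(it.next());
    }
    System.out.println(sb); // prints "AB"
  }
}
```

Keeping the transform optional lets readers that need no evolution avoid any per-record overhead beyond an identity call, while callers that do need it (as the question above asks) can supply it once in a common layer such as the record buffer.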