alexeykudinkin commented on code in PR #7915:
URL: https://github.com/apache/hudi/pull/7915#discussion_r1105014257
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/avro/SchemaConverters.scala:
##########

@@ -202,6 +202,13 @@ private[sql] object SchemaConverters {
     st.foreach { f =>
       val fieldAvroType = toAvroType(f.dataType, f.nullable, f.name, childNameSpace)
+      val fieldBuilder = fieldsAssembler.name(f.name).`type`(fieldAvroType)

Review Comment:
   From what I understand so far, the issue is not in the conversion but in the fact that we're not handling schema evolution properly in `HoodieAvroDataBlock`: whenever we decode a record from an existing data block, we should make sure that any nullable field actually has null as its default value, so that the Avro reader is able to decode the data in case this particular field is not present.
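To illustrate the point the comment makes (this is a standalone sketch, not Hudi's actual code): in Avro, a reader can only fill in a field that is absent from the written data if the reader schema supplies a default for it, and for a union type the default must match the *first* branch — so a nullable field needs `"null"` first in its union plus `"default": null`. A minimal helper, operating on an Avro schema as plain JSON with hypothetical names, might look like:

```python
import json

def add_null_defaults(schema_json: str) -> str:
    """Ensure every nullable field in an Avro record schema has a null default.

    A field omitted by the writer can only be decoded if the reader schema
    provides a default; for a union, the default must match the FIRST branch,
    so "null" must be moved to the front for a null default to be legal.
    """
    schema = json.loads(schema_json)
    for field in schema.get("fields", []):
        t = field.get("type")
        if isinstance(t, list) and "null" in t:
            # Put "null" first in the union, then attach the null default
            # (without clobbering an existing default).
            field["type"] = ["null"] + [b for b in t if b != "null"]
            field.setdefault("default", None)
    return json.dumps(schema)


# Hypothetical reader schema with a nullable field that lacks a default:
original = json.dumps({
    "type": "record", "name": "Rec",
    "fields": [{"name": "opt", "type": ["string", "null"]}],
})
fixed = json.loads(add_null_defaults(original))
print(fixed["fields"][0])
# → {'name': 'opt', 'type': ['null', 'string'], 'default': None}
```

With the `default` in place, an Avro reader resolving an older writer schema that lacks `opt` can decode the record anyway, substituting null — which is the behavior the review comment says `HoodieAvroDataBlock` should guarantee for nullable fields.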