[ https://issues.apache.org/jira/browse/HUDI-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063813#comment-17063813 ]
lamber-ken commented on HUDI-718:
---------------------------------

Hi [~afilipchik], the master version uses spark-2.4.4, which depends on avro-1.8.2. Here is a PR [1] that fixes a similar issue; if you are interested, you can give it a try.

[1] [https://github.com/apache/incubator-hudi/pull/1339]

!image-2020-03-21-16-49-28-905.png|width=790,height=631!

> java.lang.ClassCastException during upsert
> ------------------------------------------
>
>                 Key: HUDI-718
>                 URL: https://issues.apache.org/jira/browse/HUDI-718
>             Project: Apache Hudi (incubating)
>          Issue Type: Bug
>          Components: DeltaStreamer
>            Reporter: Alexander Filipchik
>            Priority: Major
>             Fix For: 0.6.0
>
>         Attachments: image-2020-03-21-16-49-28-905.png
>
>
> The dataset was created using Hudi 0.5, and now I am trying to migrate it to the latest master. The table is written using SqlTransformer. Exception:
>
> Caused by: org.apache.hudi.exception.HoodieUpsertException: Failed to merge old record into new file for key bla.bla from old file gs://../2020/03/15/7b75931f-ff2f-4bf4-8949-5c437112be79-0_0-35-1196_20200316234140.parquet to new file gs://.../2020/03/15/7b75931f-ff2f-4bf4-8949-5c437112be79-0_1-39-1506_20200317190948.parquet
> 	at org.apache.hudi.io.HoodieMergeHandle.write(HoodieMergeHandle.java:246)
> 	at org.apache.hudi.table.HoodieCopyOnWriteTable$UpdateHandler.consumeOneRecord(HoodieCopyOnWriteTable.java:433)
> 	at org.apache.hudi.table.HoodieCopyOnWriteTable$UpdateHandler.consumeOneRecord(HoodieCopyOnWriteTable.java:423)
> 	at org.apache.hudi.common.util.queue.BoundedInMemoryQueueConsumer.consume(BoundedInMemoryQueueConsumer.java:37)
> 	at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$2(BoundedInMemoryExecutor.java:121)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	... 3 more
> Caused by: java.lang.ClassCastException: org.apache.avro.util.Utf8 cannot be cast to org.apache.avro.generic.GenericFixed
> 	at org.apache.parquet.avro.AvroWriteSupport.writeValueWithoutConversion(AvroWriteSupport.java:336)
> 	at org.apache.parquet.avro.AvroWriteSupport.writeValue(AvroWriteSupport.java:275)
> 	at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:191)
> 	at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165)
> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
> 	at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299)
> 	at org.apache.hudi.io.storage.HoodieParquetWriter.writeAvro(HoodieParquetWriter.java:103)
> 	at org.apache.hudi.io.HoodieMergeHandle.write(HoodieMergeHandle.java:242)
> 	... 8 more

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
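For context on the failing cast: `AvroWriteSupport.writeValueWithoutConversion` casts the in-memory field value straight to `org.apache.avro.generic.GenericFixed` when the writer schema declares the field as an Avro `fixed` type, so the exception appears when the record actually carries a `org.apache.avro.util.Utf8` string for that field instead. Below is a minimal, self-contained sketch of that failure mode using hypothetical stand-in classes (`Utf8Like`, `GenericFixedLike`), not the real Avro types:

```java
// Minimal sketch of the cast failure. Utf8Like and GenericFixedLike are
// hypothetical stand-ins for org.apache.avro.util.Utf8 and
// org.apache.avro.generic.GenericFixed; the real writer does the
// equivalent cast inside AvroWriteSupport.writeValueWithoutConversion.
public class CastMismatch {

    // Stand-in for GenericFixed: fixed-width binary value.
    interface GenericFixedLike {
        byte[] bytes();
    }

    // Stand-in for Utf8: a string value, unrelated to GenericFixedLike.
    static class Utf8Like {
        private final String s;
        Utf8Like(String s) { this.s = s; }
        @Override public String toString() { return s; }
    }

    public static void main(String[] args) {
        // What the old (Hudi 0.5) record actually holds for the field.
        Object fieldValue = new Utf8Like("123.45");
        try {
            // What the writer does when the schema says the field is FIXED.
            GenericFixedLike fixed = (GenericFixedLike) fieldValue;
            System.out.println(fixed.bytes().length);
        } catch (ClassCastException e) {
            System.out.println("caught ClassCastException: Utf8Like is not a GenericFixedLike");
        }
    }
}
```

Running it prints the caught-exception line, mirroring the runtime-only nature of the failure: the cast from `Object` compiles fine and only blows up when the schemas written by the two Hudi/Avro versions disagree about the field's type.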