voonhous opened a new pull request, #11576: URL: https://github.com/apache/hudi/pull/11576
…JavaObject discrepancies in Hive3 and Hive2

The invocation of `getPrimitiveJavaObject` returns a different timestamp implementation in Hive2 and Hive3:

- Hive2: `java.sql.Timestamp`
- Hive3: `org.apache.hadoop.hive.common.type.Timestamp`

Hudi common is compiled against Hive2, but Trino runs with Hive3, causing a discrepancy between compile time and runtime. Because a method's return type is part of its bytecode descriptor, a call site compiled against the Hive2 signature cannot be resolved against the Hive3 class, and a `NoSuchMethodError` is thrown. Execution reaches the affected code when all of the following trigger conditions hold:

1. An MOR table is used.
2. The user is querying the `_rt` table.
3. The user's table has a `TIMESTAMP`-typed column that the query reads.
4. A merge is required because the record is present in both the Parquet file and the log file.

The error below is then thrown:

```
Query 20240704_075218_05052_yfmfc failed: 'java.sql.Timestamp org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(java.lang.Object)'
java.lang.NoSuchMethodError: 'java.sql.Timestamp org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.getPrimitiveJavaObject(java.lang.Object)'
	at org.apache.hudi.hadoop.utils.HiveAvroSerializer.serializePrimitive(HiveAvroSerializer.java:304)
	at org.apache.hudi.hadoop.utils.HiveAvroSerializer.serialize(HiveAvroSerializer.java:212)
	at org.apache.hudi.hadoop.utils.HiveAvroSerializer.setUpRecordFieldFromWritable(HiveAvroSerializer.java:121)
	at org.apache.hudi.hadoop.utils.HiveAvroSerializer.serialize(HiveAvroSerializer.java:108)
	at org.apache.hudi.hadoop.realtime.RealtimeCompactedRecordReader.convertArrayWritableToHoodieRecord(RealtimeCompactedRecordReader.java:185)
	at org.apache.hudi.hadoop.realtime.RealtimeCompactedRecordReader.mergeRecord(RealtimeCompactedRecordReader.java:172)
	at org.apache.hudi.hadoop.realtime.RealtimeCompactedRecordReader.next(RealtimeCompactedRecordReader.java:114)
	at org.apache.hudi.hadoop.realtime.RealtimeCompactedRecordReader.next(RealtimeCompactedRecordReader.java:49)
	at org.apache.hudi.hadoop.realtime.HoodieRealtimeRecordReader.next(HoodieRealtimeRecordReader.java:88)
	at org.apache.hudi.hadoop.realtime.HoodieRealtimeRecordReader.next(HoodieRealtimeRecordReader.java:36)
	at io.trino.plugin.hive.GenericHiveRecordCursor.advanceNextPosition(GenericHiveRecordCursor.java:215)
	at io.trino.spi.connector.RecordPageSource.getNextPage(RecordPageSource.java:88)
	at io.trino.plugin.hudi.HudiPageSource.getNextPage(HudiPageSource.java:120)
```

For screenshots and the SQL needed to reproduce the issue, please refer to the JIRA ticket [HUDI-7955](https://issues.apache.org/jira/browse/HUDI-7955).

### Change Logs

Added Hive shimming for `WritableTimestampObjectInspector#getPrimitiveJavaObject`.

### Impact

None

### Risk level (write none, low medium or high below)

None

### Documentation Update

_Describe any necessary documentation update if there is any new feature, config, or user-facing change. If not, put "none"._

- _The config description must be updated if new configs are added or the default value of the configs are changed_
- _Any new feature or user-facing change requires updating the Hudi website. Please create a Jira ticket, attach the ticket number here and follow the [instruction](https://hudi.apache.org/contribute/developer-setup#website) to make changes to the website._

### Contributor's checklist

- [X] Read through [contributor's guide](https://hudi.apache.org/contribute/how-to-contribute)
- [X] Change Logs and Impact were stated clearly
- [ ] Adequate tests were added if applicable
- [ ] CI passed
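For context, the shimming idea can be sketched roughly as below. This is a minimal illustration only, not the actual patch: it assumes the shim looks the method up by name via reflection so that the Hive-version-specific return type (`java.sql.Timestamp` in Hive2 vs `org.apache.hadoop.hive.common.type.Timestamp` in Hive3) is never baked into the caller's bytecode, then normalizes the result to epoch millis. The class and method names (`TimestampShimSketch`, `getTimestampMillis`) are hypothetical.

```java
import java.lang.reflect.Method;

public class TimestampShimSketch {

  // Hypothetical sketch: resolve getPrimitiveJavaObject by name at runtime
  // instead of linking against a compile-time signature, so the bytecode
  // descriptor mismatch between Hive2 and Hive3 never triggers a
  // NoSuchMethodError at the call site.
  public static long getTimestampMillis(Object inspector, Object writable) throws Exception {
    Method m = inspector.getClass().getMethod("getPrimitiveJavaObject", Object.class);
    m.setAccessible(true); // tolerate non-public inspector subclasses
    Object ts = m.invoke(inspector, writable);
    if (ts instanceof java.sql.Timestamp) {
      // Hive2 path: java.sql.Timestamp
      return ((java.sql.Timestamp) ts).getTime();
    }
    // Hive3 path: org.apache.hadoop.hive.common.type.Timestamp exposes toEpochMilli()
    return (Long) ts.getClass().getMethod("toEpochMilli").invoke(ts);
  }
}
```

Reflection sidesteps the error because method resolution happens by name at runtime rather than by full descriptor at link time; the lookup cost can be amortized by caching the resolved `Method` once per inspector class.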