Re: [I] [SUPPORT]When hudi integrates hive, an error is reported when the hive external table is queried [hudi]

2023-11-16 Thread via GitHub


danny0405 commented on issue #10084:
URL: https://github.com/apache/hudi/issues/10084#issuecomment-1815611713

   Flink 1.13.1 should use Parquet 1.11, right? Have you checked the Parquet version used by the other modules in the project, so that you do not package multiple Parquet jars into one bundle?
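
   One way to check is to run `mvn dependency:tree -Dincludes=org.apache.parquet` on the bundle module and confirm that only a single Parquet version shows up. Below is a minimal sketch of forcing one version through dependencyManagement; the `parquet.version` property and the artifact list are illustrative assumptions, not the project's actual pom:

   ```
   <!-- Illustrative sketch: pin all org.apache.parquet artifacts to one version
        so the bundle cannot pull in two different Parquet releases. -->
   <dependencyManagement>
     <dependencies>
       <dependency>
         <groupId>org.apache.parquet</groupId>
         <artifactId>parquet-avro</artifactId>
         <version>${parquet.version}</version>
       </dependency>
       <dependency>
         <groupId>org.apache.parquet</groupId>
         <artifactId>parquet-hadoop</artifactId>
         <version>${parquet.version}</version>
       </dependency>
     </dependencies>
   </dependencyManagement>
   ```

   If the dependency tree still shows two Parquet versions after that, adding an explicit exclusion on the dependency that drags in the older one would be another option.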





Re: [I] [SUPPORT]When hudi integrates hive, an error is reported when the hive external table is queried [hudi]

2023-11-15 Thread via GitHub


Jackkaabe commented on issue #10084:
URL: https://github.com/apache/hudi/issues/10084#issuecomment-1813823608

   > @Jackkaabe This happens due to a conflict with the Parquet dependency. You can try shading the Parquet jars and rebuilding by adding the following configuration to the Flink bundle pom.xml.
   > 
   > ```
   > <relocation>
   >   <pattern>org.apache.parquet</pattern>
   >   <shadedPattern>${flink.bundle.shade.prefix}org.apache.parquet</shadedPattern>
   > </relocation>
   > ```
   > 
   > cc @danny0405
   
   I did it, but still got the same error.





Re: [I] [SUPPORT]When hudi integrates hive, an error is reported when the hive external table is queried [hudi]

2023-11-14 Thread via GitHub


ad1happy2go commented on issue #10084:
URL: https://github.com/apache/hudi/issues/10084#issuecomment-1811795799

   @Jackkaabe This happens due to a conflict with the Parquet dependency.
   You can try shading the Parquet jars and rebuilding the bundle by adding the following configuration to the Flink bundle pom.xml.
   
   ```
   <relocation>
     <pattern>org.apache.parquet</pattern>
     <shadedPattern>${flink.bundle.shade.prefix}org.apache.parquet</shadedPattern>
   </relocation>
   ```
   
   cc @danny0405 
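
   For context, a relocation like the one above sits inside the maven-shade-plugin `<relocations>` section of the bundle pom. A trimmed sketch of that nesting (plugin version, transformers, and the other relocations are omitted; this is illustrative rather than the bundle's full configuration):
   
   ```
   <!-- Trimmed, illustrative maven-shade-plugin snippet: only the Parquet
        relocation is shown; the real bundle pom carries many more entries. -->
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-shade-plugin</artifactId>
     <configuration>
       <relocations>
         <relocation>
           <pattern>org.apache.parquet</pattern>
           <shadedPattern>${flink.bundle.shade.prefix}org.apache.parquet</shadedPattern>
         </relocation>
       </relocations>
     </configuration>
   </plugin>
   ```
   
   After rebuilding, the bundled Parquet classes live under the `${flink.bundle.shade.prefix}` package, so they no longer clash with whatever Parquet version Hive itself brings in.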





[I] [SUPPORT]When hudi integrates hive, an error is reported when the hive external table is queried [hudi]

2023-11-13 Thread via GitHub


Jackkaabe opened a new issue, #10084:
URL: https://github.com/apache/hudi/issues/10084

When Hudi is integrated with Hive, an error is reported when the Hive external table is queried.
   Example (SQL):
   `select id from hive_ods_tb_report_data_order_info_rt group by id;`
   
   Error:
   ```
   org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.lang.reflect.InvocationTargetException
     at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
     at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
     at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:271)
     at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:217)
     at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:345)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:719)
     at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:175)
     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:444)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:422)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
   Caused by: java.lang.reflect.InvocationTargetException
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
     at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257)
     ... 11 more
   Caused by: java.lang.IllegalArgumentException: HoodieRealtimeRecordReader can only work on RealtimeSplit and not with hdfs://sgsdatacluster/user/hudi/warehouse/test_hudi/tb_report_data_order_info/5e08ca88-76ed-492e-8b01-ba4a6ae2f8b9_0-1-0_20230915155733914.parquet:0+796908
     at org.apache.hudi.common.util.ValidationUtils.checkArgument(ValidationUtils.java:40)
     at org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.getRecordReader(HoodieParquetRealtimeInputFormat.java:61)
     at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:99)
     ... 16 more
   ```
   
   Then configure the Hive client:
   ```
   set hive.input.format=org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat;
   set hoodie.hudimor.consume.mode=INCREMENTAL;
   set hoodie.hudimor.consume.max.commits=-1;
   ```
   
   Error:
   ```
   org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoSuchMethodError: org.apache.parquet.schema.Types$PrimitiveBuilder.as(Lorg/apache/parquet/schema/LogicalTypeAnnotation;)Lorg/apache/parquet/schema/Types$Builder;
     at org.apache.parquet.avro.AvroSchemaConverter.convertField(AvroSchemaConverter.java:177)
     at org.apache.parquet.avro.AvroSchemaConverter.convertUnion(AvroSchemaConverter.java:242)
     at org.apache.parquet.avro.AvroSchemaConverter.convertField(AvroSchemaConverter.java:199)
     at org.apache.parquet.avro.AvroSchemaConverter.convertField(AvroSchemaConverter.java:152)
     at org.apache.parquet.avro.AvroSchemaConverter.convertField(AvroSchemaConverter.java:260)
     at org.apache.parquet.avro.AvroSchemaConverter.convertFields(AvroSchemaConverter.java:146)
     at org.apache.parquet.avro.AvroSchemaConverter.convert(AvroSchemaConverter.java:137)
     at org.apache.hudi.common.table.TableSchemaResolver.readSchemaFromLogFile(TableSchemaResolver.java:485)
     at org.apache.hudi.common.table.TableSchemaResolver.readSchemaFromLogFile(TableSchemaResolver.java:468)
     at org.apache.hudi.common.table.TableSchemaResolver.fetchSchemaFromFiles(TableSchemaResolver.java:604)
     at org.apache.hudi.common.table.TableSchemaResolver.getTableParquetSchemaFromDataFile(TableSchemaResolver.java:251)
     at org.apache.hudi.common.table.TableSchemaResolver.getTableAvroSchemaFromDataFile(TableSchemaResolver.java:117)
     at org.apache.hudi.common.table.TableSchemaResolver.hasOperationField(TableSchemaResolver.java:537)
   ```