young138120 opened a new issue, #10423:
URL: https://github.com/apache/hudi/issues/10423

   **Describe the problem you faced**
   I am writing data to a MOR partitioned table with Flink. When reading the table with Spark, I want to ignore the newly added, uncompacted log data.
   Because of a conflict in data types, a data type conversion failure occurs when reading the incremental log files.
   I tried setting `hoodie.datasource.query.type=read_optimized`, but it does not work.
   
![image](https://github.com/apache/hudi/assets/11519151/00b533cd-4342-4b67-811a-78bd58c0aed6)
   
   It still throws the exception:
   <img width="960" 
alt="1703666014836_lQLPKHjaf5_T9ePNA5DNB4CwyTdasw0XwmsFfDhCMqVIAA_1920_912" 
src="https://github.com/apache/hudi/assets/11519151/1c740699-db89-487c-b1f6-2e47a9109361">
   
   I am not using the `read().load()` path; I am querying directly with 
`spark.sql("select xxx from table")`.
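   For context: when a MOR table is synced to the Hive metastore, Hudi typically registers two views of it, and the read-optimized one can be queried directly through `spark.sql` without any option. A sketch, assuming the table was Hive-synced and `my_table` is a placeholder name:

```sql
-- A Hive-synced MOR table is usually exposed as two metastore tables:
--   my_table_ro : read-optimized view (base parquet files only, skips log files)
--   my_table_rt : real-time view (merges base files with uncompacted log files)

-- Should ignore the uncompacted log data and avoid merging:
SELECT * FROM my_table_ro;
```

   The `hoodie.datasource.query.type` option applies to the DataFrame read path (`spark.read.format("hudi").option(...).load(path)`); whether it takes effect for plain `spark.sql` against the synced table is exactly what this issue is about.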
   
   **Expected behavior**
   
   With `hoodie.datasource.query.type=read_optimized`, Spark should read only the compacted base files and ignore the uncompacted log files, so the query succeeds without the data type conversion error.
   
   **Environment Description**
   
   * Hudi version : 0.9.0
   * Spark version : 3.1.1
   * Hive version : 3.1.0
   * Hadoop version : 3.1.1
   * Storage (HDFS/S3/GCS..) : HDFS
   * Running on Docker? (yes/no) : no
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
