[GitHub] [hudi] ad1happy2go commented on issue #9282: [ISSUE] Hudi 0.13.0. Spark 3.3.2 Deltastreamed table read failure

2023-07-26 Thread via GitHub


ad1happy2go commented on issue #9282:
URL: https://github.com/apache/hudi/issues/9282#issuecomment-1651511555

   @rmnlchh I couldn't reproduce this issue. 
   
   Code I tried - 
https://gist.github.com/ad1happy2go/1391a679de49efa1872563062f04e29b
   
   Can you let us know the schema of the table? I can then try with the same 
datatype combinations you have in your dataset. Could you also review the code 
once, in case I missed anything?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] ad1happy2go commented on issue #9282: [ISSUE] Hudi 0.13.0. Spark 3.3.2 Deltastreamed table read failure

2023-07-25 Thread via GitHub


ad1happy2go commented on issue #9282:
URL: https://github.com/apache/hudi/issues/9282#issuecomment-1650223209

   @rmnlchh Just curious, did you also set these configs with your Deltastreamer?
   ```
   sc.set("spark.sql.legacy.parquet.nanosAsLong", "false");
   sc.set("spark.sql.parquet.binaryAsString", "false");
   sc.set("spark.sql.parquet.int96AsTimestamp", "true");
   sc.set("spark.sql.caseSensitive", "false");
   ```
   I will try to reproduce this issue.
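
   For context, a hedged sketch of how the same configs could be passed to 
Deltastreamer through `spark-submit`. The jar filename, base path, and table 
name below are illustrative placeholders, not values from this issue:

   ```shell
   # Sketch only: forwarding the Spark SQL configs above to HoodieDeltaStreamer.
   # Jar path and target settings are placeholders; adjust to your deployment.
   spark-submit \
     --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
     --conf spark.sql.legacy.parquet.nanosAsLong=false \
     --conf spark.sql.parquet.binaryAsString=false \
     --conf spark.sql.parquet.int96AsTimestamp=true \
     --conf spark.sql.caseSensitive=false \
     hudi-utilities-bundle_2.12-0.13.0.jar \
     --table-type COPY_ON_WRITE \
     --target-base-path /tmp/hudi/my_table \
     --target-table my_table
   ```

   The point is that configs set on the reader's SparkContext do not carry over 
to a separately launched Deltastreamer job, so they would need to be supplied 
there as well.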

