hudi-bot opened a new issue, #17222:
URL: https://github.com/apache/hudi/issues/17222

   Hi Guys!
    
   I am not able to save timestamp columns with microsecond precision using Hudi.
   I would like to keep the microsecond granularity, but only milliseconds are preserved.
   I have set this:
   --conf spark.sql.parquet.outputTimestampType=TIMESTAMP_MICROS \
   and also this in the hoodie:
   "hoodie.parquet.outputtimestamptype": "TIMESTAMP_MICROS",
   but when I read it back (with PySpark's load API), it only has millisecond
   precision. Unfortunately, I need microseconds in some cases; without them I run
   into a Schrödinger's cat situation, where an entity appears to have more than
   one state at the same time. Can someone enlighten me on what I should do?
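
   For reference, a minimal sketch of how the two settings above are typically combined when writing with PySpark. The table name and key/precombine fields below are placeholders I made up for illustration, not values from the original report; only the two timestamp settings are taken from the issue:

   ```python
   # Hypothetical Hudi writer options; table name and key fields are placeholders.
   hudi_options = {
       "hoodie.table.name": "my_table",                    # placeholder
       "hoodie.datasource.write.recordkey.field": "id",    # placeholder
       "hoodie.datasource.write.precombine.field": "ts",   # the "ts" column from the report
       "hoodie.parquet.outputtimestamptype": "TIMESTAMP_MICROS",
   }

   # Session-level Parquet setting, equivalent to the --conf flag shown above:
   #   spark.conf.set("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS")
   #
   # Then, roughly:
   #   df.write.format("hudi").options(**hudi_options).mode("append").save(path)
   ```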
    
   Before the save, everything is fine! ("ts" column)
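
   To illustrate the collision the report describes, here is a small self-contained sketch (plain Python, no Spark or Hudi) showing how two microsecond-distinct timestamps collapse into one value once sub-millisecond digits are dropped; the timestamp values are made up for the example:

   ```python
   from datetime import datetime, timedelta

   def truncate_to_millis(ts: datetime) -> datetime:
       # Drop the sub-millisecond digits, mimicking what happens when a
       # microsecond column is read back at only millisecond precision.
       return ts.replace(microsecond=(ts.microsecond // 1000) * 1000)

   t1 = datetime(2022, 5, 12, 11, 15, 5, 123456)
   t2 = t1 + timedelta(microseconds=400)  # a later state of the same entity

   assert t1 != t2                                          # distinct at microseconds
   assert truncate_to_millis(t1) == truncate_to_millis(t2)  # collide at milliseconds
   ```

   After truncation both states carry the same timestamp, so the entity appears to be in two states at once.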
   
   Darvi
   SLACK Thread: 
[https://apache-hudi.slack.com/archives/C4D716NPQ/p1652347742173779]
    
   
   ## JIRA info
   
   - Link: https://issues.apache.org/jira/browse/HUDI-4091
   - Type: Sub-task
   - Parent: https://issues.apache.org/jira/browse/HUDI-9113
   - Affects version(s):
     - 0.10.1
   - Fix version(s):
     - 1.1.0
   - Attachment(s):
     - 12/May/22 11:37 · Darvi77 · b97b9e55-58a4-417b-b71c-f6b2d3860da0-0_0-26-1663_20220512111505310.parquet: https://issues.apache.org/jira/secure/attachment/13043566/b97b9e55-58a4-417b-b71c-f6b2d3860da0-0_0-26-1663_20220512111505310.parquet
     - 12/May/22 11:38 · Darvi77 · before-save.png: https://issues.apache.org/jira/secure/attachment/13043565/before-save.png
     - 12/May/22 11:37 · Darvi77 · example-code.txt: https://issues.apache.org/jira/secure/attachment/13043567/example-code.txt


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
