mkk1490 commented on issue #3313:
URL: https://github.com/apache/hudi/issues/3313#issuecomment-928853775
> @mkk1490: sorry, the issue got lengthy and I have a couple of clarifications.
> Is your issue with the record key fields having one component as a timestamp, or is it about
mkk1490 commented on issue #3313:
URL: https://github.com/apache/hudi/issues/3313#issuecomment-889627136
@nsivabalan I set the row_writer property to False and ingested the data. Now the timestamps get converted to their respective epoch seconds and appear as a long datatype in the hoodie key:
(screenshot attached)
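For context, a minimal sketch of what such a write might look like, assuming the property referred to is `hoodie.datasource.write.row.writer.enable`; the table name, path, and field names below are invented for illustration, not taken from the thread:

```python
from datetime import datetime
from pyspark.sql import Row

# Hypothetical input frame with a timestamp column (names are assumptions).
df = spark.createDataFrame([Row(id=1, ts=datetime(2021, 7, 1, 12, 0, 0))])

hudi_options = {
    "hoodie.table.name": "my_table",                       # hypothetical
    "hoodie.datasource.write.recordkey.field": "id,ts",    # composite key with a timestamp component
    "hoodie.datasource.write.precombine.field": "ts",
    "hoodie.datasource.write.operation": "bulk_insert",
    # The "row_writer property set to False" described in the comment:
    "hoodie.datasource.write.row.writer.enable": "false",
}

df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/my_table")
```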
mkk1490 commented on issue #3313:
URL: https://github.com/apache/hudi/issues/3313#issuecomment-888248088
@nsivabalan Update: the timestamp field gets converted to long in the hoodie key, as below:
(screenshot attached)
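One way to observe this (a sketch; the path and column name are assumptions) is to read the table back and inspect the generated key column, `_hoodie_record_key`, which is a standard Hudi meta column:

```python
# Read the table back and compare the rendered record key against the
# original timestamp column (path and column name are hypothetical).
spark.read.format("hudi").load("/tmp/my_table") \
    .select("_hoodie_record_key", "ts") \
    .show(truncate=False)
```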
mkk1490 commented on issue #3313:
URL: https://github.com/apache/hudi/issues/3313#issuecomment-886010620
@nsivabalan I'm so sorry, that's my mistake. I'm trying to update the field next to src_pri_psbr_id, which is pri_az_cust_id. Please find the dataframes below:
Insert:
df_ins = spark.c
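The snippet is cut off in the archive; a hedged guess at the shape of the insert and update frames, with the schema and values invented for illustration around the field names mentioned in the thread:

```python
from datetime import datetime
from pyspark.sql import Row

# Hypothetical reconstruction of the truncated df_ins above.
df_ins = spark.createDataFrame([
    Row(src_pri_psbr_id=1, pri_az_cust_id=100,
        event_ts=datetime(2021, 7, 1, 12, 0, 0)),
])

# An update with the same key but a changed pri_az_cust_id, to exercise upsert.
df_ups = spark.createDataFrame([
    Row(src_pri_psbr_id=1, pri_az_cust_id=200,
        event_ts=datetime(2021, 7, 1, 12, 0, 0)),
])
```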
mkk1490 commented on issue #3313:
URL: https://github.com/apache/hudi/issues/3313#issuecomment-884708261
@nsivabalan During bulk_insert, as well as the first insert into the table, the hoodie key value was in timestamp format. But during the upsert operation, it was converted to long.
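A minimal sketch of the two writes being contrasted, reusing the hypothetical df_ins / df_ups frames from the earlier sketch; the path, table name, and field names remain assumptions:

```python
# Shared options; the composite record key includes the timestamp field.
base_opts = {
    "hoodie.table.name": "my_table",
    "hoodie.datasource.write.recordkey.field": "src_pri_psbr_id,event_ts",
    "hoodie.datasource.write.precombine.field": "event_ts",
    "hoodie.datasource.write.keygenerator.class":
        "org.apache.hudi.keygen.ComplexKeyGenerator",
}

# First write: bulk_insert, where the key reportedly kept the timestamp format.
df_ins.write.format("hudi").options(**base_opts) \
    .option("hoodie.datasource.write.operation", "bulk_insert") \
    .mode("overwrite").save("/tmp/my_table")

# Second write: upsert, where the key reportedly switched to an epoch long.
df_ups.write.format("hudi").options(**base_opts) \
    .option("hoodie.datasource.write.operation", "upsert") \
    .mode("append").save("/tmp/my_table")
```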