rangareddy opened a new issue, #581:
URL: https://github.com/apache/incubator-xtable/issues/581

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/incubator-xtable/issues?q=is%3Aissue) and 
found no similar issues.
   
   
   ### Please describe the bug 🐞
   
   Team, I converted a Hudi table to an Iceberg table using XTable. When I query the table from Athena, I get the following error:
   
   
   ICEBERG_BAD_DATA: Field **last_modified_time**'s type INT64 in parquet file s3a://<bucket><table_name>/<partiton_name>/<parquet_file_name>.parquet is incompatible with type **timestamp(6) with time zone** defined in table schema.
   This query ran against the "<database_name>" database, unless qualified by the query. Query Id: 1f0401d0-584e-4eec-8a2d-9f719a85973c
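
   For context, the Parquet data files hold raw INT64 epoch values while the synced Iceberg schema declares `timestamp(6)`, so Athena refuses to read the column. A minimal stdlib sketch of that interpretation gap, using the epoch value `1732162935` from the table properties purely as an illustrative sample (assumed here to be epoch seconds):

   ```python
   from datetime import datetime, timezone

   # Raw INT64 as stored in a Parquet data file (assumed epoch seconds for this sketch).
   raw = 1732162935

   # What the Iceberg table schema tells Athena to expect: a timestamp value.
   ts = datetime.fromtimestamp(raw, tz=timezone.utc)
   print(ts.isoformat())  # 2024-11-21T04:22:15+00:00
   ```

   The engine cannot perform this conversion implicitly: the file says plain INT64 (no timestamp logical type annotation), while the table schema says timestamp, hence `ICEBERG_BAD_DATA`.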
   
   Hudi Table Schema:
   
   ```sql
   CREATE EXTERNAL TABLE `default.my_table`(
     `_hoodie_commit_time` string, 
     `_hoodie_commit_seqno` string, 
     `_hoodie_record_key` string, 
     `_hoodie_partition_path` string, 
     `_hoodie_file_name` string, 
     `my_col` double, 
     `last_modified_time` bigint)
   PARTITIONED BY ( 
     `partiton_id` string)
   ROW FORMAT SERDE 
     'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' 
   WITH SERDEPROPERTIES ( 
     'hoodie.query.as.ro.table'='false', 
     'path'='s3a://<bucket_name>/my_table') 
   STORED AS INPUTFORMAT 
     'org.apache.hudi.hadoop.HoodieParquetInputFormat' 
   OUTPUTFORMAT 
     'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
   LOCATION
     's3a://<bucket_name>/my_table'
   TBLPROPERTIES (
     'bucketing_version'='2', 
     'hudi.metadata-listing-enabled'='FALSE', 
     'isRegisteredWithLakeFormation'='false', 
     'last_commit_completion_time_sync'='20241121011339000', 
     'last_commit_time_sync'='20241121011254282', 
     'last_modified_by'='hadoop', 
     'last_modified_time'='1732162935', 
     'spark.sql.create.version'='3.5.2-amzn-1', 
     'spark.sql.sources.provider'='hudi', 
     'spark.sql.sources.schema.numPartCols'='1', 
     'spark.sql.sources.schema.numParts'='1', 
     'spark.sql.sources.schema.part.0'='{\"type\":\"struct\",\"fields\":[{\"name\":\"_hoodie_commit_time\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"_hoodie_commit_seqno\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"_hoodie_record_key\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"_hoodie_partition_path\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"_hoodie_file_name\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"my_col\",\"type\":\"double\",\"nullable\":true,\"metadata\":{}},{\"name\":\"last_modified_time\",\"type\":\"timestamp\",\"nullable\":true,\"metadata\":{}},{\"name\":\"partiton_id\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}}]}',
     'spark.sql.sources.schema.partCol.0'='partiton_id', 
     'transient_lastDdlTime'='1732162935')
   ```
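
   Note that the DDL above disagrees with itself: the Hive column list declares `last_modified_time` as `bigint`, but the Spark schema embedded in `spark.sql.sources.schema.part.0` declares it as `timestamp`. A stdlib-only sketch that decodes that property to surface the declared type (the JSON is abbreviated here to the two relevant fields, with the Hive `\"` escaping undone):

   ```python
   import json

   # Abbreviated value of 'spark.sql.sources.schema.part.0' from TBLPROPERTIES above.
   schema_json = (
       '{"type":"struct","fields":['
       '{"name":"my_col","type":"double","nullable":true,"metadata":{}},'
       '{"name":"last_modified_time","type":"timestamp","nullable":true,"metadata":{}}]}'
   )

   fields = {f["name"]: f["type"] for f in json.loads(schema_json)["fields"]}
   print(fields["last_modified_time"])  # "timestamp", while the Hive column list says bigint
   ```

   This mismatch may be what XTable picks up when it maps the column to `timestamp(6)` in the Iceberg metadata, even though the Parquet files physically store plain INT64.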
   
   
   ### Are you willing to submit PR?
   
   - [ ] I am willing to submit a PR!
   - [ ] I am willing to submit a PR but need help getting started!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

