purna344 commented on issue #9404:
URL: https://github.com/apache/iceberg/issues/9404#issuecomment-1885023931

   If the producers write the data to storage with the config below:
   `spark.conf.set("spark.databricks.delta.writePartitionColumnsToParquet", 
"false")`
   then the *.parquet files do not contain the partition columns; the partition 
values are stored only in the file path.
   It is not feasible for us to ask the producers not to set this config in their 
Spark jobs before publishing the data.
   I have heard that the Iceberg format expects the partition values to be present 
in the Parquet files. How should we handle this scenario, and does Iceberg 
support any config parameter to read the partition values from the folder path?
   CC: @amogh-jahagirdar 
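
   For context, the "partition values in the file path" layout described above is 
Hive-style partitioning, where each directory segment is a `key=value` pair that 
engines like Spark infer on read. A minimal sketch of how those values can be 
recovered from a path (the bucket/table names and column names here are made up 
for illustration; this is not an Iceberg API, just the parsing idea):

```python
import re

def parse_hive_partitions(path: str) -> dict:
    """Extract Hive-style key=value partition pairs from a file path.

    Matches path segments of the form `name=value`; segments without
    an '=' (bucket, table, file name) are ignored.
    """
    return dict(re.findall(r"([^/=]+)=([^/]+)", path))

# Hypothetical path, mirroring the layout described in the comment.
parts = parse_hive_partitions(
    "s3://bucket/table/event_date=2024-01-01/country=US/part-0000.parquet"
)
# → {'event_date': '2024-01-01', 'country': 'US'}
```

   Iceberg, by contrast, tracks partition values in its own metadata rather than 
inferring them from directory names, which is why files written without the 
partition columns materialized are a problem for it.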


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

