Hello,

I am looking for input from anyone who has successfully replicated data from
Azure Event Hubs (premium tier) via its Kafka-compatible endpoint into Iceberg
tables in Azure Blob Storage (ADLS Gen2) using the Iceberg Sink Connector. We
are using Snowflake's Open Catalog as the Iceberg catalog.

While the connector is running, a metadata folder containing a single .json
file is created, but no data folder. When the table is queried, the correct
schema is returned but the table is empty.

The moment I terminate the active connector, a data folder is created and
populated with a single Parquet file. However, the table still returns no rows
when queried.
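For reference, our connector configuration looks roughly like the sketch below. This is a generic example of an Apache Iceberg Kafka Connect sink config, not our exact settings; the connector name, topic, table, and catalog values are placeholders. The commit interval property is included because the symptoms (data files only appearing at shutdown, table never showing rows) suggest the commit cycle may not be completing against the catalog:

```json
{
  "name": "iceberg-sink-example",
  "config": {
    "connector.class": "org.apache.iceberg.connect.IcebergSinkConnector",
    "topics": "my-event-hub-topic",
    "iceberg.tables": "my_db.my_table",
    "iceberg.catalog.type": "rest",
    "iceberg.catalog.uri": "https://<open-catalog-account>.snowflakecomputing.com/polaris/api/catalog",
    "iceberg.catalog.warehouse": "my_catalog",
    "iceberg.control.topic": "control-iceberg",
    "iceberg.control.commit.interval-ms": "300000"
  }
}
```

If anyone has a working setup, I would be curious whether your connector is committing on the interval (the default is five minutes) and whether the control topic needed any special handling on Event Hubs.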

Thanks for any insight.

Tanner

