Pranav2612000 commented on issue #3945: URL: https://github.com/apache/arrow-adbc/issues/3945#issuecomment-3852374416
This seems to be the default Snowflake behaviour (https://community.snowflake.com/s/article/How-to-block-uploads-if-the-schema-of-a-Parquet-file-and-the-schema-of-a-table-do-not-match). As the community answer suggests, we may need to check the schema first and only proceed with appending the data if it matches. Is this something we want as part of the driver? I'd be happy to contribute it, though I'm not sure what effect it would have on ingestion speed.
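
As a user-side illustration of what the check might look like (not the driver change itself), here is a minimal sketch using the ADBC Python bindings: fetch the target table's Arrow schema with `adbc_get_table_schema`, compare it against the incoming data's schema, and only then call `adbc_ingest` in append mode. The helper name, the connection placeholder, and the exact-equality comparison are assumptions for illustration; a driver-side check would probably need a looser compatibility test, since Snowflake types don't always round-trip to identical Arrow types.

```python
# Minimal sketch only: validate the target table's schema before appending via ADBC.
# The helper name and the strict equality check are assumptions for illustration.
import pyarrow as pa
import adbc_driver_snowflake.dbapi


def append_if_schema_matches(conn, table_name: str, data: pa.Table) -> None:
    """Append `data` to `table_name` only if its schema matches the table's."""
    # Ask the driver for the existing table's Arrow schema.
    table_schema = conn.adbc_get_table_schema(table_name)

    # Strict comparison; a real check would likely allow compatible-but-not-
    # identical types (e.g. nullability or integer-width differences).
    if not data.schema.equals(table_schema):
        raise ValueError(
            f"Schema mismatch for {table_name}:\n"
            f"  table: {table_schema}\n"
            f"  data:  {data.schema}"
        )

    with conn.cursor() as cur:
        cur.adbc_ingest(table_name, data, mode="append")
    conn.commit()


# Hypothetical usage with placeholder credentials:
# with adbc_driver_snowflake.dbapi.connect("user:pass@account/db/schema") as conn:
#     append_if_schema_matches(conn, "MY_TABLE", my_arrow_table)
```

Doing this inside the driver would add one extra metadata round trip per ingest call, which is where my concern about ingestion speed comes from; it could perhaps be gated behind an option.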

