rahij opened a new issue, #357: URL: https://github.com/apache/iceberg-python/issues/357
### Feature Request / Improvement

I am trying to understand how the new Arrow write API can support distributed writes, similar to Spark. I have a use case where several machines each write a separate Arrow dataset, and all of those writes are committed in the same Iceberg transaction. I assume this should be theoretically possible, since it works with Spark, but I was wondering whether there are any plans to support this in the Arrow write API. Thanks!

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
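The workflow described here mirrors Spark's two-phase commit pattern: each worker writes its data files independently and reports only file metadata back, and a single coordinator then commits the whole file list as one atomic snapshot. As a purely conceptual sketch (all names here are hypothetical stand-ins, not PyIceberg or Spark APIs), the pattern looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def worker_write(worker_id, rows):
    # Phase 1: each worker writes its own data file and returns only
    # metadata about it; no shared table state is touched yet.
    path = f"data/worker-{worker_id}.parquet"  # hypothetical path
    return {"path": path, "record_count": len(rows)}

class TableTransaction:
    """Toy stand-in for an Iceberg transaction: collects data files
    and commits them all in a single atomic snapshot."""
    def __init__(self):
        self.snapshots = []
        self._pending = []

    def add_data_files(self, files):
        self._pending.extend(files)

    def commit(self):
        # Phase 2: one commit makes every worker's files visible at once.
        self.snapshots.append(list(self._pending))
        self._pending = []

# Simulate three machines writing in parallel, then one commit.
partitions = [[1, 2], [3, 4, 5], [6]]
with ThreadPoolExecutor() as pool:
    data_files = list(pool.map(worker_write, range(3), partitions))

txn = TableTransaction()
txn.add_data_files(data_files)
txn.commit()

assert len(txn.snapshots) == 1      # a single snapshot was created...
assert len(txn.snapshots[0]) == 3   # ...containing all three workers' files
```

The key property is that readers never observe a partial write: either the snapshot containing all three files exists, or none of them are visible.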
