97harsh opened a new issue, #3469: URL: https://github.com/apache/polaris/issues/3469
### Is your feature request related to a problem? Please describe.

Polaris supports per-table branching via Iceberg's branch refs, but operating on branches across many tables is manual and non-atomic. We have hundreds of tables. The pain points:

- No cross-table branch operations: creating or merging a branch for N tables requires N individual API calls.
- No atomic multi-table promotion: merging tables one by one means clients can observe inconsistent states across tables mid-promotion.
- Slow branch listing at scale: discovering which tables have a given branch requires loading each table's metadata individually.

### Describe the solution you'd like

Catalog-level branching that enables:

- Creating and merging a branch across a namespace or a set of tables in a single operation
- Atomic promotion, so all tables flip together
- Efficient branch listing without loading each table's metadata

Conceptually:

```
polaris branch create backfill-2024 --from main --namespace my_ns
# ... run operations against the branch ...
polaris branch merge backfill-2024 --into main
# All tables promoted atomically
```

### Describe alternatives you've considered

**Nessie**: provides catalog-level versioning with Git-like semantics, atomic multi-table commits, and efficient branch operations, but it requires adopting a separate catalog.

**Scripted per-table branching**: what we do today; operationally complex, non-atomic, and it doesn't scale well (a sketch is included at the end of this issue).

**Blue-green table swaps**: duplicate the tables and swap via rename; error-prone at 200 tables.

### Additional context

Nessie has proven this pattern works well. Given the discussions around Polaris + Nessie convergence, this seems like a natural feature to bring into Polaris. Happy to contribute to design/implementation.
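For concreteness, here is a minimal sketch of the "scripted per-table branching" workaround described above, written against Iceberg's Java `ManageSnapshots` API over a REST catalog. The endpoint URI, credential, and warehouse name are placeholders rather than real Polaris configuration; the namespace and branch names match the conceptual example above. This is only meant to show why the current approach amounts to N independent, non-atomic commits, not to propose an implementation.

```java
import java.util.List;
import java.util.Map;

import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Namespace;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.rest.RESTCatalog;

public class PerTableBranching {

  public static void main(String[] args) {
    // Placeholder connection settings; substitute your own Polaris REST endpoint.
    RESTCatalog catalog = new RESTCatalog();
    catalog.initialize(
        "polaris",
        Map.of(
            "uri", "https://polaris.example.com/api/catalog",
            "credential", "<client-id>:<client-secret>",
            "warehouse", "my_catalog"));

    Namespace ns = Namespace.of("my_ns");
    String branch = "backfill-2024";
    List<TableIdentifier> tables = catalog.listTables(ns);

    // "Create" the branch: one commit per table. N calls, nothing ties them together.
    for (TableIdentifier id : tables) {
      Table table = catalog.loadTable(id);
      if (table.currentSnapshot() == null) {
        continue; // skip tables with no snapshots yet
      }
      table.manageSnapshots()
          .createBranch(branch, table.currentSnapshot().snapshotId())
          .commit();
    }

    // ... jobs write to the branch on each table individually ...

    // "Merge" the branch: again one commit per table. If this loop fails halfway,
    // readers of main observe a mix of promoted and unpromoted tables.
    for (TableIdentifier id : tables) {
      Table table = catalog.loadTable(id);
      table.manageSnapshots()
          .fastForwardBranch("main", branch)
          .commit();
    }
  }
}
```

Listing which tables carry the branch has the same shape today: load every table's metadata and inspect its refs, which is the slow path called out in the pain points above.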
