sririshindra opened a new pull request, #3563: URL: https://github.com/apache/polaris/pull/3563
**The Problem**: Currently, Polaris enforces a 1:1 mapping between a catalog and a set of storage credentials, which forces all tables within a catalog to live on a single storage backend. This restriction makes it impossible to build a logical catalog (e.g., "Marketing") that groups tables from disparate data sources (e.g., S3 and Ozone) under a single namespace.

**The Solution**: This PR implements table-level storage credential overrides.
- Granularity: Allows specific tables to define their own storage credentials via table properties, overriding the catalog defaults (a hedged sketch appears after the checklist below).
- Vending Support: Updates the credential vending flow to respect these overrides securely.
- Flexibility: Transforms the catalog into a truly logical structure, agnostic of the underlying physical storage locations of its tables.

Reference: [Detailed Design Doc](https://docs.google.com/document/d/1tf4N8GKeyAAYNoP0FQ1zT1Ba3P1nVGgdw3nmnhSm-u0/edit?usp=sharing)

**TODO**:
- As this is a POC, the change is currently implemented only for S3; it still needs to be extended to GCS, Azure, etc.
- Secure the table-level credentials from unauthorized access.
- Add admin controls for policy definition.

## Checklist

- [x] Don't disclose security issues! (contact [email protected])
- [x] Clearly explained why the changes are needed, or linked related issues: Fixes #
- [x] Added/updated tests with good coverage, or manually tested (and explained how)
- [x] Added comments for complex logic
- [x] Updated `CHANGELOG.md` (if needed)
- [x] Updated documentation in `site/content/in-dev/unreleased` (if needed)
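## Example (illustrative)

To make the override mechanism concrete, here is a minimal sketch in Java against the Iceberg catalog API. The `polaris.storage.*` property keys, the `marketing.clickstream` table identifier, and the `resolveStorageConfig` helper are assumptions made purely for illustration; the actual property names and vending internals are defined in this PR and the design doc.

```java
import java.util.Map;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;

public class TableStorageOverrideSketch {
  // Hypothetical property keys; the real keys live in the PR / design doc.
  static final String STORAGE_TYPE = "polaris.storage.type";
  static final String ROLE_ARN = "polaris.storage.aws.role-arn";
  static final String BASE_LOCATION = "polaris.storage.base-location";

  /** Attach table-level storage credentials that override the catalog defaults. */
  static void applyOverride(Catalog catalog) {
    Table table = catalog.loadTable(TableIdentifier.of("marketing", "clickstream"));
    table.updateProperties()
        .set(STORAGE_TYPE, "S3")
        .set(ROLE_ARN, "arn:aws:iam::123456789012:role/marketing-clickstream")
        .set(BASE_LOCATION, "s3://marketing-bucket/clickstream/")
        .commit();
  }

  /**
   * Sketch of the resolution order the vending flow would follow:
   * a table-level override, when present, wins over the catalog default.
   */
  static Map<String, String> resolveStorageConfig(
      Map<String, String> tableProperties, Map<String, String> catalogDefaults) {
    return tableProperties.containsKey(ROLE_ARN) ? tableProperties : catalogDefaults;
  }
}
```

One apparent upside of carrying the override in table properties is that no metastore schema change is required; the trade-off is that those properties must themselves be protected from unauthorized reads, which is exactly what the TODO items on securing table-level credentials and adding admin controls address.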
