it should be minimally difficult to switch this around on the
>> Iceberg side; we only have to move the initialize code out and
>> duplicate it. Not a huge cost.
>>
>> On Sun, Sep 22, 2024 at 11:39 PM Wenchen Fan wrote:
>>
It's a buggy behavior that a custom v2 catalog (without extending
DelegatingCatalogExtension) expects Spark to still use the v1 DDL commands
to operate on the tables inside it. This is also why the third-party
catalogs (e.g. Unity Catalog and Apache Polaris) cannot be used to
overwrite `spark_cata