Hi Ryan,

Thanks for hosting the discussion! I think the table catalog is super
useful, but since this is the first time we allow users to extend the
catalog, it would be better to write down some details, from the end-user
APIs to internal management:
1. How would end-users register/unregister catalogs with the SQL API and
the Scala/Java API?
2. How would end-users manage catalogs, e.g. LIST CATALOGS, USE CATALOG xyz?
3. How do we separate catalog capabilities? Can we create a set of mixin
traits for the catalog API, like SupportsTable, SupportsFunction,
SupportsView, etc.?
4. How should Spark resolve identifiers that include a catalog name? How do
we resolve ambiguity? What if the catalog doesn't support databases? Can
users write `catalogName.tblName` directly?
5. Where does Spark store the list of catalogs? In an in-memory map?
6. How to support atomic CTAS?
7. The data/schema of a table may change over time. When should Spark
determine the table content: during analysis or during planning?
8. ...
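
To make questions 3 and 5 a bit more concrete, here is a rough sketch of
what mixin capability traits and an in-memory catalog registry could look
like. All names here (CatalogPlugin, SupportsTable, CatalogRegistry, etc.)
are hypothetical placeholders for discussion, not actual Spark APIs:

```scala
import scala.collection.mutable

// Question 3: split catalog abilities into mixin traits, so an
// implementation only opts into the capabilities it supports.
trait CatalogPlugin { def name: String }

trait SupportsTable extends CatalogPlugin {
  def loadTable(ident: Seq[String]): String // placeholder return type
}

trait SupportsFunction extends CatalogPlugin {
  def loadFunction(ident: Seq[String]): String
}

// A catalog that supports tables but not functions.
class MyCatalog extends CatalogPlugin with SupportsTable {
  override def name: String = "my_catalog"
  override def loadTable(ident: Seq[String]): String =
    s"table:${ident.mkString(".")}"
}

// Question 5: a minimal in-memory registry keyed by catalog name.
object CatalogRegistry {
  private val catalogs = mutable.Map.empty[String, CatalogPlugin]
  def register(c: CatalogPlugin): Unit = catalogs(c.name) = c
  def lookup(name: String): Option[CatalogPlugin] = catalogs.get(name)
  def list(): Seq[String] = catalogs.keys.toSeq
}
```

With this shape, Spark could answer "does this catalog support views?" with
a simple instanceOf check, and LIST CATALOGS would just read the registry.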

Since the catalog API is not only developer-facing but also user-facing, I
think it's better to have a doc explaining both the developer concerns and
the end-user concerns. The doc would also be good for future reference, and
could be used in release notes.

Thanks,
Wenchen

On Wed, Nov 28, 2018 at 12:54 PM JackyLee <qcsd2...@163.com> wrote:

> +1
>
> Please add me to the Google Hangout invite.
>