[ https://issues.apache.org/jira/browse/SPARK-15777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589723#comment-15589723 ]
Nattavut Sutyanyong commented on SPARK-15777:
---------------------------------------------

It is not clear to me how we would apply multiple sets of rules to a SQL statement referencing objects from two or more data sources. For instance, suppose DataSource1 is implemented with Analyzer rule set #1 and DataSource2 with Analyzer rule set #2, in addition to the built-in Analyzer rule set #0. If the input SQL statement is a join between an object of DataSource1 and an object of DataSource2, in which order do we apply the three sets of Analyzer rules to this statement? Specifically, which sets of Analyzer rules do we apply to the Join operator? (A toy sketch of this ordering question appears after the quoted description below.)

> Catalog federation
> ------------------
>
>                 Key: SPARK-15777
>                 URL: https://issues.apache.org/jira/browse/SPARK-15777
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>            Reporter: Reynold Xin
>         Attachments: SparkFederationDesign.pdf
>
>
> This is a ticket to track progress toward supporting federation of multiple
> external catalogs. This would require establishing an API (similar to the
> current ExternalCatalog API) for getting information about external
> catalogs, and the ability to convert a table into a data source table.
> As part of this, we would also need to be able to support more than a
> two-level table identifier (database.table). At the very least we would
> need a three-level identifier for tables (catalog.database.table). A
> possible direction is to support arbitrary-level hierarchical namespaces,
> similar to file systems.
> Once we have this implemented, we can convert the current Hive catalog
> implementation into an external catalog that is "mounted" into an internal
> catalog.
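To make the ordering question concrete, below is a minimal, hypothetical sketch in Scala. None of these names are Spark APIs: Plan and AnalyzerRule merely stand in for Catalyst's LogicalPlan and Rule[LogicalPlan]. It encodes one candidate policy (built-in set #0 first, then each source's set in registration order) and shows where that policy is underspecified.

    // Hypothetical model of the rule-ordering question; not Spark code.
    object RuleOrderingSketch {
      sealed trait Plan
      final case class Relation(source: String, table: String) extends Plan
      final case class Join(left: Plan, right: Plan) extends Plan

      // Stand-in for Catalyst's Rule[LogicalPlan].
      type AnalyzerRule = Plan => Plan

      // Candidate policy: run built-in set #0, then each data source's set
      // in registration order, one pass per batch. For a plan such as
      // Join(Relation("ds1", "t1"), Relation("ds2", "t2")), both source
      // rule sets see the Join node, which is exactly the ambiguity raised
      // in the comment above.
      def applyRuleSets(plan: Plan, ruleSets: Seq[Seq[AnalyzerRule]]): Plan =
        ruleSets.foldLeft(plan) { (p, batch) =>
          batch.foldLeft(p)((q, rule) => rule(q))
        }
    }

Under this policy a source's rules can observe (and rewrite) a Join whose other side belongs to a different source; restricting each set to the subtrees owned by its own source would be the obvious alternative, at the cost of leaving the Join operator itself to the built-in set.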
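On the description's multi-level identifier point, a hypothetical shape for an arbitrary-depth name (illustrative only; this is not an existing Spark class):

    // Hypothetical arbitrary-depth identifier, generalizing database.table
    // to catalog.database.table and deeper, file-system-like namespaces.
    final case class MultipartIdentifier(parts: Seq[String]) {
      require(parts.nonEmpty, "an identifier needs at least a table name")
      def table: String = parts.last            // last part names the table
      def namespace: Seq[String] = parts.init   // e.g. Seq("catalog", "db")
      override def toString: String = parts.map(p => s"`$p`").mkString(".")
    }

For example, MultipartIdentifier(Seq("hive_prod", "sales", "orders")) prints as `hive_prod`.`sales`.`orders`, while a two-part name keeps today's database.table behavior unchanged.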
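Similarly, the "mounted" catalog idea from the description could look roughly like the following; MountableCatalog and FederatedCatalog are placeholder names, not Spark's ExternalCatalog:

    // Hypothetical routing of multi-part names to mounted catalogs, so the
    // current Hive catalog becomes one mount among several.
    trait MountableCatalog {
      def lookupTable(namespace: Seq[String], table: String): Option[String]
    }

    final class FederatedCatalog(mounts: Map[String, MountableCatalog]) {
      // Route a name to the catalog mounted under its first part; names
      // without a known catalog prefix fall through to the default mount.
      def resolve(parts: Seq[String],
                  default: MountableCatalog): Option[String] = {
        require(parts.nonEmpty, "need at least a table name")
        parts match {
          case head +: rest if rest.nonEmpty && mounts.contains(head) =>
            mounts(head).lookupTable(rest.init, rest.last)
          case _ =>
            default.lookupTable(parts.init, parts.last)
        }
      }
    }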