Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16233#discussion_r94682588
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---
    @@ -510,32 +539,93 @@ class Analyzer(
        * Replaces [[UnresolvedRelation]]s with concrete relations from the catalog.
        */
       object ResolveRelations extends Rule[LogicalPlan] {
    -    private def lookupTableFromCatalog(u: UnresolvedRelation): LogicalPlan = {
    +
    +    // If the unresolved relation is running directly on files, we just return the original
    +    // UnresolvedRelation; the plan will get resolved later. Otherwise we look up the table from
    +    // the catalog and change the default database name if it is a view.
    +    // We usually look up a table from the default database if the table identifier has an empty
    +    // database part; for a view, the default database should be the currentDb when the view was
    +    // created. When resolving a nested view, the view may have a different default database than
    +    // the views it references, so we use the variable `defaultDatabase` to track the current
    +    // default database.
    +    // When the relation we resolve is a view, we fetch view.desc (which is a CatalogTable), set
    +    // the variable `defaultDatabase` to the value of `CatalogTable.viewDefaultDatabase`, and
    +    // look up the relations that the view references using that default database.
    +    // For example:
    +    // |- view1 (defaultDatabase = db1)
    +    //   |- operator
    +    //     |- table2 (defaultDatabase = db1)
    +    //     |- view2 (defaultDatabase = db2)
    +    //        |- view3 (defaultDatabase = db3)
    +    //   |- view4 (defaultDatabase = db4)
    +    // In this case, the view `view1` is a nested view: it directly references `table2`, `view2`
    +    // and `view4`, and the view `view2` references `view3`. On resolving the table, we look up
    +    // the relations `table2`, `view2` and `view4` using the default database `db1`, and look up
    +    // the relation `view3` using the default database `db2`.
    +    //
    +    // Note this is compatible with views defined by older versions of Spark (before 2.2), which
    +    // have an empty defaultDatabase and whose relations in viewText all have the database part
    +    // defined.
    +    def resolveRelation(
    +        plan: LogicalPlan,
    +        defaultDatabase: Option[String] = None): LogicalPlan = plan match {
    +      case u @ UnresolvedRelation(table: TableIdentifier, _) if isRunningDirectlyOnFiles(table) =>
    +        u
    +      case u: UnresolvedRelation =>
    +        val defaultDatabase = AnalysisContext.get.defaultDatabase
    +        val relation = lookupTableFromCatalog(u, defaultDatabase)
    +        resolveRelation(relation, defaultDatabase)
    +      // Hive support is required to resolve a persistent view; the logical plan returned by
    +      // catalog.lookupRelation() should be:
    +      // `SubqueryAlias(_, View(desc: CatalogTable, desc.output, child: LogicalPlan), _)`,
    +      // where the child should be a logical plan parsed from `desc.viewText`.
    +      // If the child of a view is empty, we will throw an AnalysisException later in
    +      // `checkAnalysis`.
    +      case view @ View(desc, _, Some(child)) =>
    +        val context = AnalysisContext(defaultDatabase = desc.viewDefaultDatabase)
    --- End diff ---
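
    To make the default-database tracking concrete, here is a toy,
    self-contained model of the lookup described in the new comment (the
    types and names below are illustrative only, not Spark's actual
    classes):

        // Toy model: views carry their own default database; every relation a
        // view references is resolved against that view's default database.
        sealed trait Plan
        case class Table(name: String) extends Plan
        case class ViewNode(name: String, defaultDb: String, refs: Seq[Plan]) extends Plan

        // Return "db.name" for every relation in the tree.
        def resolve(plan: Plan, defaultDb: String): Seq[String] = plan match {
          case Table(name) => Seq(s"$defaultDb.$name")
          case ViewNode(name, viewDb, refs) =>
            s"$defaultDb.$name" +: refs.flatMap(resolve(_, viewDb))
        }

        // The example tree from the comment:
        val view1 = ViewNode("view1", "db1", Seq(
          Table("table2"),
          ViewNode("view2", "db2", Seq(ViewNode("view3", "db3", Nil))),
          ViewNode("view4", "db4", Nil)))

        resolve(view1, "db1")
        // => Seq(db1.view1, db1.table2, db1.view2, db2.view3, db1.view4)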
    
    Also set the nestedViewLevel?
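
    One way that might look, assuming `AnalysisContext` gains a
    `nestedViewLevel` field and a scoped helper (the names here are
    hypothetical, not the final API):

        // Hypothetical sketch: track how deep we are in nested view resolution,
        // so we can later fail fast on overly deep (or cyclic) view definitions.
        case class AnalysisContext(
            defaultDatabase: Option[String] = None,
            nestedViewLevel: Int = 0)

        object AnalysisContext {
          private val value = new ThreadLocal[AnalysisContext]() {
            override def initialValue: AnalysisContext = AnalysisContext()
          }

          def get: AnalysisContext = value.get()

          // Run `f` with the view's default database and one deeper nesting
          // level, restoring the previous context afterwards.
          def withAnalysisContext[A](database: Option[String])(f: => A): A = {
            val origin = value.get()
            value.set(AnalysisContext(database, origin.nestedViewLevel + 1))
            try f finally value.set(origin)
          }
        }

    The `View` case above could then wrap the resolution of `child` in
    `withAnalysisContext(desc.viewDefaultDatabase)` instead of constructing
    the context by hand.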

