Github user ioana-delaney commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15363#discussion_r106790425
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala ---
    @@ -20,19 +20,347 @@ package org.apache.spark.sql.catalyst.optimizer
     import scala.annotation.tailrec
     
     import org.apache.spark.sql.catalyst.expressions._
    -import org.apache.spark.sql.catalyst.planning.ExtractFiltersAndInnerJoins
    +import org.apache.spark.sql.catalyst.planning.{ExtractFiltersAndInnerJoins, PhysicalOperation}
     import org.apache.spark.sql.catalyst.plans._
     import org.apache.spark.sql.catalyst.plans.logical._
     import org.apache.spark.sql.catalyst.rules._
    +import org.apache.spark.sql.catalyst.CatalystConf
    +
    +/**
    + * Encapsulates star-schema join detection.
    + */
    +case class StarSchemaDetection(conf: CatalystConf) extends PredicateHelper {
    +
    +  /**
    +   * A star schema consists of one or more fact tables referencing a number of dimension
    +   * tables. In general, star-schema joins are detected using the following conditions:
    +   *  1. Informational RI constraints (reliable detection)
    +   *    + The dimension table contains a primary key that is joined to the fact table.
    +   *    + The fact table contains foreign keys referencing multiple dimension tables.
    +   *  2. Cardinality-based heuristics
    +   *    + Usually, the table with the highest cardinality is the fact table.
    +   *    + The table joined with the largest number of tables is the fact table.
    +   *
    +   * To detect star joins, the algorithm uses a combination of the above two conditions.
    +   * The fact table is chosen based on the cardinality heuristics, and the dimension
    +   * tables are chosen based on the RI constraints. A star join will consist of the largest
    +   * fact table joined with the dimension tables on their primary keys. To detect that a
    +   * column is a primary key, the algorithm uses table and column statistics.
    +   *
    +   * Since Catalyst only supports left-deep tree plans, the algorithm currently returns only
    +   * the star join with the largest fact table. Choosing the largest fact table on the
    +   * driving arm to avoid large inners is in general a good heuristic. This restriction can
    +   * be lifted with support for bushy tree plans.
    +   *
    +   * The highlights of the algorithm are the following:
    +   *
    +   * Given a set of joined tables/plans, the algorithm first verifies if they are eligible
    +   * for star join detection. An eligible plan is a base table access with valid statistics.
    +   * A base table access represents Project or Filter operators above a LeafNode. Conservatively,
    +   * the algorithm only considers base table accesses as part of a star join since they provide
    +   * reliable statistics.
    +   *
    +   * If some of the plans are not base table accesses, or statistics are not available, the
    +   * algorithm returns an empty star join plan since, in the absence of statistics, it cannot
    +   * make good planning decisions. Otherwise, the algorithm finds the table with the largest
    +   * cardinality (number of rows), which is assumed to be the fact table.
    +   *
    +   * Next, it computes the set of dimension tables for the current fact table. A dimension
    +   * table is assumed to be in an RI relationship with the fact table. To infer column
    +   * uniqueness, the algorithm compares the number of distinct values with the total number
    +   * of rows in the table. If their relative difference is within certain limits
    +   * (i.e. ndvMaxError * 2, adjusted based on 1TB TPC-DS data), the column is assumed to be unique.
    +   */
    +  def findStarJoins(
    +      input: Seq[LogicalPlan],
    +      conditions: Seq[Expression]): Seq[Seq[LogicalPlan]] = {
    +
    +    val emptyStarJoinPlan = Seq.empty[Seq[LogicalPlan]]
    +
    +    if (!conf.starSchemaDetection || input.size < 2) {
    +      emptyStarJoinPlan
    +    } else {
    +      // Find if the input plans are eligible for star join detection.
    +      // An eligible plan is a base table access with valid statistics.
    +      val foundEligibleJoin = input.forall {
    +        case PhysicalOperation(_, _, t: LeafNode) if t.stats(conf).rowCount.isDefined => true
    +        case _ => false
    +      }
    +
    +      if (!foundEligibleJoin) {
    +        // Some plans don't have stats or are complex plans. Conservatively,
    +        // return an empty star join. This restriction can be lifted
    +        // once statistics are propagated in the plan.
    +        emptyStarJoinPlan
    +      } else {
    +        // Find the fact table using cardinality-based heuristics, i.e.
    +        // the table with the largest number of rows.
    +        val sortedFactTables = input.map { plan =>
    +          TableAccessCardinality(plan, getTableAccessCardinality(plan))
    +        }.collect { case t @ TableAccessCardinality(_, Some(_)) =>
    +          t
    +        }.sortBy(_.size)(implicitly[Ordering[Option[BigInt]]].reverse)
    +
    +        sortedFactTables match {
    +          case Nil =>
    +            emptyStarJoinPlan
    +          case table1 :: table2 :: _
    +            if table2.size.get.toDouble > conf.starSchemaFTRatio * table1.size.get.toDouble =>
    +            // If the two largest tables have a comparable number of rows, return an empty star plan.
    +            // This restriction will be lifted when the algorithm is generalized
    +            // to return multiple star plans.
    +            emptyStarJoinPlan
    +          case TableAccessCardinality(factTable, _) :: _ =>
    +            // Find the fact table joins.
    +            val allFactJoins = input.filterNot { plan =>
    +              plan eq factTable
    +            }.filter { plan =>
    +              val joinCond = findJoinConditions(factTable, plan, conditions)
    +              joinCond.nonEmpty
    +            }
    +
    +            // Find the corresponding join conditions.
    +            val allFactJoinCond = allFactJoins.flatMap { plan =>
    +              val joinCond = findJoinConditions(factTable, plan, conditions)
    +              joinCond
    +            }
    +
    +            // Verify if the join columns have valid statistics
    +            val areStatsAvailable = allFactJoins.forall { dimTable =>
    +              allFactJoinCond.exists {
    +                case BinaryComparison(lhs: AttributeReference, rhs: AttributeReference) =>
    +                  val dimCol = if (dimTable.outputSet.contains(lhs)) lhs else rhs
    +                  val factCol = if (factTable.outputSet.contains(lhs)) lhs else rhs
    +                  hasStatistics(dimCol, dimTable) && hasStatistics(factCol, factTable)
    +                case _ => false
    +              }
    +            }
    +
    +            if (!areStatsAvailable) {
    +              emptyStarJoinPlan
    +            } else {
    +              // Find the subset of dimension tables. A dimension table is assumed to be in
    +              // an RI relationship with the fact table. Also, only consider equi-joins
    +              // between a fact and a dimension table.
    +              val eligibleDimPlans = allFactJoins.filter { dimTable =>
    +                allFactJoinCond.exists {
    +                  case cond @ BinaryComparison(lhs: AttributeReference, rhs: AttributeReference)
    --- End diff --
    
    @cloud-fan Done. Thank you.
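    
    As a side note for readers of this thread, below is a small, self-contained Scala
    sketch of the two heuristics the scaladoc above describes: choosing the fact table by
    cardinality (with a ratio guard when the largest tables are of comparable size) and
    inferring a likely primary key from column statistics. The TableStats/ColStats classes
    and the factTableRatio and ndvTolerance constants are illustrative placeholders, not
    the Catalyst statistics API or the actual conf.starSchemaFTRatio / ndvMaxError values
    used in the patch.
    
        // Hypothetical, simplified statistics holders (not Catalyst classes).
        case class ColStats(distinctCount: BigInt)
        case class TableStats(name: String, rowCount: BigInt, cols: Map[String, ColStats])
    
        object StarJoinHeuristicSketch {
    
          // Placeholder thresholds standing in for conf.starSchemaFTRatio and ndvMaxError * 2.
          val factTableRatio: Double = 0.9
          val ndvTolerance: Double = 0.1
    
          // Heuristic 1: the fact table is the table with the largest row count, unless the
          // runner-up is of comparable size, in which case detection gives up conservatively
          // (mirroring the empty star join plan returned in the diff above).
          def chooseFactTable(tables: Seq[TableStats]): Option[TableStats] =
            tables.sortBy(_.rowCount).reverse.toList match {
              case largest :: second :: _
                  if second.rowCount.toDouble > factTableRatio * largest.rowCount.toDouble =>
                None
              case largest :: _ => Some(largest)
              case Nil => None
            }
    
          // Heuristic 2: a dimension-side join column is treated as a likely primary key if
          // its distinct-value count is within a small relative tolerance of the row count.
          def isLikelyPrimaryKey(table: TableStats, column: String): Boolean =
            table.rowCount > 0 && table.cols.get(column).exists { c =>
              math.abs(c.distinctCount.toDouble / table.rowCount.toDouble - 1.0) <= ndvTolerance
            }
        }
    
    For example, with TPC-DS-like inputs where store_sales has far more rows than date_dim
    and item, chooseFactTable would return store_sales, and a d_date_sk column whose distinct
    count matches date_dim's row count would pass isLikelyPrimaryKey. The actual rule in the
    patch additionally checks join conditions and statistics availability, as shown in the
    diff above.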

