dtenedor commented on code in PR #40732:
URL: https://github.com/apache/spark/pull/40732#discussion_r1165912480


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveDefaultColumns.scala:
##########
@@ -47,9 +47,11 @@ import org.apache.spark.sql.types._
  * (1, 5)
  * (4, 6)
  *
- * @param catalog  the catalog to use for looking up the schema of INSERT INTO table objects.
+ * @param resolveRelation function to resolve relations from the catalog. This should generally map
+ *                        to the 'resolveRelationOrTempView' method of the ResolveRelations rule.
  */
-case class ResolveDefaultColumns(catalog: SessionCatalog) extends Rule[LogicalPlan] {
+case class ResolveDefaultColumns(
+    resolveRelation: UnresolvedRelation => LogicalPlan) extends Rule[LogicalPlan] {

Review Comment:
   Good question; the answer is that the `ResolveRelations` rule is an object nested inside the `Analyzer`. So we would need to either pass a reference to the current analyzer, pass a reference to the `ResolveRelations` object as a whole (which is what I did before), or pass a reference to just the method of interest (which is what Gengliang asked me to do instead). We don't anticipate needing a different resolver anywhere; we just have to make the method visible.
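   The pattern under discussion can be sketched in plain Scala. This is a simplified, hypothetical stand-in (none of these classes are the real Spark ones): a method on an object nested inside an analyzer is eta-expanded into a function value and passed to the rule, so the rule depends only on that one function rather than on the analyzer or the nested object.

```scala
// Minimal sketch of passing a nested object's method as a function value.
// All names are illustrative stand-ins, not the actual Catalyst classes.
object Demo {
  sealed trait LogicalPlan
  case class UnresolvedRelation(name: String) extends LogicalPlan
  case class ResolvedRelation(name: String) extends LogicalPlan

  // Stand-in for the Analyzer, with a rule object nested inside it.
  class Analyzer {
    object ResolveRelations {
      def resolveRelationOrTempView(u: UnresolvedRelation): LogicalPlan =
        ResolvedRelation(u.name)
    }
  }

  // The rule depends only on a function value, not on the Analyzer itself.
  case class ResolveDefaultColumns(
      resolveRelation: UnresolvedRelation => LogicalPlan) {
    def apply(plan: LogicalPlan): LogicalPlan = plan match {
      case u: UnresolvedRelation => resolveRelation(u)
      case other                 => other
    }
  }

  val analyzer = new Analyzer
  // Eta-expansion turns the method into a function value, so only the
  // method needs to be visible, not the whole nested object.
  val rule = ResolveDefaultColumns(
    analyzer.ResolveRelations.resolveRelationOrTempView)

  def main(args: Array[String]): Unit =
    println(rule(UnresolvedRelation("t"))) // prints ResolvedRelation(t)
}
```

   The same wiring would happen once, inside the analyzer, when the rule batch is constructed; everywhere else the rule is just a value with a `UnresolvedRelation => LogicalPlan` dependency.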



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

