YannByron commented on code in PR #5737:
URL: https://github.com/apache/hudi/pull/5737#discussion_r888915090


##########
hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/analysis/HoodieSpark3Analysis.scala:
##########
@@ -45,16 +45,22 @@ case class HoodieSpark3Analysis(sparkSession: SparkSession) extends Rule[LogicalPlan]
   with SparkAdapterSupport with ProvidesHoodieConfig {
 
   override def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperatorsDown {
-    case dsv2 @ DataSourceV2Relation(d: HoodieInternalV2Table, _, _, _, _) =>
-      val output = dsv2.output
-      val catalogTable = if (d.catalogTable.isDefined) {
-        Some(d.v1Table)
-      } else {
-        None
-      }
-      val relation = new DefaultSource().createRelation(new SQLContext(sparkSession),
-        buildHoodieConfig(d.hoodieCatalogTable))
-      LogicalRelation(relation, output, catalogTable, isStreaming = false)
+    // NOTE: This step is required since Hudi relations don't currently implement DS V2 Read API
+    case dsv2 @ DataSourceV2Relation(tbl: HoodieInternalV2Table, _, _, _, _) =>
+      val qualifiedTableName = QualifiedTableName(tbl.v1Table.database, tbl.v1Table.identifier.table)
+      val catalog = sparkSession.sessionState.catalog
+
+      catalog.getCachedPlan(qualifiedTableName, () => {

Review Comment:
   No. v1 and v2 differ only in the internal workings of Spark and in a few extra configurations; the interface to the user is not actually affected.
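
   For context, here is a minimal sketch (not the PR's actual code) of the plan-cache pattern the new hunk relies on, assuming Spark 3.x where SessionCatalog exposes getCachedPlan(QualifiedTableName, Callable[LogicalPlan]); the helper name cachedRelation and the buildRelationPlan parameter are hypothetical:

       import java.util.concurrent.Callable

       import org.apache.spark.sql.SparkSession
       import org.apache.spark.sql.catalyst.QualifiedTableName
       import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

       // Hypothetical helper: builds a table's relation plan on the first
       // lookup and serves it from the session catalog's plan cache afterwards.
       def cachedRelation(spark: SparkSession, db: String, table: String)
                         (buildRelationPlan: => LogicalPlan): LogicalPlan = {
         val key = QualifiedTableName(db, table)
         // getCachedPlan invokes the Callable only on a cache miss; later
         // lookups under the same qualified name reuse the computed plan.
         spark.sessionState.catalog.getCachedPlan(key, new Callable[LogicalPlan] {
           override def call(): LogicalPlan = buildRelationPlan
         })
       }

   In the diff above, the Callable body would construct the LogicalRelation via new DefaultSource().createRelation(...), so repeated analyses of the same Hudi table avoid rebuilding the relation.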


