umehrot2 commented on a change in pull request #1702: URL: https://github.com/apache/hudi/pull/1702#discussion_r466320096
##########
File path: hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala
##########
@@ -54,29 +58,54 @@ class DefaultSource extends RelationProvider
     val parameters = Map(QUERY_TYPE_OPT_KEY -> DEFAULT_QUERY_TYPE_OPT_VAL) ++ translateViewTypesToQueryTypes(optParams)
     val path = parameters.get("path")
-    if (path.isEmpty) {
-      throw new HoodieException("'path' must be specified.")
-    }
     if (parameters(QUERY_TYPE_OPT_KEY).equals(QUERY_TYPE_SNAPSHOT_OPT_VAL)) {
-      // this is just effectively RO view only, where `path` can contain a mix of
-      // non-hoodie/hoodie path files. set the path filter up
-      sqlContext.sparkContext.hadoopConfiguration.setClass(
-        "mapreduce.input.pathFilter.class",
-        classOf[HoodieROTablePathFilter],
-        classOf[org.apache.hadoop.fs.PathFilter])
-
-      log.info("Constructing hoodie (as parquet) data source with options :" + parameters)
-      log.warn("Snapshot view not supported yet via data source, for MERGE_ON_READ tables. " +
-        "Please query the Hive table registered using Spark SQL.")
-      // simply return as a regular parquet relation
-      DataSource.apply(
-        sparkSession = sqlContext.sparkSession,
-        userSpecifiedSchema = Option(schema),
-        className = "parquet",
-        options = parameters)
-        .resolveRelation()
+      val readPathsStr = parameters.get(DataSourceReadOptions.READ_PATHS_OPT_KEY)

Review comment:
   Well, right now I added it only for our internal logic, to support incremental queries on bootstrapped tables. Do you want customers to be able to use this as well, i.e. to provide multiple read paths for querying? Is that the ask here?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
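To make the question in the review comment concrete, here is a minimal sketch of what letting callers supply multiple read paths could look like from the Spark DataFrame reader side. Only `DataSourceReadOptions.READ_PATHS_OPT_KEY` comes from the diff above; the `"hudi"` format alias, the comma-separated path encoding, and the example paths are assumptions for illustration, not part of this PR.

```scala
// Hypothetical usage sketch (assumes a live SparkSession `spark` and the
// Hudi spark bundle on the classpath; not runnable standalone).
import org.apache.hudi.DataSourceReadOptions

val df = spark.read
  .format("hudi")
  // Caller-provided list of read paths, e.g. for querying a bootstrapped
  // table whose data spans several locations. The comma-separated encoding
  // here is an assumption for the sake of the example.
  .option(DataSourceReadOptions.READ_PATHS_OPT_KEY,
          "s3://bucket/table/part1,s3://bucket/table/part2")
  .load()
```

Whether this option should remain an internal implementation detail for the bootstrap/incremental code path, or be documented as a public knob like this, is exactly the decision the comment is asking about.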