umehrot2 commented on a change in pull request #1702:
URL: https://github.com/apache/hudi/pull/1702#discussion_r465353608
##########
File path: hudi-spark/src/main/scala/org/apache/hudi/DefaultSource.scala
##########
@@ -54,29 +58,54 @@ class DefaultSource extends RelationProvider
     val parameters = Map(QUERY_TYPE_OPT_KEY -> DEFAULT_QUERY_TYPE_OPT_VAL) ++ translateViewTypesToQueryTypes(optParams)

     val path = parameters.get("path")
-    if (path.isEmpty) {
-      throw new HoodieException("'path' must be specified.")
-    }

     if (parameters(QUERY_TYPE_OPT_KEY).equals(QUERY_TYPE_SNAPSHOT_OPT_VAL)) {
-      // this is just effectively RO view only, where `path` can contain a mix of
-      // non-hoodie/hoodie path files. set the path filter up
-      sqlContext.sparkContext.hadoopConfiguration.setClass(
-        "mapreduce.input.pathFilter.class",
-        classOf[HoodieROTablePathFilter],
-        classOf[org.apache.hadoop.fs.PathFilter])
-
-      log.info("Constructing hoodie (as parquet) data source with options :" + parameters)
-      log.warn("Snapshot view not supported yet via data source, for MERGE_ON_READ tables. " +
-        "Please query the Hive table registered using Spark SQL.")
-      // simply return as a regular parquet relation
-      DataSource.apply(
-        sparkSession = sqlContext.sparkSession,
-        userSpecifiedSchema = Option(schema),
-        className = "parquet",
-        options = parameters)
-        .resolveRelation()
+      val readPathsStr = parameters.get(DataSourceReadOptions.READ_PATHS_OPT_KEY)

Review comment:
   These additional paths are used in the **incremental query** code to make it work for bootstrapped tables. I need to pass a list of bootstrapped files to read, which is why I had to add support for reading from multiple paths. `spark.read.parquet` already supports reading a list of paths and is already used in the **incremental relation** to read a list of files.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
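For illustration, a minimal sketch of the multi-path read the comment refers to: Spark's `DataFrameReader.parquet` accepts varargs paths, so a comma-separated option value (in the spirit of `READ_PATHS_OPT_KEY`) can be split and passed through. This is not the Hudi implementation itself; the session setup and file paths below are hypothetical.

```scala
// Sketch only: demonstrates that spark.read.parquet can take multiple paths,
// which is what lets IncrementalRelation read a list of bootstrapped files.
// Assumes Spark is on the classpath; the paths below are placeholders.
import org.apache.spark.sql.{DataFrame, SparkSession}

object MultiPathReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("multi-path-read-sketch")
      .master("local[*]")
      .getOrCreate()

    // A comma-separated read-paths option value, split into individual paths.
    val readPathsStr = "/data/tbl/file1.parquet,/data/tbl/file2.parquet"
    val readPaths: Seq[String] = readPathsStr.split(",").toSeq

    // DataFrameReader.parquet(paths: String*) reads all listed files/dirs
    // into a single DataFrame.
    val df: DataFrame = spark.read.parquet(readPaths: _*)
    df.show()

    spark.stop()
  }
}
```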