vinothchandar commented on a change in pull request #1333: [HUDI-589][DOCS] Fix querying_data page
URL: https://github.com/apache/incubator-hudi/pull/1333#discussion_r379225601
 
 

 ##########
 File path: docs/_docs/2_3_querying_data.md
 ##########
 @@ -84,55 +102,53 @@ using the hive session property for incremental queries: `set hive.fetch.task.co
 would ensure Map Reduce execution is chosen for a Hive query, which combines partitions (comma
 separated) and calls InputFormat.listStatus() only once with all those partitions.
 
-## Spark
+## Spark datasource
 
-Spark provides much easier deployment & management of Hudi jars and bundles into jobs/notebooks. At a high level, there are two ways to access Hudi tables in Spark.
+Hudi COPY_ON_WRITE tables can be queried via Spark datasource similar to how standard datasources work (e.g: `spark.read.parquet`). 
+Both snapshot querying and incremental querying are supported here. Typically spark jobs require adding `--jars <path to jar>/hudi-spark-bundle_2.11:0.5.1-incubating`
+to classpath of drivers and executors. Refer [building Hudi](https://github.com/apache/incubator-hudi#building-apache-hudi-from-source) for build instructions. 
+When using spark shell instead of --jars, --packages can also be used to fetch the hudi-spark-bundle like this: `--packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating`
 
 Review comment:
   use `--jars` `--packages`
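
As a minimal sketch of how the flags and the datasource read described in the diff above fit together (not part of the PR): the bundle coordinates are the ones quoted above, while the table path and the partition-glob depth are hypothetical, and the read uses the `org.apache.hudi` datasource format name.

```scala
// Start the shell with the bundle on the classpath, per the doc text above.
// Either flag works; --packages pulls the bundle from Maven Central:
//   spark-shell --jars <path to jar>/hudi-spark-bundle_2.11-0.5.1-incubating.jar
//   spark-shell --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating

// Snapshot query of a COPY_ON_WRITE table through the Spark datasource.
// "/tmp/hudi/trips" and the two-level partition glob are hypothetical.
val tripsDF = spark.read
  .format("org.apache.hudi")
  .load("/tmp/hudi/trips/*/*")

tripsDF.createOrReplaceTempView("trips_snapshot")
spark.sql("select count(*) from trips_snapshot").show()
```

Incremental querying mentioned in the diff follows the same `spark.read.format(...)` pattern with additional read options; the exact option keys vary by Hudi version, so they are not shown here.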
