vinothchandar commented on a change in pull request #1260: [WIP] [HUDI-510] Update site documentation in sync with cWiki
URL: https://github.com/apache/incubator-hudi/pull/1260#discussion_r368784217
 
 

 ##########
 File path: docs/_docs/2_2_writing_data.md
 ##########
 @@ -156,41 +157,31 @@ inputDF.write()
 
 ## Syncing to Hive
 
-Both tools above support syncing of the dataset's latest schema to Hive metastore, such that queries can pick up new columns and partitions.
+Both tools above support syncing of the table's latest schema to Hive metastore, such that queries can pick up new columns and partitions.
 In case it's preferable to run this from the command line or in an independent JVM, Hudi provides a `HiveSyncTool`, which can be invoked as below,
-once you have built the hudi-hive module.
+once you have built the hudi-hive module. Following is how we sync the table written above via the datasource writer to Hive metastore.
+
+```shell
+cd hudi-hive
+./run_sync_tool.sh --jdbc-url jdbc:hive2://hiveserver:10000 --user hive --pass hive --partitioned-by partition --base-path <basePath> --database default --table <tableName>
+```
+
+Starting with Hudi 0.5.1, the read optimized view of a merge-on-read table is suffixed '_ro' by default. For backwards compatibility with older Hudi versions,
+an optional HiveSyncConfig, `--skip-ro-suffix`, has been provided to turn off the '_ro' suffixing if desired. Explore other hive sync options using the following command:
 
 ```java
 cd hudi-hive
 ./run_sync_tool.sh
  [hudi-hive]$ ./run_sync_tool.sh --help
-Usage: <main class> [options]
-  Options:
-  * --base-path
-       Basepath of Hudi dataset to sync
-  * --database
-       name of the target database in Hive
-    --help, -h
-       Default: false
-  * --jdbc-url
-       Hive jdbc connect url
-  * --use-jdbc
-       Whether to use jdbc connection or hive metastore (via thrift)
-  * --pass
-       Hive password
-  * --table
-       name of the target table in Hive
-  * --user
-       Hive username
 ```
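For example, the same sync invocation with the '_ro' suffix suppressed might look as follows (the hostname, `<basePath>`, and `<tableName>` are placeholders, as above):

```shell
# Hypothetical invocation: suppress the default '_ro' suffix on the
# read optimized table name when syncing a merge-on-read table.
cd hudi-hive
./run_sync_tool.sh --jdbc-url jdbc:hive2://hiveserver:10000 --user hive --pass hive \
  --partitioned-by partition --base-path <basePath> --database default \
  --table <tableName> --skip-ro-suffix
```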
 
 ## Deletes 
 
-Hudi supports implementing two types of deletes on data stored in Hudi datasets, by enabling the user to specify a different record payload implementation. 
+Hudi supports implementing two types of deletes on data stored in Hudi tables, by enabling the user to specify a different record payload implementation. 
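As a hedged sketch of the record-payload approach (the option key and payload class name are as in Hudi 0.5.x; the table name and the surrounding Spark write call are placeholders — verify against your Hudi version), a soft delete can be issued by upserting the records to delete with an empty payload:

```java
import java.util.HashMap;
import java.util.Map;

public class SoftDeleteOptions {
    // Builds datasource write options that turn an upsert into a delete.
    // "org.apache.hudi.EmptyHoodieRecordPayload" resolves every incoming
    // record to an empty value, which removes it from the table.
    public static Map<String, String> forTable(String tableName) {
        Map<String, String> opts = new HashMap<>();
        opts.put("hoodie.table.name", tableName);
        opts.put("hoodie.datasource.write.operation", "upsert");
        opts.put("hoodie.datasource.write.payload.class",
                 "org.apache.hudi.EmptyHoodieRecordPayload");
        return opts;
    }
}
```

These options would then be passed to the usual `inputDF.write().format("org.apache.hudi").options(...)` call shown earlier, with `inputDF` containing only the records to delete.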
 
 Review comment:
  Let's link to the delete blog from here?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
