vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r369647404
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructure
 using existing infrastructure and it's heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto and hence no additional infrastructure is necessary.
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is the standalone utility to incrementally pull upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingest them into Hudi tables. It runs as a Spark application in two modes.
+
+ - **Run Once Mode** : In this mode, DeltaStreamer performs one ingestion round, which includes incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background operations like cleaning old file versions and archiving the hoodie timeline are automatically executed as part of the run. For Merge-On-Read tables, compaction is also run inline as part of ingestion unless disabled by passing the flag `--disable-compaction`. By default, compaction runs inline for every ingestion round; its frequency can be changed via the property `hoodie.compact.inline.max.delta.commits`. You can either run this Spark application manually or spawn it on a schedule with a cron trigger or a workflow orchestrator such as Apache Airflow.
 
 Review comment:
   > You can either run this Spark application manually or spawn it on a schedule with a cron trigger or a workflow orchestrator such as Apache Airflow.
   
   Could we link to instructions for running this Spark application, and include some example commands?
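  
  For instance, a minimal `spark-submit` sketch of one run-once ingestion round from Kafka (the bundle jar path, properties file, source class and table paths below are illustrative assumptions, not taken from this PR; verify flag names against the docs for your Hudi release):
  
  ```bash
  # Minimal sketch: launch one DeltaStreamer ingestion round via spark-submit.
  # All paths, the jar version and the Kafka properties file are assumptions.
  spark-submit \
    --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
    /path/to/hudi-utilities-bundle_2.11-0.5.1-incubating.jar \
    --props file:///path/to/kafka-source.properties \
    --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
    --source-ordering-field ts \
    --target-base-path hdfs:///user/hive/warehouse/stock_ticks \
    --target-table stock_ticks \
    --table-type MERGE_ON_READ
    # add --disable-compaction to skip inline compaction on Merge-On-Read tables
  ```
  
  Without the `--continuous` flag, DeltaStreamer performs exactly one ingestion round and exits, which is what a cron or Airflow trigger would invoke on each schedule tick.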

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
