This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
     new 80fe1ed  [MINOR][DOC] ForeachBatch doc fix.
80fe1ed is described below

commit 80fe1ed4a6974ed5083e5602fe364bc8955d2f8c
Author: Gabor Somogyi <gabor.g.somo...@gmail.com>
AuthorDate: Sat May 25 00:03:59 2019 +0900

    [MINOR][DOC] ForeachBatch doc fix.
    
    ## What changes were proposed in this pull request?
    
    ForeachBatch doc is wrongly formatted. This PR formats it.
    
    ## How was this patch tested?
    
    ```
    cd docs
    SKIP_API=1 jekyll build
    ```
    Manual webpage check.
    
    Closes #24698 from gaborgsomogyi/foreachbatchdoc.
    
    Authored-by: Gabor Somogyi <gabor.g.somo...@gmail.com>
    Signed-off-by: HyukjinKwon <gurwls...@apache.org>
---
 docs/structured-streaming-programming-guide.md | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/docs/structured-streaming-programming-guide.md b/docs/structured-streaming-programming-guide.md
index f0971ab..a93f65b 100644
--- a/docs/structured-streaming-programming-guide.md
+++ b/docs/structured-streaming-programming-guide.md
@@ -2086,12 +2086,20 @@ With `foreachBatch`, you can do the following.
  cause the output data to be recomputed (including possible re-reading of the input data). To avoid recomputations,
  you should cache the output DataFrame/Dataset, write it to multiple locations, and then uncache it. Here is an outline.  
 
-    streamingDF.writeStream.foreachBatch { (batchDF: DataFrame, batchId: Long) =>
-      batchDF.persist()
-      batchDF.write.format(...).save(...)  // location 1
-      batchDF.write.format(...).save(...)  // location 2
-      batchDF.unpersist()
-    }
+<div class="codetabs">
+<div data-lang="scala"  markdown="1">
+
+{% highlight scala %}
+streamingDF.writeStream.foreachBatch { (batchDF: DataFrame, batchId: Long) =>
+  batchDF.persist()
+  batchDF.write.format(...).save(...)  // location 1
+  batchDF.write.format(...).save(...)  // location 2
+  batchDF.unpersist()
+}
+{% endhighlight %}
+
+</div>
+</div>
 
 - **Apply additional DataFrame operations** - Many DataFrame and Dataset operations are not supported 
   in streaming DataFrames because Spark does not support generating incremental plans in those cases. 
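
For context, the snippet reformatted by this commit is only the sink side of a query. A minimal end-to-end sketch is below; the `rate` source, Parquet format, and `/tmp` output paths are illustrative assumptions, not part of the commit:

{% highlight scala %}
// Hypothetical outline of a full query using the foreachBatch pattern
// from the diff above. Requires a Spark 2.4+ runtime on the classpath.
import org.apache.spark.sql.{DataFrame, SparkSession}

object ForeachBatchOutline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("foreachBatch-outline")
      .master("local[2]")
      .getOrCreate()

    // Built-in synthetic source producing (timestamp, value) rows,
    // so the sketch needs no external data.
    val streamingDF = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "5")
      .load()

    val query = streamingDF.writeStream
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        batchDF.persist()                  // cache once per micro-batch...
        batchDF.write.format("parquet")
          .mode("append").save("/tmp/out1") // ...write to location 1
        batchDF.write.format("parquet")
          .mode("append").save("/tmp/out2") // ...and to location 2
        batchDF.unpersist()                // then release the cache
      }
      .start()

    query.awaitTermination()
  }
}
{% endhighlight %}

Persisting before the two writes is the point of the guide's outline: without it, each `save` would re-trigger the micro-batch's computation.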


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
