Repository: spark
Updated Branches:
  refs/heads/branch-2.1 46e212d2f -> b2970d971


[MINOR][DOCS] Fix spacings in Structured Streaming Programming Guide

## What changes were proposed in this pull request?

1. Added the missing space between sentences: `... on static data.The Spark SQL engine will ...` -> `... on static data. The Spark SQL engine will ...`
2. Added the missing colon in the Output Model section.

## How was this patch tested?

None.

Author: Lee Dongjin <dong...@apache.org>

Closes #17564 from dongjinleekr/feature/fix-programming-guide.

(cherry picked from commit b9384382484a9f5c6b389742e7fdf63865de81c0)
Signed-off-by: Sean Owen <so...@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b2970d97
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b2970d97
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b2970d97

Branch: refs/heads/branch-2.1
Commit: b2970d971b108c519eedb6ad06e6ed16c7386d0c
Parents: 46e212d
Author: Lee Dongjin <dong...@apache.org>
Authored: Wed Apr 12 09:12:14 2017 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Wed Apr 12 09:12:23 2017 +0100

----------------------------------------------------------------------
 docs/structured-streaming-programming-guide.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/b2970d97/docs/structured-streaming-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/structured-streaming-programming-guide.md b/docs/structured-streaming-programming-guide.md
index f73cf93..da5c234 100644
--- a/docs/structured-streaming-programming-guide.md
+++ b/docs/structured-streaming-programming-guide.md
@@ -8,7 +8,7 @@ title: Structured Streaming Programming Guide
 {:toc}
 
 # Overview
-Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. You can express your streaming computation the same way you would express a batch computation on static data.The Spark SQL engine will take care of running it incrementally and continuously and updating the final result as streaming data continues to arrive. You can use the [Dataset/DataFrame API](sql-programming-guide.html) in Scala, Java or Python to express streaming aggregations, event-time windows, stream-to-batch joins, etc. The computation is executed on the same optimized Spark SQL engine. Finally, the system ensures end-to-end exactly-once fault-tolerance guarantees through checkpointing and Write Ahead Logs. In short, *Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing without the user having to reason about streaming.*
+Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. You can express your streaming computation the same way you would express a batch computation on static data. The Spark SQL engine will take care of running it incrementally and continuously and updating the final result as streaming data continues to arrive. You can use the [Dataset/DataFrame API](sql-programming-guide.html) in Scala, Java or Python to express streaming aggregations, event-time windows, stream-to-batch joins, etc. The computation is executed on the same optimized Spark SQL engine. Finally, the system ensures end-to-end exactly-once fault-tolerance guarantees through checkpointing and Write Ahead Logs. In short, *Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing without the user having to reason about streaming.*
 
 **Structured Streaming is still ALPHA in Spark 2.1** and the APIs are still experimental. In this guide, we are going to walk you through the programming model and the APIs. First, let's start with a simple example - a streaming word count.
 
@@ -368,7 +368,7 @@ A query on the input will generate the "Result Table". Every trigger interval (s
 
 ![Model](img/structured-streaming-model.png)
 
-The "Output" is defined as what gets written out to the external storage. The output can be defined in different modes
+The "Output" is defined as what gets written out to the external storage. The output can be defined in a different mode:
 
   - *Complete Mode* - The entire updated Result Table will be written to the external storage. It is up to the storage connector to decide how to handle writing of the entire table.
 
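For context, a minimal Scala sketch of what the edited Overview paragraph describes: the streaming computation is written with the ordinary Dataset/DataFrame API, exactly as a batch query on static data would be. The socket source, host, and port below are illustrative assumptions, not part of this commit.

```scala
import org.apache.spark.sql.SparkSession

// Obtain a SparkSession; the application name is an arbitrary choice.
val spark = SparkSession.builder
  .appName("StructuredStreamingWordCountSketch")
  .getOrCreate()

import spark.implicits._

// A streaming DataFrame of text lines read from a socket
// (host and port are assumptions for this sketch).
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// The transformation reads like a batch query on static data:
// split each line into words and count occurrences of each word.
val wordCounts = lines.as[String]
  .flatMap(_.split(" "))
  .groupBy("value")
  .count()
```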
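Continuing the sketch, here is the Output section's Complete Mode when the query is started: on every trigger the entire updated Result Table is written to the sink. The console sink and the checkpoint path are assumptions for illustration; the checkpoint directory is where the engine keeps the write-ahead log behind the Overview's fault-tolerance claim, and end-to-end exactly-once additionally requires a replayable source and a fault-tolerant sink.

```scala
// Start the query in "complete" output mode: the whole updated Result Table
// is rewritten to the sink on every trigger, which suits aggregations such
// as word counts. Other output modes exist ("append", "update"), subject to
// the query type and Spark version.
val query = wordCounts.writeStream
  .outputMode("complete")
  .format("console")  // demonstration sink only, not fault-tolerant
  .option("checkpointLocation", "/tmp/wordcount-checkpoint")  // example path
  .start()

query.awaitTermination()
```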


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
