Repository: spark
Updated Branches:
  refs/heads/branch-2.2 e6bbdb0c5 -> 8d658b90b


Fixed typos in docs

## What changes were proposed in this pull request?

Typos at a couple of places in the docs.

## How was this patch tested?

Built the project, including the docs.

Please review http://spark.apache.org/contributing.html before opening a pull 
request.

Author: ymahajan <ymaha...@snappydata.io>

Closes #17690 from ymahajan/master.

(cherry picked from commit bdc60569196e9ae4e9086c3e514a406a9e8b23a6)
Signed-off-by: Reynold Xin <r...@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8d658b90
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8d658b90
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8d658b90

Branch: refs/heads/branch-2.2
Commit: 8d658b90b9f08ed4a3a899aad5d3ea77986b7302
Parents: e6bbdb0
Author: ymahajan <ymaha...@snappydata.io>
Authored: Wed Apr 19 20:08:31 2017 -0700
Committer: Reynold Xin <r...@databricks.com>
Committed: Wed Apr 19 20:08:37 2017 -0700

----------------------------------------------------------------------
 docs/sql-programming-guide.md                  | 2 +-
 docs/structured-streaming-programming-guide.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/8d658b90/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 28942b6..490c1ce 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -571,7 +571,7 @@ be created by calling the `table` method on a `SparkSession` with the name of th
 For file-based data source, e.g. text, parquet, json, etc. you can specify a custom table path via the
 `path` option, e.g. `df.write.option("path", "/some/path").saveAsTable("t")`. When the table is dropped,
 the custom table path will not be removed and the table data is still there. If no custom table path is
-specifed, Spark will write data to a default table path under the warehouse directory. When the table is
+specified, Spark will write data to a default table path under the warehouse directory. When the table is
 dropped, the default table path will be removed too.

 Starting from Spark 2.1, persistent datasource tables have per-partition metadata stored in the Hive metastore. This brings several benefits:
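
For readers of this archive, a minimal, self-contained sketch of the behaviour the hunk above describes. The app name, table names, path, and sample data are made up for illustration and are not part of the patch:

import org.apache.spark.sql.SparkSession

object SaveAsTablePathExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("saveAsTable-path-example")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

    // Custom table path: dropping table "t" later leaves /some/path intact.
    df.write.option("path", "/some/path").saveAsTable("t")

    // No path option: data lands under the default table path beneath
    // spark.sql.warehouse.dir and is removed when the table is dropped.
    df.write.saveAsTable("t_default")

    spark.stop()
  }
}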

http://git-wip-us.apache.org/repos/asf/spark/blob/8d658b90/docs/structured-streaming-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/structured-streaming-programming-guide.md b/docs/structured-streaming-programming-guide.md
index 3cf7151..5b18cf2 100644
--- a/docs/structured-streaming-programming-guide.md
+++ b/docs/structured-streaming-programming-guide.md
@@ -778,7 +778,7 @@ windowedCounts = words \
 In this example, we are defining the watermark of the query on the value of the column "timestamp",
 and also defining "10 minutes" as the threshold of how late is the data allowed to be. If this query
 is run in Update output mode (discussed later in [Output Modes](#output-modes) section),
-the engine will keep updating counts of a window in the Resule Table until the window is older
+the engine will keep updating counts of a window in the Result Table until the window is older
 than the watermark, which lags behind the current event time in column "timestamp" by 10 minutes.
 Here is an illustration.
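
Similarly, a hedged sketch of the watermarked windowed count that the prose in this hunk refers to. It substitutes the built-in rate source and a console sink for the guide's socket-based word-count example, so the source, sink, and rate are assumptions made only for this illustration:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, window}

object WatermarkExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("watermark-example")
      .getOrCreate()

    // The rate source emits rows of (timestamp: Timestamp, value: Long)
    // and stands in here for a real event stream.
    val events = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "5")
      .load()

    // Data may arrive up to 10 minutes late relative to the max event time
    // seen so far; windows older than the watermark stop being updated in
    // the Result Table.
    val windowedCounts = events
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window(col("timestamp"), "10 minutes", "5 minutes"))
      .count()

    // Update output mode emits only the Result Table rows that changed
    // since the last trigger.
    val query = windowedCounts.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}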
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
