Repository: spark
Updated Branches:
  refs/heads/master 6e0596e26 -> 8ab8ef773


Fix minor typo in docs/cloud-integration.md

## What changes were proposed in this pull request?

Minor typo in docs/cloud-integration.md

## How was this patch tested?

This is trivial enough that it should not affect tests.

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: Jim Kleckner <j...@cloudphysics.com>

Closes #21629 from jkleckner/fix-doc-typo.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8ab8ef77
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8ab8ef77
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8ab8ef77

Branch: refs/heads/master
Commit: 8ab8ef7733b42e94f687a5520332814ac9caeda8
Parents: 6e0596e
Author: Jim Kleckner <j...@cloudphysics.com>
Authored: Mon Jun 25 16:23:23 2018 +0800
Committer: hyukjinkwon <gurwls...@apache.org>
Committed: Mon Jun 25 16:23:23 2018 +0800

----------------------------------------------------------------------
 docs/cloud-integration.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/8ab8ef77/docs/cloud-integration.md
----------------------------------------------------------------------
diff --git a/docs/cloud-integration.md b/docs/cloud-integration.md
index ac1c336..18e8fe7 100644
--- a/docs/cloud-integration.md
+++ b/docs/cloud-integration.md
@@ -70,7 +70,7 @@ be safely used as the direct destination of work with the normal rename-based co
 ### Installation
 
 With the relevant libraries on the classpath and Spark configured with valid credentials,
-objects can be can be read or written by using their URLs as the path to data.
+objects can be read or written by using their URLs as the path to data.
 For example `sparkContext.textFile("s3a://landsat-pds/scene_list.gz")` will create
 an RDD of the file `scene_list.gz` stored in S3, using the s3a connector.
 
@@ -184,7 +184,8 @@ is no need for a workflow of write-then-rename to ensure that files aren't picke
 while they are still being written. Applications can write straight to the monitored directory.
 
 1. Streams should only be checkpointed to a store implementing a fast and
-atomic `rename()` operation Otherwise the checkpointing may be slow and potentially unreliable.
+atomic `rename()` operation.
+Otherwise the checkpointing may be slow and potentially unreliable.
 
 ## Further Reading
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
