Repository: spark
Updated Branches:
  refs/heads/branch-1.0 3143e51d7 -> 886508d3b


Docs: monitoring, streaming programming guide

Fix several awkward wordings and grammatical issues in the following
documents:

*   docs/monitoring.md

*   docs/streaming-programming-guide.md

Author: kballou <kbal...@devnulllabs.io>

Closes #1662 from kennyballou/grammar_fixes and squashes the following commits:

e1b8ad6 [kballou] Docs: monitoring, streaming programming guide

(cherry picked from commit cc820502fb08f71b03237103153c34487b2600b4)
Signed-off-by: Josh Rosen <joshro...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/886508d3
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/886508d3
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/886508d3

Branch: refs/heads/branch-1.0
Commit: 886508d3b56c882b1b935f5b00bb4ce69444c219
Parents: 3143e51
Author: kballou <kbal...@devnulllabs.io>
Authored: Thu Jul 31 14:58:52 2014 -0700
Committer: Josh Rosen <joshro...@apache.org>
Committed: Thu Jul 31 14:59:07 2014 -0700

----------------------------------------------------------------------
 docs/monitoring.md                  | 4 ++--
 docs/streaming-programming-guide.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/886508d3/docs/monitoring.md
----------------------------------------------------------------------
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 2b9e9e5..8578a39 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -33,7 +33,7 @@ application's UI after the application has finished.
 
 If Spark is run on Mesos or YARN, it is still possible to reconstruct the UI of a finished
 application through Spark's history server, provided that the application's event logs exist.
-You can start a the history server by executing:
+You can start the history server by executing:
 
     ./sbin/start-history-server.sh <base-logging-directory>
 
@@ -97,7 +97,7 @@ represents an application's event logs. This creates a web interface at
     <td>
      Indicates whether the history server should use kerberos to login. This is useful
      if the history server is accessing HDFS files on a secure Hadoop cluster. If this is 
-      true it looks uses the configs <code>spark.history.kerberos.principal</code> and
+      true, it uses the configs <code>spark.history.kerberos.principal</code> and
       <code>spark.history.kerberos.keytab</code>. 
     </td>
   </tr>
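
For reference, a minimal sketch of the kerberized history-server setup this hunk documents; the log directory, principal, and keytab path below are placeholders, not values from the commit:

    # hypothetical spark-defaults.conf entries (principal/keytab are assumed values)
    spark.history.kerberos.enabled    true
    spark.history.kerberos.principal  spark/history-host@EXAMPLE.COM
    spark.history.kerberos.keytab     /etc/security/keytabs/spark.keytab

    # then start the server against the base event-log directory
    ./sbin/start-history-server.sh /var/log/spark-events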

http://git-wip-us.apache.org/repos/asf/spark/blob/886508d3/docs/streaming-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 90a0eef..7b8b793 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -939,7 +939,7 @@ Receiving multiple data streams can therefore be achieved by creating multiple i
 and configuring them to receive different partitions of the data stream from the source(s).
 For example, a single Kafka input stream receiving two topics of data can be split into two
 Kafka input streams, each receiving only one topic. This would run two receivers on two workers,
-thus allowing data to received in parallel, and increasing overall throughput.
+thus allowing data to be received in parallel, and increasing overall throughput.
 
 Another parameter that should be considered is the receiver's blocking interval. For most receivers,
 the received data is coalesced together into large blocks of data before storing inside Spark's memory.
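
As a hedged illustration of the multi-receiver pattern the hunk above describes (topic names, ZooKeeper quorum, and consumer group are placeholders; this assumes the Spark 1.x KafkaUtils.createStream API):

    // Sketch: one Kafka receiver per topic, unioned for downstream processing
    import org.apache.spark.streaming.kafka.KafkaUtils

    val topics = Seq("topic1", "topic2")  // placeholder topic names
    val kafkaStreams = topics.map { topic =>
      // one receiver (and thus one worker slot) per input stream
      KafkaUtils.createStream(ssc, "zk-host:2181", "my-consumer-group", Map(topic -> 1))
    }
    val unified = ssc.union(kafkaStreams)  // combine into a single processing pipeline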
@@ -980,7 +980,7 @@ If the number of tasks launched per second is high (say, 50 or more per second),
 of sending out tasks to the slaves maybe significant and will make it hard to achieve sub-second
 latencies. The overhead can be reduced by the following changes:
 
-* **Task Serialization**: Using Kryo serialization for serializing tasks can reduced the task
+* **Task Serialization**: Using Kryo serialization for serializing tasks can reduce the task
   sizes, and therefore reduce the time taken to send them to the slaves.
 
 * **Execution mode**: Running Spark in Standalone mode or coarse-grained Mesos mode leads to
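
And a minimal sketch of the Kryo setting that the Task Serialization bullet in this hunk refers to (the application name is a placeholder; spark.serializer is the standard config key):

    // Sketch: enable Kryo serialization through SparkConf
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("StreamingApp")  // placeholder app name
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")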
