Repository: spark
Updated Branches:
  refs/heads/branch-0.9 69fc97df0 -> 19cf2f73e


Fixed typo on Spark quick-start docs.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/19cf2f73
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/19cf2f73
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/19cf2f73

Branch: refs/heads/branch-0.9
Commit: 19cf2f73e2e8c2dc50fb8b9ff5d5e1da9e9800a6
Parents: 69fc97d
Author: Tathagata Das <tathagata.das1...@gmail.com>
Authored: Mon Apr 7 18:27:46 2014 -0700
Committer: Tathagata Das <tathagata.das1...@gmail.com>
Committed: Mon Apr 7 18:27:46 2014 -0700

----------------------------------------------------------------------
 docs/quick-start.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/19cf2f73/docs/quick-start.md
----------------------------------------------------------------------
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 13df6be..60e8b1b 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -124,7 +124,7 @@ object SimpleApp {
 }
 {% endhighlight %}
 
-This program just counts the number of lines containing 'a' and the number containing 'b' in the Spark README. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkContext, we initialize a SparkContext as part of the proogram. We pass the SparkContext constructor four arguments, the type of scheduler we want to use (in this case, a local scheduler), a name for the application, the directory where Spark is installed, and a name for the jar file containing the application's code. The final two arguments are needed in a distributed setting, where Spark is running across several nodes, so we include them for completeness. Spark will automatically ship the jar files you list to slave nodes.
+This program just counts the number of lines containing 'a' and the number containing 'b' in the Spark README. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkContext, we initialize a SparkContext as part of the program. We pass the SparkContext constructor four arguments, the type of scheduler we want to use (in this case, a local scheduler), a name for the application, the directory where Spark is installed, and a name for the jar file containing the application's code. The final two arguments are needed in a distributed setting, where Spark is running across several nodes, so we include them for completeness. Spark will automatically ship the jar files you list to slave nodes.
 
 This file depends on the Spark API, so we'll also include an sbt configuration file, `simple.sbt` which explains that Spark is a dependency. This file also adds a repository that Spark depends on:
 
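For reference, a minimal sketch of the four-argument SparkContext construction that the corrected paragraph describes, assuming the 0.9-era (master, appName, sparkHome, jars) constructor; the application name and jar path below are illustrative placeholders:

import org.apache.spark.SparkContext

// local scheduler, application name, Spark installation directory,
// and the jar(s) containing the application's code
val sc = new SparkContext("local", "Simple App", "YOUR_SPARK_HOME",
  List("target/scala-2.10/simple-project_2.10-1.0.jar"))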
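Likewise, a sketch of what the `simple.sbt` mentioned in the trailing context could contain; the version numbers, artifact coordinates, and resolver URL here are assumptions for a 0.9.x build against Scala 2.10:

name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.3"

// declares Spark as a dependency of the application
libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.1"

// the extra repository the doc says Spark depends on (assumed here to be the Akka repository)
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"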
