This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 2fbed37  [MINOR][DOC] Add missing space after comma
2fbed37 is described below

commit 2fbed378bf9688cfca649de162f2f8e2582ac06d
Author: “attilapiros” <piros.attila.zs...@gmail.com>
AuthorDate: Mon Mar 25 15:22:07 2019 -0500

    [MINOR][DOC] Add missing space after comma
    
    Adding missing spaces after commas.
    
    Closes #24205 from attilapiros/minor-doc-changes.
    
    Authored-by: “attilapiros” <piros.attila.zs...@gmail.com>
    Signed-off-by: Sean Owen <sean.o...@databricks.com>
---
 docs/configuration.md            | 2 +-
 docs/graphx-programming-guide.md | 2 +-
 docs/mllib-statistics.md         | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index f23dc7c..006d839 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -2296,7 +2296,7 @@ In some cases, you may want to avoid hard-coding certain configurations in a `Sp
 instance, Spark allows you to simply create an empty conf and set spark/spark hadoop properties.
 
 {% highlight scala %}
-val conf = new SparkConf().set("spark.hadoop.abc.def","xyz")
+val conf = new SparkConf().set("spark.hadoop.abc.def", "xyz")
 val sc = new SparkContext(conf)
 {% endhighlight %}
 
diff --git a/docs/graphx-programming-guide.md b/docs/graphx-programming-guide.md
index 6a0be7b..3badb0a 100644
--- a/docs/graphx-programming-guide.md
+++ b/docs/graphx-programming-guide.md
@@ -317,7 +317,7 @@ class Graph[VD, ED] {
   // Iterative graph-parallel computation ==========================================================
   def pregel[A](initialMsg: A, maxIterations: Int, activeDirection: EdgeDirection)(
       vprog: (VertexId, VD, A) => VD,
-      sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexId,A)],
+      sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexId, A)],
       mergeMsg: (A, A) => A)
     : Graph[VD, ED]
   // Basic graph algorithms ========================================================================
diff --git a/docs/mllib-statistics.md b/docs/mllib-statistics.md
index c29400a..6bf013f 100644
--- a/docs/mllib-statistics.md
+++ b/docs/mllib-statistics.md
@@ -239,7 +239,7 @@ Refer to the [`Statistics` Python docs](api/python/pyspark.mllib.html#pyspark.ml
 ### Streaming Significance Testing
 `spark.mllib` provides online implementations of some tests to support use cases
 like A/B testing. These tests may be performed on a Spark Streaming
-`DStream[(Boolean,Double)]` where the first element of each tuple
+`DStream[(Boolean, Double)]` where the first element of each tuple
 indicates control group (`false`) or treatment group (`true`) and the
 second element is the value of an observation.
 

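For context, the corrected `docs/configuration.md` snippet can be exercised as in the sketch below. This is a minimal illustration, not part of the patch: it assumes a local Spark installation, and `spark.hadoop.abc.def` is the placeholder key used in the docs, not a real Hadoop property.

```scala
// Minimal sketch of the corrected configuration example.
// Assumes Spark is on the classpath; "spark.hadoop.abc.def" is the
// placeholder key from the docs, not a real Hadoop property.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("hadoop-conf-example")
  .setMaster("local[*]")
  .set("spark.hadoop.abc.def", "xyz")  // note the space after the comma

val sc = new SparkContext(conf)
// Properties prefixed with "spark.hadoop." are copied into the Hadoop
// Configuration that Spark uses, with the prefix stripped.
println(sc.hadoopConfiguration.get("abc.def"))
sc.stop()
```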

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
