Repository: spark
Updated Branches:
  refs/heads/master c34b546d6 -> e28a8c589


[SPARK-17089][DOCS] Remove api doc link for mapReduceTriplets operator

## What changes were proposed in this pull request?

Remove the API doc link for the `mapReduceTriplets` operator. The operator has 
been removed from the latest API, so the link sends users to a page where 
`mapReduceTriplets` no longer exists; removing the dead link is better than 
confusing the user.

## How was this patch tested?
Ran all the test cases.

![screenshot from 2016-08-16 23-08-25](https://cloud.githubusercontent.com/assets/8075390/17709393/8cfbf75a-6406-11e6-98e6-38f7b319d833.png)

Author: sandy <phal...@gmail.com>

Closes #14669 from phalodi/SPARK-17089.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e28a8c58
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e28a8c58
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e28a8c58

Branch: refs/heads/master
Commit: e28a8c5899c48ff065e2fd3bb6b10c82b4d39c2c
Parents: c34b546
Author: sandy <phal...@gmail.com>
Authored: Tue Aug 16 12:50:55 2016 -0700
Committer: Reynold Xin <r...@databricks.com>
Committed: Tue Aug 16 12:50:55 2016 -0700

----------------------------------------------------------------------
 docs/graphx-programming-guide.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/e28a8c58/docs/graphx-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/graphx-programming-guide.md b/docs/graphx-programming-guide.md
index 6f738f0..58671e6 100644
--- a/docs/graphx-programming-guide.md
+++ b/docs/graphx-programming-guide.md
@@ -24,7 +24,6 @@ description: GraphX graph processing library guide for Spark SPARK_VERSION_SHORT
 [Graph.outerJoinVertices]: api/scala/index.html#org.apache.spark.graphx.Graph@outerJoinVertices[U,VD2](RDD[(VertexId,U)])((VertexId,VD,Option[U])⇒VD2)(ClassTag[U],ClassTag[VD2]):Graph[VD2,ED]
 [Graph.aggregateMessages]: api/scala/index.html#org.apache.spark.graphx.Graph@aggregateMessages[A]((EdgeContext[VD,ED,A])⇒Unit,(A,A)⇒A,TripletFields)(ClassTag[A]):VertexRDD[A]
 [EdgeContext]: api/scala/index.html#org.apache.spark.graphx.EdgeContext
-[Graph.mapReduceTriplets]: api/scala/index.html#org.apache.spark.graphx.Graph@mapReduceTriplets[A](mapFunc:org.apache.spark.graphx.EdgeTriplet[VD,ED]=&gt;Iterator[(org.apache.spark.graphx.VertexId,A)],reduceFunc:(A,A)=&gt;A,activeSetOpt:Option[(org.apache.spark.graphx.VertexRDD[_],org.apache.spark.graphx.EdgeDirection)])(implicitevidence$10:scala.reflect.ClassTag[A]):org.apache.spark.graphx.VertexRDD[A]
 [GraphOps.collectNeighborIds]: api/scala/index.html#org.apache.spark.graphx.GraphOps@collectNeighborIds(EdgeDirection):VertexRDD[Array[VertexId]]
 [GraphOps.collectNeighbors]: api/scala/index.html#org.apache.spark.graphx.GraphOps@collectNeighbors(EdgeDirection):VertexRDD[Array[(VertexId,VD)]]
 [RDD Persistence]: programming-guide.html#rdd-persistence
@@ -596,7 +595,7 @@ compute the average age of the more senior followers of each user.
 ### Map Reduce Triplets Transition Guide (Legacy)
 
 In earlier versions of GraphX neighborhood aggregation was accomplished using the
-[`mapReduceTriplets`][Graph.mapReduceTriplets] operator:
+`mapReduceTriplets` operator:
 
 {% highlight scala %}
 class Graph[VD, ED] {
@@ -607,7 +606,7 @@ class Graph[VD, ED] {
 }
 {% endhighlight %}
 
-The [`mapReduceTriplets`][Graph.mapReduceTriplets] operator takes a user defined map function which
+The `mapReduceTriplets` operator takes a user defined map function which
 is applied to each triplet and can yield *messages* which are aggregated using the user defined
 `reduce` function.
 However, we found the user of the returned iterator to be expensive and it inhibited our ability to
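
For context: the transition guide touched by this patch steers users from
`mapReduceTriplets` to `aggregateMessages`. The following is a minimal sketch of
that replacement, doing the "average age of the more senior followers"
aggregation the guide references; it is not part of the patch, and the object
name, toy graph data, and local-mode settings are illustrative assumptions.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD

object AggregateMessagesSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical local-mode context, just for a runnable demo.
    val sc = new SparkContext("local[2]", "aggregate-messages-sketch")

    // Toy follower graph: vertex attribute is the user's age; an edge
    // src -> dst means "src follows dst".
    val users: RDD[(VertexId, Double)] =
      sc.parallelize(Seq((1L, 42.0), (2L, 27.0), (3L, 65.0)))
    val follows: RDD[Edge[Int]] =
      sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(3L, 2L, 1), Edge(2L, 3L, 1)))
    val graph: Graph[Double, Int] = Graph(users, follows)

    // sendMsg runs once per edge triplet and pushes messages through the
    // EdgeContext; mergeMsg combines the messages arriving at each vertex.
    val seniorFollowers: VertexRDD[(Int, Double)] =
      graph.aggregateMessages[(Int, Double)](
        ctx => if (ctx.srcAttr > ctx.dstAttr) ctx.sendToDst((1, ctx.srcAttr)),
        (a, b) => (a._1 + b._1, a._2 + b._2))

    // Report the average age of each user's more senior followers.
    seniorFollowers.collect().foreach { case (id, (count, totalAge)) =>
      println(s"user $id: average senior-follower age ${totalAge / count}")
    }

    sc.stop()
  }
}
```

Because `sendMsg` writes into the `EdgeContext` rather than returning an
iterator of messages, this API avoids the per-triplet iterator cost that the
guide text above calls out as the reason `mapReduceTriplets` was retired.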


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
