GitHub user CrazyJvm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/766#discussion_r12670482
  
    --- Diff: docs/streaming-programming-guide.md ---
    @@ -522,9 +522,9 @@ common ones are as follows.
       <td> <b>reduceByKey</b>(<i>func</i>, [<i>numTasks</i>]) </td>
       <td> When called on a DStream of (K, V) pairs, return a new DStream of 
(K, V) pairs where the
       values for each key are aggregated using the given reduce function. 
<b>Note:</b> By default,
    -  this uses Spark's default number of parallel tasks (2 for local machine, 
8 for a cluster) to
    -  do the grouping. You can pass an optional <code>numTasks</code> argument 
to set a different
    -  number of tasks.</td>
    +  this uses Spark's default number of parallel tasks (local mode is 2, 
while cluster mode is
    --- End diff --
    
    It's good, I think :)
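
    For anyone following along, here is a minimal sketch (not part of the PR) of how the optional `numTasks` argument described in the diff is passed to `reduceByKey` on a DStream. The socket source, port 9999, and the value 16 are illustrative assumptions:

    ```scala
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    // Brings in pair DStream operations such as reduceByKey.
    import org.apache.spark.streaming.StreamingContext._

    object ReduceByKeyNumTasks {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("ReduceByKeyNumTasks").setMaster("local[2]")
        val ssc = new StreamingContext(conf, Seconds(1))

        // Word counts over a text socket source (host and port are placeholders).
        val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" "))
        val pairs = words.map(word => (word, 1))

        // No numTasks: the shuffle uses Spark's default number of parallel tasks.
        val counts = pairs.reduceByKey(_ + _)

        // Optional numTasks argument: 16 reduce tasks here, an arbitrary choice.
        val countsWithNumTasks = pairs.reduceByKey(_ + _, 16)

        countsWithNumTasks.print()
        ssc.start()
        ssc.awaitTermination()
      }
    }
    ```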

