Github user jaceklaskowski commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12569#discussion_r60577867
  
    --- Diff: docs/programming-guide.md ---
    @@ -1328,12 +1328,18 @@ value of the broadcast variable (e.g. if the variable is shipped to a new node l
     Accumulators are variables that are only "added" to through an associative and commutative operation and can
     therefore be efficiently supported in parallel. They can be used to implement counters (as in
     MapReduce) or sums. Spark natively supports accumulators of numeric types, and programmers
    -can add support for new types. If accumulators are created with a name, they will be
    +can add support for new types.
    +
    +If accumulators are created with a name, they will be
     displayed in Spark's UI. This can be useful for understanding the progress of
     running stages (NOTE: this is not yet supported in Python).
     
    +<p style="text-align: center;">
    --- End diff --
    
    I copied it from another file, https://github.com/apache/spark/blob/master/docs/cluster-overview.md, as I didn't know how to include images.
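
    For context, the passage under review describes named accumulators; a minimal, self-contained Scala sketch of the behaviour it documents (the object name, app name, and `local[*]` master are illustrative only; it uses the classic `sc.accumulator(initialValue, name)` API, which was later superseded by AccumulatorV2 in Spark 2.x) could look like this:

        import org.apache.spark.{SparkConf, SparkContext}

        // Sketch only: object name, app name, and local[*] master are illustrative.
        object NamedAccumulatorSketch {
          def main(args: Array[String]): Unit = {
            val conf = new SparkConf().setAppName("NamedAccumulatorSketch").setMaster("local[*]")
            val sc = new SparkContext(conf)

            // Passing a name makes the accumulator visible in Spark's web UI.
            val accum = sc.accumulator(0, "My Counter")

            // Tasks on executors can only add to the accumulator;
            // the driver reads the aggregated value.
            sc.parallelize(1 to 10).foreach(x => accum += x)

            println(accum.value)  // prints 55

            sc.stop()
          }
        }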

