[ 
https://issues.apache.org/jira/browse/SPARK-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Or resolved SPARK-10543.
-------------------------------
          Resolution: Fixed
            Assignee: Sen Fang
       Fix Version/s: 1.5.1
                      1.6.0
    Target Version/s: 1.6.0, 1.5.1

> Peak Execution Memory Quantile should be Per-task Basis
> -------------------------------------------------------
>
>                 Key: SPARK-10543
>                 URL: https://issues.apache.org/jira/browse/SPARK-10543
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.0
>            Reporter: Sen Fang
>            Assignee: Sen Fang
>            Priority: Minor
>             Fix For: 1.6.0, 1.5.1
>
>
> Currently the Peak Execution Memory quantiles appear to be cumulative rather 
> than computed on a per-task basis. For example, I have seen a value of 2 TB 
> on the quantile metric in one of my jobs, while each individual task in the 
> bottom table shows less than 1 GB.
> [~andrewor14] In your PR https://github.com/apache/spark/pull/7770, the 
> screenshot shows a Max Peak Execution Memory of 792.5 KB, while the bottom 
> table shows about 50 KB per task (unless your workload is skewed).
> The fix seems straightforward: use the `update` rather than the `value` from 
> the accumulable. I'm happy to provide a PR if people agree this is the right 
> behavior.
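
For illustration, a minimal, self-contained Scala sketch of the distinction.
The case class below is a hypothetical stand-in for Spark's AccumulableInfo
(it assumes `update` carries a single task's contribution and `value` the
accumulated running total); it is not the actual StagePage code.

    // Stand-in for the accumulable info attached to each task:
    // `value` is the accumulated total across tasks, `update` is
    // only this task's contribution.
    case class AccumulableInfoSketch(name: String, update: Option[String], value: String)

    object PeakMemoryQuantileSketch {
      val PeakExecutionMemory = "peakExecutionMemory"

      // Per-task peak execution memory: read the task's own `update`,
      // not the cumulative `value`, so quantiles describe single tasks.
      def perTaskPeak(taskAccumulables: Seq[AccumulableInfoSketch]): Long =
        taskAccumulables
          .find(_.name == PeakExecutionMemory)
          .flatMap(_.update)
          .map(_.toLong)
          .getOrElse(0L)

      def main(args: Array[String]): Unit = {
        // Three tasks, each using roughly 1 GB; `value` grows cumulatively.
        val tasks = Seq(
          Seq(AccumulableInfoSketch(PeakExecutionMemory, Some("1000000000"), "1000000000")),
          Seq(AccumulableInfoSketch(PeakExecutionMemory, Some("900000000"),  "1900000000")),
          Seq(AccumulableInfoSketch(PeakExecutionMemory, Some("1100000000"), "3000000000"))
        )
        val perTask = tasks.map(perTaskPeak)
        println(perTask)      // List(1000000000, 900000000, 1100000000)
        println(perTask.max)  // max reflects a single task, not the running total
      }
    }

With `update`, the max quantile stays near a single task's peak (about 1 GB
here); reading `value` instead would report the running total, which matches
the inflated 2 TB symptom described above.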



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
