GitHub user juliuszsompolski opened a pull request:

    https://github.com/apache/spark/pull/19915

    [SPARK-22721] BytesToBytesMap peak memory usage not accurate after reset()

    ## What changes were proposed in this pull request?
    
    BytesToBytesMap doesn't update its peak memory usage before shrinking back to
    the initial capacity in reset(), so after a disk spill one never knows how
    large the hash table was before spilling.
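
    For context, a minimal standalone sketch (a hypothetical class, not Spark's
    actual BytesToBytesMap; the method names are only chosen to echo the real
    class) of the pattern this patch applies: fold the current table size into
    the peak counter before reset() shrinks it, otherwise the peak reported
    after a spill never reflects the pre-spill size.

    ```java
    // Hypothetical illustration only; Spark's real class is far more involved.
    public class PeakTrackingMap {
        private long currentMemory;   // bytes currently held by the hash table
        private long peakMemory;      // largest value of currentMemory observed

        public void grow(long bytes) {
            currentMemory += bytes;   // table grows as records are inserted
        }

        private void updatePeakMemoryUsed() {
            peakMemory = Math.max(peakMemory, currentMemory);
        }

        public long getPeakMemoryUsedBytes() {
            updatePeakMemoryUsed();
            return peakMemory;
        }

        /** Shrink back to the initial capacity, e.g. after spilling to disk. */
        public void reset() {
            updatePeakMemoryUsed();   // capture the pre-spill size first (the fix)
            currentMemory = 0;        // then shrink back to the initial capacity
        }
    }
    ```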
    
    ## How was this patch tested?
    
    Checked manually.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/juliuszsompolski/apache-spark SPARK-22721

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19915.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19915
    
----
commit eeb0f31da08e6bf609b8ca6cc6509b949dcbac6e
Author: Juliusz Sompolski <ju...@databricks.com>
Date:   2017-12-06T20:25:35Z

    SPARK-22721

----


---
