Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1165#discussion_r15383132
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
    @@ -20,25 +20,43 @@ package org.apache.spark.storage
     import java.nio.ByteBuffer
     import java.util.LinkedHashMap
     
    +import scala.collection.mutable
     import scala.collection.mutable.ArrayBuffer
     
     import org.apache.spark.util.{SizeEstimator, Utils}
    +import org.apache.spark.util.collection.SizeTrackingVector
     
     private case class MemoryEntry(value: Any, size: Long, deserialized: Boolean)
     
     /**
    - * Stores blocks in memory, either as ArrayBuffers of deserialized Java objects or as
    + * Stores blocks in memory, either as Arrays of deserialized Java objects or as
      * serialized ByteBuffers.
      */
    -private class MemoryStore(blockManager: BlockManager, maxMemory: Long)
    +private[spark] class MemoryStore(blockManager: BlockManager, maxMemory: Long)
       extends BlockStore(blockManager) {
     
    +  private val conf = blockManager.conf
       private val entries = new LinkedHashMap[BlockId, MemoryEntry](32, 0.75f, true)
    +
       @volatile private var currentMemory = 0L
    +
       // Object used to ensure that only one thread is putting blocks and if necessary, dropping
       // blocks from the memory store.
       private val putLock = new Object()
    --- End diff --
    
    Maybe rename this to `accountingLock`, since we also use it to guard access to `unrollMemoryMap`.
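
    To illustrate the suggestion, here is a minimal, hypothetical sketch (not the actual MemoryStore code) of how a single `accountingLock` can guard both the store's memory counter and the per-thread `unrollMemoryMap` bookkeeping; the class and method names are illustrative only:

    ```scala
    import scala.collection.mutable

    // Hypothetical sketch: one lock ("accountingLock", as proposed in the
    // comment above) guards both currentMemory and unrollMemoryMap, so the
    // two pieces of accounting state can never be observed out of sync.
    class AccountingSketch(maxMemory: Long) {
      private val accountingLock = new Object()
      @volatile private var currentMemory = 0L
      // Maps thread id -> memory reserved by that thread for unrolling blocks
      private val unrollMemoryMap = mutable.HashMap[Long, Long]()

      def reserveUnrollMemory(threadId: Long, amount: Long): Boolean =
        accountingLock.synchronized {
          if (currentMemory + amount <= maxMemory) {
            unrollMemoryMap(threadId) = unrollMemoryMap.getOrElse(threadId, 0L) + amount
            currentMemory += amount
            true
          } else {
            false
          }
        }

      def releaseUnrollMemory(threadId: Long): Unit =
        accountingLock.synchronized {
          val amount = unrollMemoryMap.remove(threadId).getOrElse(0L)
          currentMemory -= amount
        }

      def memoryUsed: Long = currentMemory
    }
    ```

    The point of the rename is that the lock's scope is memory *accounting* in general, not just the put path, so the name should reflect both uses.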

