Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17591#discussion_r110692833
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileStatusCache.scala ---
    @@ -94,27 +94,46 @@ private class SharedInMemoryCache(maxSizeInBytes: Long) extends Logging {
       // Opaque object that uniquely identifies a shared cache user
       private type ClientId = Object
     
    +
       private val warnedAboutEviction = new AtomicBoolean(false)
     
      // we use a composite cache key in order to distinguish entries inserted by different clients
     -  private val cache: Cache[(ClientId, Path), Array[FileStatus]] = CacheBuilder.newBuilder()
    -    .weigher(new Weigher[(ClientId, Path), Array[FileStatus]] {
     -      override def weigh(key: (ClientId, Path), value: Array[FileStatus]): Int = {
    -        (SizeEstimator.estimate(key) + SizeEstimator.estimate(value)).toInt
    -      }})
     -    .removalListener(new RemovalListener[(ClientId, Path), Array[FileStatus]]() {
     -      override def onRemoval(removed: RemovalNotification[(ClientId, Path), Array[FileStatus]])
    +  private val cache: Cache[(ClientId, Path), Array[FileStatus]] = {
     +    /* [[Weigher.weigh]] returns an Int, so on its own the cache could
     +     * only weigh objects totalling < 2GB. Instead, each weight is
     +     * divided by this factor (which is smaller than the size of one
     +     * [[FileStatus]]), so the cache supports objects up to 64GB in size.
     +     */
    +    val weightScale = 32
    +    CacheBuilder.newBuilder()
    +      .weigher(new Weigher[(ClientId, Path), Array[FileStatus]] {
     +        override def weigh(key: (ClientId, Path), value: Array[FileStatus]): Int = {
     +          val estimate = (SizeEstimator.estimate(key) + SizeEstimator.estimate(value)) / weightScale
    +          if (estimate > Int.MaxValue) {
     +            logWarning(s"Cached table partition metadata size is too big. Approximating to " +
    +              s"${Int.MaxValue.toLong * weightScale}.")
    +            Int.MaxValue
    +          } else {
    +            estimate.toInt
    +          }
    +        }
    +      })
     +      .removalListener(new RemovalListener[(ClientId, Path), Array[FileStatus]]() {
    --- End diff --
    
    This is kinda hard to read. Can we just initialize the weigher and the removal listener in separate variables?
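    A rough sketch of that refactor, illustrative only: it assumes the surrounding `SharedInMemoryCache` members (`ClientId`, `logWarning`, `maxSizeInBytes`) and the Guava cache API already used in this file; the `maximumWeight` scaling is an assumption about how the rest of the builder chain would adapt.

```scala
import com.google.common.cache.{Cache, CacheBuilder, RemovalListener, RemovalNotification, Weigher}
import org.apache.hadoop.fs.{FileStatus, Path}
import org.apache.spark.util.SizeEstimator

// Dividing each estimate by weightScale keeps weights within Int range:
// Int.MaxValue * 32 bytes is roughly 64GB of cacheable metadata.
private val weightScale = 32

// Weigher pulled out into its own val, per the review suggestion.
private val weigher = new Weigher[(ClientId, Path), Array[FileStatus]] {
  override def weigh(key: (ClientId, Path), value: Array[FileStatus]): Int = {
    val estimate = (SizeEstimator.estimate(key) + SizeEstimator.estimate(value)) / weightScale
    if (estimate > Int.MaxValue) {
      logWarning(s"Cached table partition metadata size is too big. Approximating to " +
        s"${Int.MaxValue.toLong * weightScale}.")
      Int.MaxValue
    } else {
      estimate.toInt
    }
  }
}

// Removal listener likewise extracted; body elided here.
private val removalListener = new RemovalListener[(ClientId, Path), Array[FileStatus]] {
  override def onRemoval(removed: RemovalNotification[(ClientId, Path), Array[FileStatus]]): Unit = {
    // ... existing eviction-warning logic ...
  }
}

// The builder chain then reads linearly. Since weights are scaled down by
// weightScale, the maximum weight presumably has to be scaled the same way.
private val cache: Cache[(ClientId, Path), Array[FileStatus]] = CacheBuilder.newBuilder()
  .weigher(weigher)
  .removalListener(removalListener)
  .maximumWeight(maxSizeInBytes / weightScale)
  .build[(ClientId, Path), Array[FileStatus]]()
```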


