Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15852#discussion_r88357129
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CompactibleFileStreamLog.scala ---
    @@ -63,7 +63,60 @@ abstract class CompactibleFileStreamLog[T <: AnyRef : ClassTag](
     
       protected def isDeletingExpiredLog: Boolean
     
    -  protected def compactInterval: Int
    +  protected def defaultCompactInterval: Int
    +
    +  protected final lazy val compactInterval: Int = {
    +    // SPARK-18187: "compactInterval" can be set by the user via defaultCompactInterval.
    +    // If there are existing log entries, then we should ensure a compatible
    +    // compactInterval is used, irrespective of the defaultCompactInterval. There are
    +    // three cases:
    +    //
    +    // 1. If there is no '.compact' file, we can use the default setting directly.
    +    // 2. If there are two or more '.compact' files, we use the interval between the
    +    // two latest batch ids carrying the '.compact' suffix as compactInterval. It is
    +    // unclear whether this case will ever happen in the current code, since only the
    +    // latest '.compact' file is retained, i.e., the others are garbage collected.
    +    // 3. If there is only one '.compact' file, then we must find a compact interval
    +    // that is compatible with (i.e., a divisor of) the previous compact file, and
    +    // that faithfully tries to represent the revised default compact interval, i.e.,
    +    // is at least as large if possible.
    +    // e.g., if defaultCompactInterval is 5 (and the previous compact interval could
    +    // have been any of 2, 3, 4, 6, 12), then a log could be: 11.compact, 12, 13;
    +    // in which case we will ensure that the new compactInterval = 6 > 5 and
    +    // (11 + 1) % 6 == 0.
    +    val compactibleBatchIds = fileManager.list(metadataPath, batchFilesFilter)
    +      .filter(f => f.getPath.toString.endsWith(CompactibleFileStreamLog.COMPACT_FILE_SUFFIX))
    +      .map(f => pathToBatchId(f.getPath))
    +      .sorted
    +      .reverse
    +
    +    // Case 1
    +    var interval = defaultCompactInterval
    +    if (compactibleBatchIds.length >= 2) {
    +      // Case 2
    +      val latestCompactBatchId = compactibleBatchIds(0)
    +      val previousCompactBatchId = compactibleBatchIds(1)
    +      interval = (latestCompactBatchId - previousCompactBatchId).toInt
    +      logInfo(s"Compact interval case 2 = $interval")
    +    } else if (compactibleBatchIds.length == 1) {
    +      // Case 3
    +      val latestCompactBatchId = compactibleBatchIds(0).toInt
    +      if (latestCompactBatchId + 1 <= defaultCompactInterval) {
    +        interval = latestCompactBatchId + 1
    +      } else if (defaultCompactInterval < (latestCompactBatchId + 1) / 2) {
    +        // Find the first divisor >= default compact interval
    +        def properDivisors(n: Int, min: Int) =
    +          (min to n/2).filter(i => n % i == 0) :+ n
    +
    --- End diff --
    
    I would use the following code to avoid materializing the full number sequence:
    
    ```Scala
    def properDivisors(n: Int, min: Int) =
      (min to n/2).view.filter(n % _ == 0) :+ n

    interval = properDivisors(latestCompactBatchId + 1, defaultCompactInterval).head
    ```
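
    For illustration, a minimal standalone sketch of this lazy search (the literal inputs below are mine, not from the PR): `.view` makes `filter` lazy, so `.head` stops at the first qualifying divisor rather than building the whole candidate sequence. `n` stands in for `latestCompactBatchId + 1` and `min` for `defaultCompactInterval`.

    ```scala
    // Lazy divisor search: candidates in (min to n/2) are filtered on demand,
    // and n itself is appended as the fallback when no proper divisor exists.
    def properDivisors(n: Int, min: Int) =
      (min to n / 2).view.filter(n % _ == 0) :+ n

    // Example from the diff comment: latestCompactBatchId = 11, default = 5,
    // so n = 12 and the first compatible interval is 6 (since (11 + 1) % 6 == 0).
    println(properDivisors(12, 5).head) // prints 6

    // When nothing in (min to n/2) divides n, n itself is returned.
    println(properDivisors(13, 5).head) // prints 13
    ```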

