This is an automated email from the ASF dual-hosted git repository.

kabhwan pushed a commit to branch branch-3.5
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.5 by this push:
     new 3f4425b4880d [SPARK-47036][SS][3.5] Cleanup RocksDB file tracking for previously uploaded files if files were deleted from local directory
3f4425b4880d is described below

commit 3f4425b4880dfe3e494a894da18b412ecdba4fb1
Author: Bhuwan Sahni <bhuwan.sa...@databricks.com>
AuthorDate: Thu Feb 22 10:59:15 2024 +0900

    [SPARK-47036][SS][3.5] Cleanup RocksDB file tracking for previously uploaded files if files were deleted from local directory
    
    Backports PR https://github.com/apache/spark/pull/45092 to Spark 3.5
    
    ### What changes were proposed in this pull request?
    
    This change cleans up any dangling files tracked as previously uploaded if they were cleaned up from the filesystem. Such cleanup can happen due to a compaction racing in parallel with a commit, where the compaction completes after the commit and an older version is then loaded on the same executor.
    
    ### Why are the changes needed?
    
    The changes are needed to prevent RocksDB versionId mismatch errors (which require users to clean the checkpoint directory and retry the query).
    
    A particular scenario where this can happen is provided below:
    
    1. Version V1 is loaded on executor A. The RocksDB State Store has the files 195.sst, 196.sst, 197.sst and 198.sst.
    2. State changes are made, which result in the creation of a new table file, 200.sst.
    3. The state store is committed as version V2. The SST file 200.sst (as 000200-8c80161a-bc23-4e3b-b175-cffe38e427c7.sst) is uploaded to DFS, and the previous 4 files are reused. A new metadata file is created to track the exact SST files with unique IDs, and uploaded with the RocksDB Manifest as part of V2.zip.
    4. RocksDB compaction is triggered at the same time. The compaction creates a new L1 file (201.sst), and deletes the existing 5 SST files.
    5. Spark Stage is retried.
    6. Version V1 is reloaded on the same executor. The local files are inspected, and 201.sst is deleted. The 4 SST files in version V1 are downloaded again to the local file system.
    7. Any local files which are deleted (as part of version load) are also removed from local → DFS file upload tracking. **However, the files already deleted as a result of compaction are not removed from tracking. This is the bug which resulted in the failure.**
    8. The state store is committed again as version V2. However, the local mapping of SST files to DFS file paths still has 200.sst in its tracking, hence the SST file is not re-uploaded (see the sketch after this list). A new metadata file is created to track the exact SST files with unique IDs, and uploaded with the new RocksDB Manifest as part of V2.zip. (The V2.zip file is overwritten here atomically.)
    9. A new executor tries to load version V2. However, the SST files uploaded in (3) are now incompatible with the Manifest file from (8), resulting in the versionId mismatch failure.
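    
    Concretely, the skip in step 8 happens because the commit path only uploads a local SST file when no DFS file is already tracked for it. A small illustration of that lookup is below (hypothetical helper and variable names, not the actual RocksDBFileManager API):
    
    ```scala
    // If an entry survives in the tracking map, the regenerated local file is treated
    // as already uploaded and the old DFS copy from the earlier commit is reused, even
    // though its unique ID no longer matches the newly written Manifest.
    def dfsFileFor(localName: String, tracked: Map[String, String]): Option[String] =
      tracked.get(localName) // Some(...) => upload is skipped
    
    // Stale entry left over from the first commit in step (3):
    val stale = Map("200.sst" -> "000200-8c80161a-bc23-4e3b-b175-cffe38e427c7.sst")
    assert(dfsFileFor("200.sst", stale).isDefined) // regenerated 200.sst is not re-uploaded
    ```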
    
    ### Does this PR introduce _any_ user-facing change?
    
    No
    
    ### How was this patch tested?
    
    Added unit test cases to cover the scenario where some files were deleted on the file system.
    
    The test case fails on the existing master with the error `Mismatch in unique ID on table file 16`, and succeeds with the changes in this PR.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No
    
    Closes #45206 from sahnib/spark-3.5-rocks-db-fix.
    
    Authored-by: Bhuwan Sahni <bhuwan.sa...@databricks.com>
    Signed-off-by: Jungtaek Lim <kabhwan.opensou...@gmail.com>
---
 .../streaming/state/RocksDBFileManager.scala       | 41 +++++++---
 .../execution/streaming/state/RocksDBSuite.scala   | 91 +++++++++++++++++++++-
 2 files changed, 119 insertions(+), 13 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala
index 300a3b8137b4..3089de7127e7 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala
@@ -496,16 +496,12 @@ class RocksDBFileManager(
       s" DFS for version $version. $filesReused files reused without copying.")
     versionToRocksDBFiles.put(version, immutableFiles)
 
-    // clean up deleted SST files from the localFilesToDfsFiles Map
-    val currentLocalFiles = localFiles.map(_.getName).toSet
-    val mappingsToClean = localFilesToDfsFiles.asScala
-      .keys
-      .filterNot(currentLocalFiles.contains)
-
-    mappingsToClean.foreach { f =>
-      logInfo(s"cleaning $f from the localFilesToDfsFiles map")
-      localFilesToDfsFiles.remove(f)
-    }
+    // Cleanup locally deleted files from the localFilesToDfsFiles map
+    // Locally, SST Files can be deleted due to RocksDB compaction. These files need
+    // to be removed from the localFilesToDfsFiles map to ensure that if an older version
+    // regenerates them and overwrites the version.zip, SST files from the conflicting
+    // version (previously committed) are not reused.
+    removeLocallyDeletedSSTFilesFromDfsMapping(localFiles)
 
     saveCheckpointMetrics = RocksDBFileManagerMetrics(
       bytesCopied = bytesCopied,
@@ -523,8 +519,18 @@ class RocksDBFileManager(
   private def loadImmutableFilesFromDfs(
       immutableFiles: Seq[RocksDBImmutableFile], localDir: File): Unit = {
     val requiredFileNameToFileDetails = immutableFiles.map(f => f.localFileName -> f).toMap
+
+    val localImmutableFiles = listRocksDBFiles(localDir)._1
+
+    // Cleanup locally deleted files from the localFilesToDfsFiles map
+    // Locally, SST Files can be deleted due to RocksDB compaction. These files need
+    // to be removed from the localFilesToDfsFiles map to ensure that if an older version
+    // regenerates them and overwrites the version.zip, SST files from the conflicting
+    // version (previously committed) are not reused.
+    removeLocallyDeletedSSTFilesFromDfsMapping(localImmutableFiles)
+
     // Delete unnecessary local immutable files
-    listRocksDBFiles(localDir)._1
+    localImmutableFiles
       .foreach { existingFile =>
         val requiredFile = requiredFileNameToFileDetails.get(existingFile.getName)
         val prevDfsFile = localFilesToDfsFiles.asScala.get(existingFile.getName)
@@ -582,6 +588,19 @@ class RocksDBFileManager(
       filesReused = filesReused)
   }
 
+  private def removeLocallyDeletedSSTFilesFromDfsMapping(localFiles: Seq[File]): Unit = {
+    // clean up deleted SST files from the localFilesToDfsFiles Map
+    val currentLocalFiles = localFiles.map(_.getName).toSet
+    val mappingsToClean = localFilesToDfsFiles.asScala
+      .keys
+      .filterNot(currentLocalFiles.contains)
+
+    mappingsToClean.foreach { f =>
+      logInfo(s"cleaning $f from the localFilesToDfsFiles map")
+      localFilesToDfsFiles.remove(f)
+    }
+  }
+
   /** Get the SST files required for a version from the version zip file in DFS */
   private def getImmutableFilesFromVersionZip(version: Long): Seq[RocksDBImmutableFile] = {
     Utils.deleteRecursively(localTempDir)
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBSuite.scala
index 04b11dfe43f0..16bfe2359f43 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBSuite.scala
@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution.streaming.state
 import java.io._
 import java.nio.charset.Charset
 
+import scala.collection.mutable
 import scala.language.implicitConversions
 
 import org.apache.commons.io.FileUtils
@@ -1452,6 +1453,88 @@ class RocksDBSuite extends AlsoTestWithChangelogCheckpointingEnabled with Shared
     }
   }
 
+  test("ensure local files deleted on filesystem" +
+    " are cleaned from dfs file mapping") {
+    def getSSTFiles(dir: File): Set[File] = {
+      val sstFiles = new mutable.HashSet[File]()
+      dir.listFiles().foreach { f =>
+        if (f.isDirectory) {
+          sstFiles ++= getSSTFiles(f)
+        } else {
+          if (f.getName.endsWith(".sst")) {
+            sstFiles.add(f)
+          }
+        }
+      }
+      sstFiles.toSet
+    }
+
+    def filterAndDeleteSSTFiles(dir: File, filesToKeep: Set[File]): Unit = {
+      dir.listFiles().foreach { f =>
+        if (f.isDirectory) {
+          filterAndDeleteSSTFiles(f, filesToKeep)
+        } else {
+          if (!filesToKeep.contains(f) && f.getName.endsWith(".sst")) {
+            logInfo(s"deleting ${f.getAbsolutePath} from local directory")
+            f.delete()
+          }
+        }
+      }
+    }
+
+    withTempDir { dir =>
+      withTempDir { localDir =>
+        val sqlConf = new SQLConf()
+        val dbConf = RocksDBConf(StateStoreConf(sqlConf))
+        logInfo(s"config set to ${dbConf.compactOnCommit}")
+        val hadoopConf = new Configuration()
+        val remoteDir = dir.getCanonicalPath
+        withDB(remoteDir = remoteDir,
+          conf = dbConf,
+          hadoopConf = hadoopConf,
+          localDir = localDir) { db =>
+          db.load(0)
+          db.put("a", "1")
+          db.put("b", "1")
+          db.commit()
+          db.doMaintenance()
+
+          // find all SST files written in version 1
+          val sstFiles = getSSTFiles(localDir)
+
+          // make more commits, this would generate more SST files and write
+          // them to remoteDir
+          for (version <- 1 to 10) {
+            db.load(version)
+            db.put("c", "1")
+            db.put("d", "1")
+            db.commit()
+            db.doMaintenance()
+          }
+
+          // clean the SST files committed after version 1 from the local
+          // filesystem. This is similar to what a process like compaction does,
+          // where multiple L0 SST files can be merged into a single L1 file
+          filterAndDeleteSSTFiles(localDir, sstFiles)
+
+          // reload version 2, and overwrite the commit for version 3; this should not
+          // reuse any locally deleted files as they should be removed from the mapping
+          db.load(2)
+          db.put("e", "1")
+          db.put("f", "1")
+          db.commit()
+          db.doMaintenance()
+
+          // clean local state
+          db.load(0)
+
+          // reload version 3, should be successful
+          db.load(3)
+        }
+      }
+    }
+  }
+
   private def sqlConf = SQLConf.get.clone()
 
   private def dbConf = RocksDBConf(StateStoreConf(sqlConf))
@@ -1460,12 +1543,16 @@ class RocksDBSuite extends AlsoTestWithChangelogCheckpointingEnabled with Shared
       remoteDir: String,
       version: Int = 0,
       conf: RocksDBConf = dbConf,
-      hadoopConf: Configuration = new Configuration())(
+      hadoopConf: Configuration = new Configuration(),
+      localDir: File = Utils.createTempDir())(
       func: RocksDB => T): T = {
     var db: RocksDB = null
     try {
       db = new RocksDB(
-        remoteDir, conf = conf, hadoopConf = hadoopConf,
+        remoteDir,
+        conf = conf,
+        localRootDir = localDir,
+        hadoopConf = hadoopConf,
         loggingId = s"[Thread-${Thread.currentThread.getId}]")
       db.load(version)
       func(db)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
