HeartSaVioR commented on a change in pull request #22952: [SPARK-20568][SS] Provide option to clean up completed files in streaming query
URL: https://github.com/apache/spark/pull/22952#discussion_r347673657
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
 ##########
 @@ -330,4 +341,96 @@ object FileStreamSource {
 
     def size: Int = map.size()
   }
+
+  private[sql] trait FileStreamSourceCleaner {
+    def clean(entry: FileEntry): Unit
+  }
+
+  private[sql] object FileStreamSourceCleaner {
+    def apply(
+        fileSystem: FileSystem,
+        sourcePath: Path,
+        option: FileStreamOptions,
+        hadoopConf: Configuration): Option[FileStreamSourceCleaner] = option.cleanSource match {
+      case CleanSourceMode.ARCHIVE =>
+        require(option.sourceArchiveDir.isDefined)
+        val path = new Path(option.sourceArchiveDir.get)
+        val archiveFs = path.getFileSystem(hadoopConf)
+        val qualifiedArchivePath = archiveFs.makeQualified(path)
+        Some(new SourceFileArchiver(fileSystem, sourcePath, archiveFs, qualifiedArchivePath))
+
+      case CleanSourceMode.DELETE =>
+        Some(new SourceFileRemover(fileSystem))
+
+      case _ => None
+    }
+  }
+
+  private[sql] class SourceFileArchiver(
+      fileSystem: FileSystem,
+      sourcePath: Path,
+      baseArchiveFileSystem: FileSystem,
+      baseArchivePath: Path) extends FileStreamSourceCleaner with Logging {
+    assertParameters()
+
+    private def assertParameters(): Unit = {
+      require(fileSystem.getUri == baseArchiveFileSystem.getUri, "Base archive path is located " +
+        s"on a different file system than the source files. source path: $sourcePath" +
+        s" / base archive path: $baseArchivePath")
+
+      /**
+       * FileStreamSource reads files for which one of the below conditions is met:
+       * 1) the file itself matches the source path
+       * 2) the parent directory matches the source path
 
 Review comment:
   @zsxwing 
   Thanks for taking the time to revisit this! The condition is based on the test suite for FileStreamSource, but you're right that partitioned paths are missed. Nice catch. I need to either update the condition or remove the documented condition altogether.
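
   To make the partitioned case concrete, here is a minimal illustration; the paths (`/data/logs` and the `date=...` partition directory) are made up for the example, not taken from the patch:

   ```scala
   import org.apache.hadoop.fs.Path

   val sourcePath = new Path("/data/logs")
   // With partition discovery, the source can return files nested below partition directories.
   val partitionedFile = new Path("/data/logs/date=2019-11-18/part-00000.json")

   // Condition 1) from the doc comment: the file itself matches the source path.
   val fileMatches = partitionedFile == sourcePath             // false
   // Condition 2): the parent directory matches the source path.
   val parentMatches = partitionedFile.getParent == sourcePath // false; parent is ".../date=2019-11-18"

   // Neither documented condition holds, yet the file is still read, so the depth
   // assumption built on those conditions doesn't cover partitioned layouts.
   ```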
   
   As for `recursiveFileLookup`, it landed after this patch was written and I missed it. The condition was written early this year, whereas `recursiveFileLookup` appears to have been introduced around the middle of the year.
   
   With these two cases, FileStreamSource can read any file under the source path, which invalidates the depth check. There are three options to deal with this:
   
   1) Skip the pattern check and just try to rename, logging a warning if the rename fails.
   2) Disallow any path from being used as the base archive path if it matches the source path (glob). After that we no longer need the per-file pattern check. (A rough sketch of this is below the list.)
   3) Check the pattern before renaming, though this requires checking the pattern per file. We could optimize it a bit by grouping files per directory and checking the pattern against the directory instead of individual files.
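
   As a rough, untested sketch of option 2) (helper names like `buildSourceGlobFilters` are made up for illustration, and how strict the match should be is still open), the check could look like this:

   ```scala
   import org.apache.hadoop.fs.{GlobFilter, Path}

   // Build one GlobFilter per component of the source pattern, from root to leaf,
   // so "/data/*/logs" becomes filters for "data", "*", "logs".
   def buildSourceGlobFilters(sourcePath: Path): Seq[GlobFilter] = {
     var filters: List[GlobFilter] = Nil
     var current = sourcePath
     while (!current.isRoot) {
       filters = new GlobFilter(current.getName) :: filters
       current = current.getParent
     }
     filters
   }

   // Reject a base archive path that could fall under the source pattern:
   // match its leading components against the source glob, segment by segment.
   def archivePathMatchesSourcePattern(baseArchivePath: Path, sourcePath: Path): Boolean = {
     val filters = buildSourceGlobFilters(sourcePath)
     val segments = baseArchivePath.toUri.getPath.split(Path.SEPARATOR).filter(_.nonEmpty)
     segments.length >= filters.length &&
       filters.zip(segments).forall { case (filter, segment) => filter.accept(new Path(segment)) }
   }

   // e.g. with source path "/data/*/logs", "/data/2019/logs/archived" would be rejected,
   // while "/archive/logs" would be allowed.
   ```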
   
   Which one (or which combination) would be the preferred approach?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
