Github user gf53520 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17124#discussion_r103734243
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala ---
    @@ -282,8 +282,12 @@ private[state] class HDFSBackedStateStoreProvider(
          // target file will break speculation, skipping the rename step is the only choice. It's still
          // semantically correct because Structured Streaming requires rerunning a batch should
          // generate the same output. (SPARK-19677)
    +      // Also, the tmp delta file generated by the first batch after restarting a
    +      // streaming job would otherwise be left behind on HDFS. (SPARK-19779)
           // scalastyle:on
    -      if (!fs.exists(finalDeltaFile) && !fs.rename(tempDeltaFile, finalDeltaFile)) {
    +      if (fs.exists(finalDeltaFile)) {
    +        fs.delete(tempDeltaFile, true)
    +      } else if (!fs.rename(tempDeltaFile, finalDeltaFile)) {
    --- End diff --
    
    When the streaming job restarts, the `finalDeltaFile` is the same as the `finalDeltaFile` generated by the last batch before the restart. So there is no need to rename the temp file to recreate the same file; the leftover `tempDeltaFile` can simply be deleted.
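    The idempotent-commit pattern this change implements can be sketched as follows. This is an illustrative standalone sketch, not Spark's actual code: it uses `java.nio.file` in place of the Hadoop `FileSystem` API, and `commitDelta` is a hypothetical name.

    ```scala
    import java.nio.file.{Files, Path}

    // Promote a temp delta file to its final name. If the final file already
    // exists (e.g. the batch is being rerun after a restart), discard the temp
    // file instead of renaming: rerunning a batch must produce the same output,
    // so the existing file is already correct. (Mirrors SPARK-19677/SPARK-19779.)
    def commitDelta(tempDeltaFile: Path, finalDeltaFile: Path): Unit = {
      if (Files.exists(finalDeltaFile)) {
        // A previous attempt already committed this batch; just clean up the
        // leftover temp file so it is not left behind on the file system.
        Files.deleteIfExists(tempDeltaFile)
      } else {
        // First successful attempt: promote the temp file to the final name.
        Files.move(tempDeltaFile, finalDeltaFile)
      }
    }
    ```

    Either way the temp file is gone afterwards and exactly one committed delta file remains, which is why the second run of a batch after restart no longer leaves a stray `.tmp` file around.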


