[ 
https://issues.apache.org/jira/browse/SPARK-36242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-36242:
----------------------------------
    Fix Version/s: 3.1.3

> Ensure spill file closed before set success to true in 
> ExternalSorter.spillMemoryIteratorToDisk method
> ------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-36242
>                 URL: https://issues.apache.org/jira/browse/SPARK-36242
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.3.0
>            Reporter: Yang Jie
>            Assignee: Yang Jie
>            Priority: Minor
>             Fix For: 3.2.0, 3.1.3, 3.3.0
>
>
> The logic of ExternalSorter.spillMemoryIteratorToDisk and 
> ExternalAppendOnlyMap.spillMemoryIteratorToDisk is similar, but the two 
> methods differ in where they set `success = true`.
>  
> The code of ExternalSorter.spillMemoryIteratorToDisk is as follows:
>  
> {code:java}
>       if (objectsWritten > 0) {
>         flush()
>       } else {
>         writer.revertPartialWritesAndClose()
>       }
>       success = true
>     } finally {
>       if (success) {
>         writer.close()
>       } else {
>         ...
>       }
>     }{code}
> The code of ExternalAppendOnlyMap.spillMemoryIteratorToDisk is as follows:
> {code:java}
>   if (objectsWritten > 0) {
>     flush()
>     writer.close()
>   } else {
>     writer.revertPartialWritesAndClose()
>   }
>   success = true
> } finally {
>   if (!success) {
>     ...
>   }
> }{code}
> It seems that the handling in the `ExternalAppendOnlyMap.spillMemoryIteratorToDisk` 
> method is more reasonable: we should make sure that `success = true` is set only 
> after the spill file has been closed.
>  
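The difference can be illustrated with a minimal standalone sketch. This is not Spark's actual code; `DiskWriter` is a hypothetical stand-in for `DiskBlockObjectWriter`. With the ExternalAppendOnlyMap-style ordering, `success` is set only after the spill file is closed, so `success == true` always implies the file was fully flushed and closed, and a failure inside `close()` still routes into the cleanup branch.

```java
// Hypothetical stand-in for Spark's DiskBlockObjectWriter (an assumption
// for illustration, not the real API).
class DiskWriter {
    boolean closed = false;
    void flush() { /* flush buffered records to disk */ }
    void close() { closed = true; }
    void revertPartialWritesAndClose() { closed = true; }
}

public class SpillOrdering {
    // ExternalAppendOnlyMap-style ordering: close the writer first, then
    // set success. If flush() or close() throws, success stays false and
    // the finally block knows cleanup is needed.
    static boolean spill(DiskWriter writer, int objectsWritten) {
        boolean success = false;
        try {
            if (objectsWritten > 0) {
                writer.flush();
                writer.close();
            } else {
                writer.revertPartialWritesAndClose();
            }
            success = true;
        } finally {
            if (!success) {
                // Cleanup path: the spill file is incomplete; the real
                // code reverts partial writes and deletes the file here.
            }
        }
        return success;
    }

    public static void main(String[] args) {
        DiskWriter w = new DiskWriter();
        System.out.println(spill(w, 3) && w.closed); // prints: true
    }
}
```

By contrast, the pre-fix ExternalSorter ordering sets `success = true` before the `finally` block closes the writer, so the flag is true while the spill file is still open.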



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
