This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 2c2d6534beb [SPARK-44582][SQL] Skip iterator on SMJ if it was cleaned up
2c2d6534beb is described below

commit 2c2d6534bebed3c7bfa0842b84aa27674b721410
Author: Kun Wan <wan...@apache.org>
AuthorDate: Fri Aug 4 14:24:02 2023 +0900

    [SPARK-44582][SQL] Skip iterator on SMJ if it was cleaned up
    
    ### What changes were proposed in this pull request?
    
    Bugfix for a sort merge join (SMJ) issue that may crash the JVM.
    
    **When will the JVM crash?**
    
    ```
    Query pattern:
    
    TableScan     TableScan
       |              |
     Exchange      Exchange
       |              |
      Sort 1         Sort 2
       |              |
     Window 1      Window 2
        \          /
          \      /
            SMJ
             |
             |
      WriteFileCommand
    ```
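
    For illustration, a minimal query shape that can produce this plan might look like the sketch below (the table names `t1`/`t2`, columns `k`/`v`, and output path are made up; the right side must turn out empty at runtime for the early cleanup to kick in):

    ```
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder().getOrCreate()
    import spark.implicits._

    // Disable broadcast joins so the planner picks a sort merge join.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1L)

    // Each join child becomes Exchange -> Sort -> Window, because the
    // window is partitioned by the join key.
    val w = Window.partitionBy($"k").orderBy($"v")
    val left  = spark.table("t1").withColumn("rn1", row_number().over(w))
    val right = spark.table("t2").withColumn("rn2", row_number().over(w))

    // Inner SMJ feeding a file write: FileFormatWriter first calls
    // hasNext() to probe for emptiness, then again to write the rows.
    left.join(right, Seq("k"))
      .select($"k", $"rn1", $"rn2")
      .write.mode("overwrite").parquet("/tmp/smj_out")
    ```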
    
    1. WriteFileCommand calls hasNext() to check whether the input is empty.
    2. SMJ calls findNextJoinRows() to find all matched rows.
    2.1 SMJ tries to get the first row from the left child.
    2.1.1 Sort 1 sorts all of its input rows in off-heap memory.
    2.1.2 Window 1 reads one group of data plus the first row of the next group (call it X), and returns the first row of the first group.
    2.2 SMJ tries to get the first row from the right child.
    2.2.1 Sort 2 and Window 2 are empty, so they do nothing.
    2.3 The inner SMJ finishes, since there can definitely be no join rows, and calls earlyCleanupResources() to free the off-heap memory.
    3. WriteFileCommand calls hasNext() again to write the input data to the files.
    4. SMJ calls findNextJoinRows() again to find all matched rows.
    4.1 SMJ tries to get the first row from the left child.
    4.2 Window 1 tries to add row X into its group buffer, which accesses unallocated memory; the JVM may or may not crash.
    
    In this PR, if the SMJ has already been cleaned up, iterating on it is skipped, as sketched below.
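
    A rough, self-contained sketch of the guard in plain Scala (the stub hooks `findNextJoinRows`/`shouldStop`/`earlyCleanupResources` and the `rowsLeft` counter are stand-ins for this sketch only; the real change threads a boolean mutable state through the generated code, as the diff below shows):

    ```
    object SmjCleanupSketch {
      // Stand-ins for the real codegen hooks (assumptions for this sketch).
      private var rowsLeft = 3
      private def findNextJoinRows(): Boolean = { rowsLeft -= 1; rowsLeft >= 0 }
      private def shouldStop(): Boolean = false
      private def earlyCleanupResources(): Unit = println("freed off-heap buffers")

      private var cleanedFlag = false

      def doJoin(): Unit = {
        if (cleanedFlag) return        // already cleaned up: skip iterating
        while (findNextJoinRows()) {
          // ... emit the matched rows ...
          if (shouldStop()) return
        }
        cleanedFlag = true             // mark before freeing
        earlyCleanupResources()        // a later doJoin() is now a safe no-op
      }

      def main(args: Array[String]): Unit = {
        doJoin() // consumes the input, then cleans up
        doJoin() // previously re-drove the children; now returns immediately
      }
    }
    ```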
    
    ### Why are the changes needed?
    
    Bugfix for SMJ.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No
    
    ### How was this patch tested?
    
    Tested in our production environment.
    With the Unsafe API, reading unallocated memory may return the old value, return an unexpected value, or crash the JVM, so I don't think a unit test for this would be stable.
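
    For illustration, a minimal non-Spark sketch of that undefined behavior, using sun.misc.Unsafe obtained reflectively (a common workaround, since Unsafe.getUnsafe() rejects application classes; newer JDKs may need extra --add-exports flags):

    ```
    import sun.misc.Unsafe

    object FreedMemoryRead {
      def main(args: Array[String]): Unit = {
        val field = classOf[Unsafe].getDeclaredField("theUnsafe")
        field.setAccessible(true)
        val unsafe = field.get(null).asInstanceOf[Unsafe]

        val addr = unsafe.allocateMemory(8)
        unsafe.putLong(addr, 42L)
        unsafe.freeMemory(addr)

        // Undefined behavior: may print the stale 42, print garbage, or
        // crash the JVM with SIGSEGV -- no assertion on the result can be
        // made deterministic, which is why a stable unit test is hard.
        println(unsafe.getLong(addr))
      }
    }
    ```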
    
    The JVM crash stack:
    ```
    Stack: [0x00007f8a03800000,0x00007f8a04000000],  sp=0x00007f8a03ffd620,  free space=8181k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    v  ~StubRoutines::jint_disjoint_arraycopy
    J 36127 C2 org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.add(Lorg/apache/spark/sql/catalyst/expressions/UnsafeRow;)V (188 bytes)  0x00007f966187ac9f [0x00007f966187a820+0x47f]
    J 36146 C2 org.apache.spark.sql.execution.window.WindowExec$$anon$1.next()Ljava/lang/Object; (5 bytes)  0x00007f9661a8eefc [0x00007f9661a8dd60+0x119c]
    J 36153 C2 org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage4.processNext()V (381 bytes)  0x00007f966180185c [0x00007f9661801760+0xfc]
    J 36246 C2 org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.smj_findNextJoinRows_0$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$GeneratedIteratorForCodegenStage7;Lscala/collection/Iterator;Lscala/collection/Iterator;)Z (392 bytes)  0x00007f96607388f0 [0x00007f96607381e0+0x710]
    J 36249 C1 org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.processNext()V (109 bytes)  0x00007f965fa8ee64 [0x00007f965fa8e560+0x904]
    J 35645 C2 org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$2.hasNext()Z (31 bytes)  0x00007f965fbc58e4 [0x00007f965fbc58a0+0x44]
    j  org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(Lscala/collection/Iterator;Lorg/apache/spark/sql/execution/datasources/FileFormatDataWriter;)Lorg/apache/spark/sql/execution/datasources/WriteTaskResult;+1
    j  org.apache.spark.sql.execution.datasources.FileFormatWriter$$$Lambda$4398.apply()Ljava/lang/Object;+8
    j  org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Lscala/Function0;Lscala/Function0;Lscala/Function0;)Ljava/lang/Object;+4
    j  org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(Lorg/apache/spark/sql/execution/datasources/WriteJobDescription;JIIILorg/apache/spark/internal/io/FileCommitProtocol;ILscala/collection/Iterator;)Lorg/apache/spark/sql/execution/datasources/WriteTaskResult;+258
    J 30523 C1 org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$23(Lorg/apache/spark/sql/execution/datasources/WriteJobDescription;JLorg/apache/spark/internal/io/FileCommitProtocol;Lscala/runtime/IntRef;Lscala/collection/immutable/Map;Lorg/apache/spark/TaskContext;Lscala/collection/Iterator;)Lorg/apache/spark/sql/execution/datasources/WriteTaskResult; (61 bytes)  0x00007f966066b004 [0x00007f966066a7a0+0x864]
    J 30529 C1 org.apache.spark.sql.execution.datasources.FileFormatWriter$$$Lambda$3569.apply(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object; (32 bytes)  0x00007f965f79bd1c [0x00007f965f79baa0+0x27c]
    J 29322 C1 org.apache.spark.scheduler.ResultTask.runTask(Lorg/apache/spark/TaskContext;)Ljava/lang/Object; (210 bytes)  0x00007f966094bd0c [0x00007f96609497a0+0x256c]
    J 24071 C1 org.apache.spark.scheduler.Task.run(JILorg/apache/spark/metrics/MetricsSystem;Lscala/collection/immutable/Map;)Ljava/lang/Object; (536 bytes)  0x00007f965fca493c [0x00007f965fca1000+0x393c]
    J 23198 C1 org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Lorg/apache/spark/executor/Executor$TaskRunner;Lscala/runtime/BooleanRef;)Ljava/lang/Object; (43 bytes)  0x00007f965f86373c [0x00007f965f8634e0+0x25c]
    J 23196 C1 org.apache.spark.executor.Executor$TaskRunner$$Lambda$984.apply()Ljava/lang/Object; (12 bytes)  0x00007f965f860e44 [0x00007f965f860dc0+0x84]
    ```
    
    Closes #42206 from wankunde/smj_cleanup.
    
    Authored-by: Kun Wan <wan...@apache.org>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 .../sql/execution/joins/SortMergeJoinExec.scala    | 24 ++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
index 0241f683d69..8d49b1558d6 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
@@ -556,14 +556,18 @@ case class SortMergeJoinExec(
 
     val doJoin = joinType match {
       case _: InnerLike =>
+        val cleanedFlag =
+          ctx.addMutableState(CodeGenerator.JAVA_BOOLEAN, "cleanedFlag", v => s"$v = false;")
         codegenInner(findNextJoinRows, beforeLoop, iterator, bufferedRow, condCheck, outputRow,
-          eagerCleanup)
+          eagerCleanup, cleanedFlag)
       case LeftOuter | RightOuter =>
         codegenOuter(streamedInput, findNextJoinRows, beforeLoop, iterator, bufferedRow, condCheck,
           ctx.freshName("hasOutputRow"), outputRow, eagerCleanup)
       case LeftSemi =>
+        val cleanedFlag =
+          ctx.addMutableState(CodeGenerator.JAVA_BOOLEAN, "cleanedFlag", v => s"$v = false;")
         codegenSemi(findNextJoinRows, beforeLoop, iterator, bufferedRow, condCheck,
-          ctx.freshName("hasOutputRow"), outputRow, eagerCleanup)
+          ctx.freshName("hasOutputRow"), outputRow, eagerCleanup, cleanedFlag)
       case LeftAnti =>
         codegenAnti(streamedInput, findNextJoinRows, beforeLoop, iterator, bufferedRow, condCheck,
           loadStreamed, ctx.freshName("hasMatchedRow"), outputRow, eagerCleanup)
@@ -606,8 +610,13 @@ case class SortMergeJoinExec(
       bufferedRow: String,
       conditionCheck: String,
       outputRow: String,
-      eagerCleanup: String): String = {
+      eagerCleanup: String,
+      cleanedFlag: String): String = {
     s"""
+       |if($cleanedFlag) {
+       |  return;
+       |}
+       |
        |while ($findNextJoinRows) {
        |  $beforeLoop
        |  while ($matchIterator.hasNext()) {
@@ -617,6 +626,7 @@ case class SortMergeJoinExec(
        |  }
        |  if (shouldStop()) return;
        |}
+       |$cleanedFlag = true;
        |$eagerCleanup
      """.stripMargin
   }
@@ -665,8 +675,13 @@ case class SortMergeJoinExec(
       conditionCheck: String,
       hasOutputRow: String,
       outputRow: String,
-      eagerCleanup: String): String = {
+      eagerCleanup: String,
+      cleanedFlag: String): String = {
     s"""
+       |if($cleanedFlag) {
+       |  return;
+       |}
+       |
        |while ($findNextJoinRows) {
        |  $beforeLoop
        |  boolean $hasOutputRow = false;
@@ -679,6 +694,7 @@ case class SortMergeJoinExec(
        |  }
        |  if (shouldStop()) return;
        |}
+       |$cleanedFlag = true;
        |$eagerCleanup
      """.stripMargin
   }

