Github user patrickbrownsync commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22883#discussion_r229539016
  
    --- Diff: core/src/main/scala/org/apache/spark/status/AppStatusListener.scala ---
    @@ -1105,6 +1095,15 @@ private[spark] class AppStatusListener(
     
           cleanupCachedQuantiles(key)
         }
    +
    +    // Delete tasks for all stages in one pass, as deleting them for each stage individually is slow
    --- End diff ---
    
    Sure
    
    Take a look at the implementation of `InMemoryView` in spark/common/kvstore/src/main/java/org/apache/spark/util/kvstore/InMemoryStore.java, around line 179, and specifically its `iterator()` method on line 193. Here is an excerpt:
    
    ```
    Collections.sort(sorted, (e1, e2) -> modifier * compare(e1, e2, getter));
    Stream<T> stream = sorted.stream();
    
    if (first != null) {
        stream = stream.filter(e -> modifier * compare(e, getter, first) >= 0);
    }
    
    if (last != null) {
        stream = stream.filter(e -> modifier * compare(e, getter, last) <= 0);
    }
    ```
    
    and the original in-loop deletion code:
    
    ```
    val tasks = kvstore.view(classOf[TaskDataWrapper])
      .index("stage")
      .first(key)
      .last(key)
      .asScala

    tasks.foreach { t =>
      kvstore.delete(t.getClass(), t.taskId)
    }
    ```
    
    So you can see that each iteration of the loop sorts the entire collection of TaskDataWrapper objects currently in the store, and then filters every element against the key set (the stage). With s stages to clean up and n tasks in the store, that is s full sorts and scans of the task collection, i.e. roughly O(n^2) work when both grow together, which is exactly what happens in my production application and in the repro code.
    
    If we instead delete the tasks for all stages in one pass, the task collection is sorted and iterated only once.
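
    For concreteness, here is a rough sketch of that one-pass approach (not the exact patch; the `stages` collection and the field names are assumptions based on the surrounding cleanup code):

    ```
    // Collect the stage keys being cleaned up, then walk the TaskDataWrapper view once,
    // deleting any task whose stage key is in that set.
    import scala.collection.JavaConverters._

    val stageKeys = stages.map { s => (s.info.stageId, s.info.attemptId) }.toSet

    kvstore.view(classOf[TaskDataWrapper]).asScala.foreach { t =>
      if (stageKeys.contains((t.stageId, t.stageAttemptId))) {
        kvstore.delete(t.getClass(), t.taskId)
      }
    }
    ```

    This way the task collection is sorted and scanned once for the whole batch of stages rather than once per stage.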
    
    This same pattern shows up fairly frequently wherever the KVStoreView interface and the InMemoryView implementation are used. Since I am new to contributing to Spark I did not undertake a massive refactor, but I would suggest that this interface and implementation be looked at and redesigned with efficiency in mind. The current implementation favors flexibility in how the dataset is sorted and filtered; enforcing a single sort order via something like a SortedSet would make it clearer when an operation is an efficient search within the collection and when it is an inefficient access pattern.
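
    To illustrate the kind of access pattern I mean (a toy sketch, not Spark code): with a structure that maintains a single sort order, a per-stage lookup is a cheap range query instead of a full sort-and-filter of every stored element.

    ```
    import scala.collection.immutable.TreeMap

    // Toy example: task ids keyed by (stageId, attemptId) in a structure that keeps
    // one sort order. Fetching one stage's tasks is a range query over the tree,
    // not a sort of the whole collection followed by a linear filter.
    val tasksByStage: TreeMap[(Int, Int), Seq[Long]] = TreeMap(
      (1, 0) -> Seq(100L, 101L),
      (2, 0) -> Seq(200L, 201L),
      (3, 0) -> Seq(300L)
    )

    // All task ids for stage 2, attempt 0.
    val stage2Tasks = tasksByStage.range((2, 0), (2, 1)).values.flatten
    ```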
    
    I hope that explains the reasoning. If you have any more questions, let me know.

