Github user jinxing64 commented on the issue:

    https://github.com/apache/spark/pull/21424
  
    @cloud-fan 
    > I also found that we may throw OOM
    
    My previous understanding is that Spark throws `SparkOutOfMemoryError` when 
it expects there is no memory left -- that expectation comes from Spark's own 
scope of memory management. So it's safe to catch `SparkOutOfMemoryError` and 
mark the corresponding task as failed rather than failing the executor.
    
    If an `OutOfMemoryError` is thrown by the JVM itself, e.g. an OOM from `new 
Array[bigSize]`, is it OK to catch it and keep the executor running as if 
nothing happened? If so, does that mean an executor should never exit on an 
`OutOfMemoryError`?
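
    To make the distinction concrete, here is a rough, self-contained sketch 
(not the actual executor code in this PR). The local `SparkOutOfMemoryError` 
class is only a stand-in that mirrors the fact that 
`org.apache.spark.memory.SparkOutOfMemoryError` extends 
`java.lang.OutOfMemoryError`:

    ```scala
    // Stand-in for org.apache.spark.memory.SparkOutOfMemoryError, which is thrown
    // by Spark's own memory manager and extends java.lang.OutOfMemoryError.
    class SparkOutOfMemoryError(msg: String) extends OutOfMemoryError(msg)

    object OomHandlingSketch {
      def runTask(body: => Unit): Unit = {
        try {
          body
        } catch {
          // Spark-managed allocation failed: Spark's own accounting is still
          // consistent, so it is safe to fail just this task and keep the
          // executor alive.
          case e: SparkOutOfMemoryError =>
            println(s"Task failed, executor keeps running: ${e.getMessage}")
          // A plain JVM OutOfMemoryError (e.g. from `new Array[Byte](bigSize)`)
          // means the heap state is unknown; here we rethrow and let the executor
          // exit, which is exactly the behavior the question above is about.
          case e: OutOfMemoryError =>
            println(s"JVM-level OOM, letting the executor die: ${e.getMessage}")
            throw e
        }
      }
    }
    ```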

