Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/21424
  
    That's a good point!
    
    According to the documentation of `SparkOutOfMemoryError`, Spark should not kill 
the executor if a `SparkOutOfMemoryError` is thrown. Broadcast is special because 
it runs on the driver side, so there `SparkOutOfMemoryError` is no different from 
`OutOfMemoryError`: we need to kill the driver anyway.
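    
    To make that distinction concrete, here is a minimal Scala sketch (not Spark's 
actual uncaught-exception handling; the object and method names are made up) of why 
the two errors can be treated differently on the executor but not on the driver:

```scala
import org.apache.spark.memory.SparkOutOfMemoryError

// Hypothetical handler, only to illustrate the comment above.
object FatalErrorSketch {
  def onFatalError(t: Throwable, onDriver: Boolean): Unit = t match {
    case _: SparkOutOfMemoryError if !onDriver =>
      // Executor side: the failure came from Spark's own memory manager,
      // so the task can be failed without killing the whole executor JVM.
      println(s"fail the current task: ${t.getMessage}")
    case oom: OutOfMemoryError =>
      // Driver side (e.g. while building a broadcast) or a plain JVM OOM:
      // the process state is suspect, so it has to be killed either way.
      println(s"shut down the process: ${oom.getMessage}")
    case other =>
      throw other
  }
}
```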
    
    That said, I think it's fine to catch `OutOfMemoryError` and enhance the 
error message here, to give users some details about why the job failed.
    
    One thing we can do for future safety: do not change the exception type 
when enhancing the error message.
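    
    For example, a minimal sketch of "enhance the message without changing the 
type" around the broadcast build could look like the following (the wrapper name 
and message text are illustrative, not the code in this PR):

```scala
// Hypothetical wrapper: re-throw the same exception type with a more
// descriptive message, keeping the original error as the cause.
def withBroadcastOomContext[T](block: => T): T =
  try block
  catch {
    case oom: OutOfMemoryError =>
      val enhanced = new OutOfMemoryError(
        "Not enough memory to build and broadcast the relation: " + oom.getMessage)
      enhanced.initCause(oom) // preserve the original stack trace
      throw enhanced
  }
```

    Because the re-thrown error is still an `OutOfMemoryError`, any existing code 
that matches on the exception type behaves exactly as before.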


