Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21424#discussion_r190577287
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/BroadcastExchangeExec.scala ---
    @@ -115,9 +116,9 @@ case class BroadcastExchangeExec(
               // SPARK-24294: To bypass scala bug: https://github.com/scala/bug/issues/9554, we throw
               // SparkFatalException, which is a subclass of Exception. ThreadUtils.awaitResult
               // will catch this exception and re-throw the wrapped fatal throwable.
    -          case oe: OutOfMemoryError =>
    +          case oe: SparkOutOfMemoryError =>
                 throw new SparkFatalException(
    -              new OutOfMemoryError(s"Not enough memory to build and broadcast the table to " +
    +              new SparkOutOfMemoryError(s"Not enough memory to build and broadcast the table to " +
    --- End diff --
    
    Since we fully control the creation of `SparkOutOfMemoryError`, can we move the error message to where we throw `SparkOutOfMemoryError` when building the hash relation?
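For context, the pattern under discussion can be sketched as follows. This is a simplified, hypothetical stand-in (`FatalWrapper` and `awaitResult` are illustrative names, not Spark's actual classes): `scala.util.control.NonFatal` treats `OutOfMemoryError` as fatal, so a `Future` body that throws one never completes the future (scala/bug#9554). Wrapping the fatal error in a plain `Exception` subclass lets the future fail normally, and the waiting thread can unwrap and re-throw the original throwable:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-in for SparkFatalException: a plain Exception that
// carries a fatal Throwable past NonFatal(_) checks in Future/Try.
class FatalWrapper(val throwable: Throwable) extends Exception(throwable)

// Hypothetical stand-in for ThreadUtils.awaitResult: unwraps the carrier
// and re-throws the original fatal error on the calling thread.
def awaitResult[T](f: Future[T], timeout: Duration): T =
  try {
    Await.result(f, timeout)
  } catch {
    case fw: FatalWrapper => throw fw.throwable
  }

val failing: Future[Int] = Future {
  try {
    throw new OutOfMemoryError("Not enough memory to build and broadcast the table")
  } catch {
    // OutOfMemoryError is fatal for NonFatal(_), so without this wrap the
    // future would never complete; wrapped, it fails like any exception.
    case oom: OutOfMemoryError => throw new FatalWrapper(oom)
  }
}
```

Calling `awaitResult(failing, 10.seconds)` then surfaces the original `OutOfMemoryError` to the caller instead of hanging.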


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
