Github user bersprockets commented on the issue:

    https://github.com/apache/spark/pull/21899
  
    > Is it possible to include the actual size of the in-memory table so far 
in the msg as well?
    
    Possibly. The state of the relation might be messy when I go to query its 
size.
    
    > Also, does catching the OOM and throwing our own mess with HeapDumpOnOutOfMemoryError?
    
    @squito From my tests, it seems the heap dump is taken before the exception 
is caught.
    
    <pre>
    java.lang.OutOfMemoryError: Java heap space
    Dumping heap to java_pid70644.hprof ...
    Heap dump file created [842225516 bytes in 2.412 secs]
    java.lang.OutOfMemoryError: Not enough memory to build and broadcast the table to all worker nodes. As a workaround, you can either disable broadcast by setting spark.sql.autoBroadcastJoinThreshold to -1 or increase the spark driver memory by setting spark.driver.memory to a higher value
      at org.apache.spark.sql.execution.joins.LongToUnsafeRowMap.grow(HashedRelation.scala:628)
    </pre>
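    A minimal sketch of the catch-and-rethrow pattern under discussion (not the actual Spark code; the class and method names here are illustrative). The JVM writes the `-XX:+HeapDumpOnOutOfMemoryError` dump at the point the original OutOfMemoryError is raised, before any catch block runs, which is consistent with the ordering in the log above.

    ```java
    // Illustrative sketch: wrap the raw OutOfMemoryError in a new one that
    // carries workaround guidance, as the quoted message does. The heap dump
    // (if -XX:+HeapDumpOnOutOfMemoryError is set) is taken when the original
    // error is thrown, so wrapping it does not interfere with the dump.
    public class BroadcastOomSketch {

        // Stand-in for the allocation that actually runs out of heap
        // (LongToUnsafeRowMap.grow in the stack trace above).
        static void growTable() {
            throw new OutOfMemoryError("Java heap space");
        }

        static void buildBroadcastTable() {
            try {
                growTable();
            } catch (OutOfMemoryError oe) {
                // Rethrow with an actionable message instead of the bare
                // "Java heap space" error.
                throw new OutOfMemoryError(
                    "Not enough memory to build and broadcast the table to all "
                  + "worker nodes. As a workaround, you can either disable "
                  + "broadcast by setting spark.sql.autoBroadcastJoinThreshold "
                  + "to -1 or increase the spark driver memory by setting "
                  + "spark.driver.memory to a higher value");
            }
        }
    }
    ```

    One caveat of this pattern: the wrapped error drops the original stack frames unless the cause is attached via `initCause`, which is a trade-off the real change would have to weigh.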

