GitHub user eyalfa commented on the issue:

    https://github.com/apache/spark/pull/18855
  
    @cloud-fan, any idea how much memory is allocated for running sbt? 
Currently, one of my newly introduced tests fails with an OOM during Kryo 
serialization...
    
    It's actually a bit weird, as this is the test that stores deserialized 
objects in memory. Note that the test doesn't really generate a true 2GB 
dataset, since the iterator returns the same array on every `next()` call, so 
the underlying buffer (actually buffers, as it uses a chunked byte buffer) 
used by Kryo is the first materialization of such a large piece of data.
    The sbt execution of this suite seems to stop on the first error, so I 
can't tell whether the 'memory deserialized' version of this test would have 
passed.
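    For reference, here is a rough sketch (hypothetical names, not the actual 
test code) of the kind of iterator described above: it advertises roughly 2GB 
of data while holding only a single small array, so that much memory is only 
materialized once Kryo serializes it into its chunked byte buffers.
    
    ```scala
    // Rough sketch only: an iterator that "produces" ~2GB of data but reuses
    // one small array, so nothing that large exists until serialization time.
    val chunk = new Array[Byte](4 << 20)               // one reused 4MB array
    val numChunks = ((2L << 30) / chunk.length).toInt  // ~512 repeats for ~2GB
    val bigButCheap: Iterator[Array[Byte]] =
      Iterator.fill(numChunks)(chunk)                  // same array on every next()
    ```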
    
    Please advise: is it possible to increase sbt's process heap size, or 
should I remove/comment/ignore these tests?
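    
    In case it helps, one possible way to raise the heap of the forked test 
JVMs in sbt is something along these lines (just a sketch; the actual Spark 
build may configure this elsewhere, e.g. via SBT_OPTS or its plugin settings):
    
    ```scala
    // sketch of an sbt setting, not necessarily how Spark's build does it
    fork in Test := true                 // run tests in a separate JVM
    javaOptions in Test += "-Xmx4g"      // give that JVM a larger heap
    ```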


