Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12395#discussion_r60556860
  
    --- Diff: core/src/main/scala/org/apache/spark/memory/UnifiedMemoryManager.scala ---
    @@ -187,7 +187,6 @@ object UnifiedMemoryManager {
       // This serves a function similar to `spark.memory.fraction`, but guarantees that we reserve
       // sufficient memory for the system even for small heaps. E.g. if we have a 1GB JVM, then
       // the memory used for execution and storage will be (1024 - 300) * 0.75 = 543MB by default.
    -  private val RESERVED_SYSTEM_MEMORY_BYTES = 300 * 1024 * 1024
    --- End diff --
    
    Let me back up a second -- this minimum is really specific to the new memory
    manager. Actually, should it apply to the old memory manager at all? That is,
    what error were you referring to initially? I had assumed you meant that, even
    in legacy mode, this limit is later applied in some confusing way, but I'm no
    longer sure that was the problem. If it really is a required minimum even when
    using legacy mode, we should fail faster; if it isn't actually required in
    legacy mode, then we should fix whatever thinks it is.
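
    For reference, here is a minimal Scala sketch (assumed names, not the actual
    UnifiedMemoryManager code) of the calculation the removed comment describes:
    a fixed 300MB floor is subtracted from the heap before the memory fraction is
    applied, and a `require` marks the spot where a fail-fast check on too-small
    heaps could live.

        object ReservedMemorySketch {
          // 300MB floor, matching the removed RESERVED_SYSTEM_MEMORY_BYTES constant
          val reservedSystemMemoryBytes: Long = 300L * 1024 * 1024

          def usableMemoryBytes(systemMemoryBytes: Long, memoryFraction: Double = 0.75): Long = {
            // fail fast if the heap cannot even cover the reserved floor
            require(systemMemoryBytes > reservedSystemMemoryBytes,
              s"Heap size $systemMemoryBytes bytes is smaller than the reserved $reservedSystemMemoryBytes bytes")
            // e.g. for a 1GB JVM: (1024MB - 300MB) * 0.75 ~= 543MB for execution and storage
            ((systemMemoryBytes - reservedSystemMemoryBytes) * memoryFraction).toLong
          }
        }

    For a 1GB heap, `ReservedMemorySketch.usableMemoryBytes(1024L * 1024 * 1024)`
    returns roughly 543MB worth of bytes, matching the example in the diff above.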

