Github user revans2 commented on a diff in the pull request:

    https://github.com/apache/storm/pull/1053#discussion_r52767645
  
    --- Diff: conf/defaults.yaml ---
    @@ -263,7 +263,7 @@ topology.state.checkpoint.interval.ms: 1000
     # topology priority describing the importance of the topology in decreasing importance starting from 0 (i.e. 0 is the highest priority and the priority importance decreases as the priority number increases).
     # Recommended range of 0-29 but no hard limit set.
     topology.priority: 29
    -topology.component.resources.onheap.memory.mb: 128.0
    +topology.component.resources.onheap.memory.mb: 256.0
    --- End diff ---
    
    First of all, RAS should be scheduling ackers just like any other bolt.  The system will have some overhead, and so will the JVM itself.  If we assume that the JVM will only ever use the amount of memory specified for the heap, we are going to be in trouble when someone starts to use their entire heap.
    
    I would say initially that we want two configs for worker system overhead, one on heap and one off heap.  Then, during scheduling, we reserve that many MB * number_of_slots on each node so it cannot be handed out to components.  Increasing the default does not solve this problem, except in the case where we are not using all of the heap.
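
    To make that concrete, here is a rough sketch of the kind of defaults.yaml entries I have in mind.  The names below are only placeholders to illustrate the idea, not settings that exist today:

        # hypothetical per-worker system overhead, on heap and off heap,
        # reserved by the scheduler in addition to the component resources
        topology.worker.system.overhead.onheap.memory.mb: 64.0
        topology.worker.system.overhead.offheap.memory.mb: 64.0

    With numbers like those, a supervisor with 4 slots would have roughly (64.0 + 64.0) MB * 4 = 512 MB set aside by the scheduler before any executors are placed on the node.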

