Github user paul-rogers commented on a diff in the pull request:

    https://github.com/apache/drill/pull/938#discussion_r137939319
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java ---
    @@ -646,6 +687,46 @@ public AggOutcome doWork() {
       }
     
       /**
    +   *   Use reserved values memory (if available) to try to preempt an OOM
    +   */
    +  private void useReservedValuesMemory() {
    +    // try to preempt an OOM by using the reserved memory
    +    long reservedMemory = reserveValueBatchMemory;
    +    if ( reservedMemory > 0 ) { allocator.setLimit(allocator.getLimit() + reservedMemory); }
    +
    +    reserveValueBatchMemory = 0;
    +  }
    +  /**
    +   *   Use reserved outgoing output memory (if available) to try to preempt an OOM
    +   */
    +  private void useReservedOutgoingMemory() {
    +    // try to preempt an OOM by using the reserved memory
    +    long reservedMemory = reserveOutgoingMemory;
    +    if ( reservedMemory > 0 ) { allocator.setLimit(allocator.getLimit() + reservedMemory); }
    --- End diff --
    
    Why is it necessary to change the allocator limit? The allocator limit should be fixed: it is the amount of memory given to this operator. Shouldn't the code use its own internal limits to make decisions? That is, if allocated memory + some expected use > a defined internal size, then spill?
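    For context, a minimal sketch of the alternative described above: keep the allocator limit fixed at the operator's grant, and gate allocations on an internal threshold instead. This is not Drill's actual API; all names here (InternalLimitSketch, estimatedBatchSize, spill) are hypothetical, illustrative only.

        // Hypothetical sketch, not Drill's HashAggTemplate: the allocator limit
        // stays fixed; spill decisions use the operator's own bookkeeping.
        public class InternalLimitSketch {

          private final long operatorMemoryLimit;  // fixed grant; never raised
          private final long estimatedBatchSize;   // expected size of the next batch
          private final long reservedBatchMemory;  // reserve held back from the budget
          private long allocatedMemory;            // bytes currently in use

          public InternalLimitSketch(long limit, long batchEstimate, long reserve) {
            this.operatorMemoryLimit = limit;
            this.estimatedBatchSize = batchEstimate;
            this.reservedBatchMemory = reserve;
          }

          /** True if the next allocation fits under the internal limit. */
          private boolean nextBatchFits() {
            // The reserve shrinks the internal budget rather than growing the
            // allocator limit, so the hard limit never moves.
            long internalLimit = operatorMemoryLimit - reservedBatchMemory;
            return allocatedMemory + estimatedBatchSize <= internalLimit;
          }

          public void allocateBatch() {
            if (!nextBatchFits()) {
              spill();  // free memory first, instead of raising the limit
            }
            allocatedMemory += estimatedBatchSize;  // stand-in for the real allocation
          }

          private void spill() {
            // Stand-in: a real implementation would write a partition to disk
            // and release its buffers, reducing allocatedMemory.
            allocatedMemory = 0;
          }
        }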

