Github user paul-rogers commented on a diff in the pull request:

    https://github.com/apache/drill/pull/938#discussion_r137939459
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java ---
    @@ -1178,20 +1273,38 @@ private void checkGroupAndAggrValues(int incomingRowIdx) {
         hashCode >>>= bitsInMask;
         HashTable.PutStatus putStatus = null;
         long allocatedBeforeHTput = allocator.getAllocatedMemory();
    -
         // ==========================================
         // Insert the key columns into the hash table
         // ==========================================
    +    boolean noReserveMem = reserveValueBatchMemory == 0;
         try {
    +      if ( noReserveMem && canSpill ) { throw new RetryAfterSpillException(); } // proactive spill, skip put()
    +
           putStatus = htables[currentPartition].put(incomingRowIdx, htIdxHolder, hashCode);
    +
    +    } catch (RetryAfterSpillException re) {
    +      if ( ! canSpill ) { throw new OutOfMemoryException(getOOMErrorMsg("Can not spill")); }
    --- End diff ---
    
    This is the message sent to the log and to the user. Should we explain why we can't spill, and what to do about it? Something like:
    
    "Incoming batch too large and no in-memory partitions to spill. Increase the memory assigned to the Hash Agg."
    
    Replace the above wording with the actual reasons and fixes.
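
    As a rough illustration of the suggestion, the message passed to `getOOMErrorMsg` could carry both the cause and a remedy. The helper name `buildCannotSpillMsg`, its parameters, and the exact wording below are hypothetical, not from the Drill codebase; the config option named is only an example of the kind of fix the message could point to:

    ```java
    // Sketch only: compose an OOM message that states the reason spilling is
    // impossible and suggests a concrete remedy, instead of the bare "Can not spill".
    public class OomMessageSketch {

      // Hypothetical helper; in HashAggTemplate the resulting string would be
      // handed to getOOMErrorMsg(...) when canSpill is false.
      static String buildCannotSpillMsg(long allocated, long limit, int spillablePartitions) {
        return String.format(
            "Unable to spill: allocated %d of %d bytes, with %d spillable "
                + "in-memory partitions remaining. The incoming batch may be too "
                + "large for the memory assigned to the Hash Aggregate; consider "
                + "increasing the operator's memory limit.",
            allocated, limit, spillablePartitions);
      }

      public static void main(String[] args) {
        System.out.println(buildCannotSpillMsg(1048576, 2097152, 0));
      }
    }
    ```

    The point is only that the string handed to the exception should name the condition ("no spillable partitions", "batch too large") and the knob the user can turn, rather than restating that spilling failed.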

