Github user paul-rogers commented on a diff in the pull request:

    https://github.com/apache/drill/pull/1179#discussion_r177619465
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/flatten/FlattenRecordBatch.java ---
    @@ -237,7 +237,10 @@ protected IterOutcome doWork() {
     
       private void handleRemainder() {
         int remainingRecordCount = flattener.getFlattenField().getAccessor().getInnerValueCount() - remainderIndex;
    -    if (!doAlloc(remainingRecordCount)) {
    +
    +    // remainingRecordCount can be much higher than number of rows we will have in outgoing batch.
    +    // Do memory allocation only for number of rows we are going to have in the batch.
    +    if (!doAlloc(Math.min(remainingRecordCount, flattenMemoryManager.getOutputRowCount()))) {
    --- End diff ---
    
    Not related to this fix at all... If we run out of memory, we should throw 
an `OutOfMemoryException` rather than trying to set flags and handle the case. 
In particular, after the batch sizing fixes, if we can't allocate memory now, 
something is very wrong and we'll never be able to. This code may be a vestige 
of the old system where operators tried to negotiate via `OUT_OF_MEMORY` 
statuses...
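
    As a sketch of the fail-fast style suggested above (the stub `doAlloc`, the fixed row budget, and the harness class are hypothetical stand-ins, not the actual `FlattenRecordBatch` code):

    ```java
    // Hypothetical sketch: fail fast on allocation failure instead of
    // returning a flag for the caller to turn into an OUT_OF_MEMORY status.
    public class AllocSketch {

        // Stand-in for Drill's OutOfMemoryException.
        static class OutOfMemoryException extends RuntimeException {
            OutOfMemoryException(String message) { super(message); }
        }

        // Stand-in for doAlloc: pretend allocation fails once the
        // requested row count exceeds an assumed fixed budget.
        static boolean doAlloc(int recordCount) {
            return recordCount <= 1024;
        }

        // After batch sizing, a failed allocation here means something is
        // very wrong and retrying will never succeed, so throw immediately.
        static void allocateOutgoing(int recordCount) {
            if (!doAlloc(recordCount)) {
                throw new OutOfMemoryException(
                    "Unable to allocate batch for " + recordCount + " records");
            }
        }

        public static void main(String[] args) {
            allocateOutgoing(512); // within the budget, succeeds silently
            try {
                allocateOutgoing(100_000); // over budget, throws
            } catch (OutOfMemoryException e) {
                System.out.println("caught: " + e.getMessage());
            }
        }
    }
    ```

    The point of the sketch is that the caller no longer needs remainder/flag bookkeeping for the failure path; the exception propagates to whatever top-level handler fails the query.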

