Github user sudheeshkatkam commented on a diff in the pull request:

    https://github.com/apache/drill/pull/838#discussion_r117590009
  
    --- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java 
---
    @@ -173,9 +174,8 @@ public IterOutcome next() {
     
             currentReader.allocate(mutator.fieldVectorMap());
           } catch (OutOfMemoryException e) {
    -        logger.debug("Caught Out of Memory Exception", e);
             clearFieldVectorMap();
    -        return IterOutcome.OUT_OF_MEMORY;
    +        throw UserException.memoryError(e).build(logger);
    --- End diff ---
    
    I am not sure if this specific line change is required, so please correct me if I am wrong. Thinking out loud…
    
    There are three places in ScanBatch where OutOfMemoryException is handled. Since OutOfMemoryException is an unchecked exception, I could not quickly find all the calls that can trigger it in this method.
    
    The [first case](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L175) and [second case](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L215) are similar in that `reader.allocate(...)` fails. So although there is no unwind logic, it seems to me this case is handled correctly: no records have been read yet, so there is nothing to unwind. Say this triggers spilling in sort; then the query could still complete successfully if the allocation succeeds the next time (and so on). Am I following this logic correctly?
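    To make the reasoning concrete, here is a minimal sketch — not Drill's actual code, all class names below are mine — of why returning `OUT_OF_MEMORY` from a failed pre-read allocation is retry-friendly: no records were read, so there is nothing to unwind, and a downstream operator (e.g. sort) may spill, free memory, and call `next()` again.

```java
public class AllocRetrySketch {
    enum IterOutcome { OK, OUT_OF_MEMORY }

    // Stand-in for Drill's unchecked OutOfMemoryException.
    static class OutOfMemorySim extends RuntimeException {}

    // Fails the first `failures` attempts, then succeeds, as if memory
    // was freed (e.g. by spilling) between calls.
    static class FlakyAllocator {
        int failures;
        FlakyAllocator(int failures) { this.failures = failures; }
        void allocate() { if (failures-- > 0) throw new OutOfMemorySim(); }
    }

    // Mirrors the pre-read allocation path: on failure, report
    // OUT_OF_MEMORY instead of killing the query.
    static IterOutcome next(FlakyAllocator allocator) {
        try {
            allocator.allocate();
            return IterOutcome.OK;
        } catch (OutOfMemorySim e) {
            return IterOutcome.OUT_OF_MEMORY; // caller may free memory and retry
        }
    }

    public static void main(String[] args) {
        FlakyAllocator alloc = new FlakyAllocator(1);
        System.out.println(next(alloc)); // OUT_OF_MEMORY on the first attempt
        System.out.println(next(alloc)); // OK on retry
    }
}
```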
    
    But this does not seem to be the case, as [TestOutOfMemoryOutcome](https://github.com/apache/drill/blob/master/exec/java-exec/src/test/java/org/apache/drill/TestOutOfMemoryOutcome.java#L65) triggers an OutOfMemoryException during the ["next" allocation](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L172), and all the tests expect the query to fail.
    
    And then there is the [third case](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L247), which is a general catch (e.g. `reader.next()` throws OutOfMemoryException). And as you mentioned, readers cannot unwind, so that correctly fails the fragment.
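    For contrast, here is a minimal sketch of that third case — again not Drill's actual code, names are mine. Once `reader.next()` throws mid-read, records may already sit in the value vectors with no way to roll them back, so the only safe option is to wrap and rethrow (analogous to the `UserException.memoryError(e).build(logger)` in the diff), failing the fragment rather than returning a retryable outcome.

```java
public class FailFastSketch {
    interface RecordReader { int next(); } // returns record count, may throw

    static int next(RecordReader reader) {
        try {
            return reader.next();
        } catch (RuntimeException e) {
            // No unwind is possible for a partially filled batch:
            // wrap and propagate so the whole fragment fails.
            throw new IllegalStateException("ScanBatch failed; fragment must stop", e);
        }
    }

    public static void main(String[] args) {
        try {
            next(() -> { throw new RuntimeException("OOM mid-read"); });
        } catch (IllegalStateException e) {
            System.out.println("fragment failed: " + e.getCause().getMessage());
        }
    }
}
```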

