[ https://issues.apache.org/jira/browse/DRILL-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16012676#comment-16012676 ]

ASF GitHub Bot commented on DRILL-5516:
---------------------------------------

Github user paul-rogers commented on the issue:

    https://github.com/apache/drill/pull/839
  
    The right approach is not to simply allow HBase to use more memory. The 
right approach is to limit memory.
    
    Fortunately, another project is underway to do just that. Let's 
collaborate. In the next week or so I'll do a PR for the framework to limit 
batch sizes in readers, along with an implementation for the "compliant" text 
readers.
    
    Maybe you can use that framework to retrofit the HBase reader to also limit 
its batch size. Basically, we limit the length of the longest vector to 16 MB.
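    
    A rough sketch of what such a cap could look like (all names here are 
illustrative placeholders, not actual Drill APIs):
    
        import java.util.Iterator;
        
        // Sketch: stop filling a batch once its largest value vector has
        // reached a 16 MB cap. Row and Batch are placeholder interfaces,
        // not Drill classes.
        public class SizeLimitedReader {
            private static final long MAX_VECTOR_BYTES = 16L * 1024 * 1024;
        
            public int readBatch(Iterator<Row> rows, Batch batch) {
                int rowCount = 0;
                // Check the cap before each row; the last row added may
                // overshoot slightly, which a real reader would account for.
                while (rows.hasNext()
                        && batch.sizeOfLargestVector() < MAX_VECTOR_BYTES) {
                    batch.addRow(rows.next());
                    rowCount++;
                }
                return rowCount;
            }
        
            interface Row {}
        
            interface Batch {
                void addRow(Row row);
                long sizeOfLargestVector();
            }
        }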
    
    The present patch, using unlimited memory, has all kinds of other problems 
-- the very problems we are trying to solve -- so it is not helpful to move 
forward in one area while moving backward in another.


> Use max allowed allocated memory when defining batch size for hbase record 
> reader
> ---------------------------------------------------------------------------------
>
>                 Key: DRILL-5516
>                 URL: https://issues.apache.org/jira/browse/DRILL-5516
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Storage - HBase
>    Affects Versions: 1.10.0
>            Reporter: Arina Ielchiieva
>            Assignee: Arina Ielchiieva
>
> If early limit 0 optimization is set to true (alter session set 
> `planner.enable_limit0_optimization` = true), then when executing limit 0 
> queries Drill will return data types from the available metadata when 
> possible.
> When Drill cannot determine data types from metadata (or if early limit 0 
> optimization is set to false), Drill will read the first batch of data and 
> determine the schema from it.
> The HBase reader determines max batch size using a magic number (4000 rows), 
> which can lead to OOM when rows are large. The overall vector/batch size 
> issue will be reconsidered in future releases; this is a temporary fix to 
> avoid OOM.
> Instead of a row count, we will use the max allowed allocated memory, which 
> will default to 64 MB. If the first row in a batch is larger than the 
> allowed default, it will still be written to the batch, but the batch will 
> then contain only that row.
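
A minimal sketch of that cutoff (the class name and the raw-byte-array model 
are illustrative stand-ins, not the actual patch; real Drill batches hold 
value vectors):

    import java.util.List;
    
    // Sketch: fill a batch until a 64 MB byte budget is reached, but always
    // admit the first row even if it alone exceeds the budget, so a batch
    // may consist of a single oversized row.
    public class MemoryBoundedBatcher {
        private static final long MAX_BATCH_BYTES = 64L * 1024 * 1024;
    
        /** Returns how many leading rows fit in the next batch. */
        public int rowsInNextBatch(List<byte[]> pendingRows) {
            long usedBytes = 0;
            int count = 0;
            for (byte[] row : pendingRows) {
                // The first row is always admitted; after that, stop before
                // the budget would be exceeded.
                if (count > 0 && usedBytes + row.length > MAX_BATCH_BYTES) {
                    break;
                }
                usedBytes += row.length;
                count++;
            }
            return count;
        }
    }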



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
