[ 
https://issues.apache.org/jira/browse/DRILL-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014790#comment-16014790
 ] 

ASF GitHub Bot commented on DRILL-5516:
---------------------------------------

Github user paul-rogers commented on the issue:

    https://github.com/apache/drill/pull/839
  
    General suggestion, perhaps change the title to more clearly describe the 
fix. Maybe "Limit memory usage for Hive reader" or some such. I originally read 
"use max allowable memory" as perhaps meaning to use the full 10 GB that the 
allocator gives to each operator...


> Use max allowed allocated memory when defining batch size for hbase record 
> reader
> ---------------------------------------------------------------------------------
>
>                 Key: DRILL-5516
>                 URL: https://issues.apache.org/jira/browse/DRILL-5516
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Storage - HBase
>    Affects Versions: 1.10.0
>            Reporter: Arina Ielchiieva
>            Assignee: Arina Ielchiieva
>
> If the early limit 0 optimization is enabled (alter session set 
> `planner.enable_limit0_optimization` = true), then when executing limit 0 queries 
> Drill will return data types from the available metadata if possible.
> When Drill cannot determine data types from metadata (or if the early limit 0 
> optimization is disabled), Drill will read the first batch of data and 
> determine the schema from it.
> The HBase reader determines the max batch size using a magic number (4000 rows), 
> which can lead to OOM when rows are large. The overall vector/batch size issue 
> will be reconsidered in future releases; this is a temporary fix to avoid OOM.
> Instead of using a row count, we will use the max allowed allocated memory, 
> which will default to 64 MB. If the first row in a batch is larger than the 
> allowed default, it will still be written to the batch, but the batch will 
> contain only that row.
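
The batch-fill rule described above (stop at the memory limit, but always admit at least one row) can be sketched roughly as follows. This is an illustrative sketch only, not Drill's actual reader code; the names (MAX_BATCH_MEMORY, fillBatch) and the use of per-row byte sizes are assumptions for demonstration.

```java
import java.util.ArrayList;
import java.util.List;

public class MemoryLimitedBatch {
    // Assumed default limit per the description above: 64 MB.
    static final long MAX_BATCH_MEMORY = 64L * 1024 * 1024;

    /**
     * Fills one batch from a stream of rows, stopping once the memory
     * limit is reached. rowSizes stands in for the byte size of each
     * incoming row; the real reader would measure vector memory instead.
     */
    static List<Long> fillBatch(List<Long> rowSizes) {
        List<Long> batch = new ArrayList<>();
        long used = 0;
        for (long size : rowSizes) {
            // Always admit the first row, even if it alone exceeds the
            // limit; in that case the batch contains only this row.
            if (!batch.isEmpty() && used + size > MAX_BATCH_MEMORY) {
                break;
            }
            batch.add(size);
            used += size;
            if (used >= MAX_BATCH_MEMORY) {
                break; // batch is full
            }
        }
        return batch;
    }
}
```

With this rule, an oversized first row yields a one-row batch rather than an allocation failure, which is the behavior the fix is after.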



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
