[ https://issues.apache.org/jira/browse/ARROW-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300939#comment-16300939 ]

ASF GitHub Bot commented on ARROW-1943:
---------------------------------------

jacques-n commented on issue #1439: ARROW-1943: [JAVA] handle setInitialCapacity for deeply nested lists
URL: https://github.com/apache/arrow/pull/1439#issuecomment-353517425
 
 
   I'm +1 on this approach. It may not be perfect, but it is definitely far
better than the old approach.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Handle setInitialCapacity() for deeply nested lists of lists
> ------------------------------------------------------------
>
>                 Key: ARROW-1943
>                 URL: https://issues.apache.org/jira/browse/ARROW-1943
>             Project: Apache Arrow
>          Issue Type: Bug
>            Reporter: Siddharth Teotia
>            Assignee: Siddharth Teotia
>              Labels: pull-request-available
>
> The current implementation of setInitialCapacity() multiplies the capacity 
> by a factor of 5 at every list level we descend into.
> So if the schema is LIST(LIST(LIST(LIST(LIST(LIST(LIST(BIGINT))))))) and we 
> start with an initial capacity of 128, we end up throwing 
> OversizedAllocationException from the BigIntVector: the capacity grew by a 
> factor of 5 at every level, so by the time we reached the inner scalar 
> vector that actually stores the data, we were well over the max size limit 
> per vector (1MB).
> We saw this problem in Dremio when we failed to read deeply nested JSON data.
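
For illustration, here is a minimal, self-contained arithmetic sketch (not
Arrow code; it only assumes the 8-byte BigInt width and the 1MB per-vector
limit cited in the description above) showing how a factor of 5 per level
blows past the limit at nesting depth 7:

    public class NestedCapacitySketch {
        public static void main(String[] args) {
            long capacity = 128;                      // initial value capacity
            final int depth = 7;                      // LIST levels in the example schema
            final long maxBytesPerVector = 1L << 20;  // 1MB per-vector limit from the report

            // Multiply by 5 at each list level, as the old setInitialCapacity() did.
            for (int level = 1; level <= depth; level++) {
                capacity *= 5;
                System.out.printf("level %d: capacity = %,d values%n", level, capacity);
            }

            // The inner BigIntVector stores 8 bytes per value.
            long bytesNeeded = capacity * 8;
            System.out.printf("inner BigIntVector needs %,d bytes; limit is %,d bytes%n",
                    bytesNeeded, maxBytesPerVector);
            // 128 * 5^7 = 10,000,000 values -> 80,000,000 bytes, far over 1MB,
            // which is why OversizedAllocationException is thrown.
        }
    }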



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)