[ 
https://issues.apache.org/jira/browse/DRILL-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384349#comment-14384349
 ] 

Jason Altekruse commented on DRILL-1944:
----------------------------------------

I'm having trouble reproducing this. I'm not going to close it as invalid, but 
any additional information you can provide about the environment might help me 
reproduce it. I've only tried this from an embedded drillbit within a unit 
test. I will try in a clustered environment to see what happens, but I don't 
see why it should affect anything; we always read a single file from a single 
thread.
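For anyone else trying to reproduce this, the failing input can be regenerated without the attachment: a single JSON record holding one scalar array of ~2000 strings. This is a minimal sketch; the field name `str_array` and the element values are assumptions, not taken from the attached scalar-array-2000.json.

```python
import json

def write_wide_record(path, n):
    # One record with a single scalar array of n string elements.
    record = {"str_array": ["value_%d" % i for i in range(n)]}
    with open(path, "w") as f:
        json.dump(record, f)

# Per the report: 2000 elements fails, 1500 succeeds.
write_wide_record("wide-record-2000.json", 2000)
write_wide_record("wide-record-1500.json", 1500)
```

Pointing a `select *` at the generated files should show whether the failure tracks the array length alone or something environment-specific.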

> JsonReader fails to read arrays of size 2000
> --------------------------------------------
>
>                 Key: DRILL-1944
>                 URL: https://issues.apache.org/jira/browse/DRILL-1944
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - JSON
>            Reporter: Rahul Challapalli
>            Assignee: Jason Altekruse
>            Priority: Blocker
>         Attachments: error.log, scalar-array-2000.json
>
>
> git.commit.id.abbrev=b491cdb
> I tried a select * on a json file which contained a single array of strings. 
> Drill fails to read the file if the array has 2000 elements. However, it 
> works for an array size of 1500, so I am not sure whether this is a known 
> limitation or a bug.
> Query :
> {code}
> jdbc:drill:schema=dfs.drillTestDir> select * from 
> `data-shapes/wide-records/single/wide-record1.json`;
> Query failed: Query stopped., Record was too large to copy into vector. [ 
> 5b320c2f-fff3-4de5-8095-8043c30510fd on qa-node191.qa.lab:31010 ]
> {code}
> I attached the data file and the error log. Let me know if you have any 
> questions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
