[
https://issues.apache.org/jira/browse/DRILL-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14197339#comment-14197339
]
Hao Zhu commented on DRILL-1632:
--------------------------------
Another reproduction, this time with the JSON format, and it errors out.
{code}
0: jdbc:drill:> select * from `small.json`;
+------------+------------+
|     id     |  content   |
+------------+------------+
| 1          | test       |
+------------+------------+
1 row selected (0.072 seconds)
0: jdbc:drill:> select * from `giants.json`;
Query failed: Failure while running fragment. Record is too long. Max allowed record size is 131072 bytes. [16d24629-3991-4437-8f46-8adcb270ea65]
Error: exception while executing query: Failure while trying to get next result batch. (state=,code=0)
{code}
Sample Data:
{code}
[root@maprdemo data]# more small.json
{"id": 1,"content": "test"}
[root@maprdemo data]# more giants.json
{"id": 1,"content": "testtesttesttesttesttesttestte
{code}
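For anyone who wants to reproduce this, a file like giants.json can be generated with a one-liner such as the following. This is just a sketch: the file name and record layout are assumptions matching the sample above, and the only requirement is that the single record exceed 131072 bytes.
{code}
# Hypothetical helper: build one JSON record whose "content" field is
# ~200000 characters, comfortably above the 131072-byte limit.
python -c 'import json; print(json.dumps({"id": 1, "content": "test" * 50000}))' > giants.json
{code}
Selecting from the generated file should then hit the same "Record is too long" failure shown above.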
> Increase Maximum Allowed Record Size From 131072 bytes
> ------------------------------------------------------
>
> Key: DRILL-1632
> URL: https://issues.apache.org/jira/browse/DRILL-1632
> Project: Apache Drill
> Issue Type: Improvement
> Components: Client - JDBC
> Affects Versions: 0.6.0
> Reporter: MUFEED USMAN
> Priority: Blocker
> Attachments: giants.csv
>
>
> The maximum allowed record size of 131072 bytes is too low for many use
> cases. Requesting that the default value be increased.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)