[
https://issues.apache.org/jira/browse/DRILL-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14197192#comment-14197192
]
Hao Zhu commented on DRILL-1632:
--------------------------------
Hi Team,
Per my in-house tests, sqlline returns nothing if the row is too long.
{code}
[root@maprdemo data]# cat small.csv
1,test
[root@maprdemo data]# more giants.csv
1,testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest...
{code}
Note that giants.csv is larger than 128 KB (the 131072-byte limit).
{code}
[root@maprdemo data]# ls -altr giants.csv
-rw-r--r--. 1 mapr mapr 147402 Nov 4 15:53 giants.csv
[root@maprdemo data]# ls -altr small.csv
-rw-r--r--. 1 mapr mapr 7 Nov 4 15:55 small.csv
{code}
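For anyone reproducing this, a single over-limit row can be generated with a short script. This is only an illustrative sketch; the padding below produces a row of roughly 160 KB, not the exact 147402-byte giants.csv attached to this issue.
{code}
# Illustrative only: write one CSV row larger than the 131072-byte record limit.
with open("giants.csv", "w") as f:
    f.write("1," + "test" * 40000 + "\n")  # about 160003 bytes, well above 131072
{code}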
The result:
{code}
0: jdbc:drill:> select * from `giants.csv`;
+--------------+
|   columns    |
+--------------+
+--------------+
No rows selected (0.141 seconds)
0: jdbc:drill:> select * from `small.csv`;
+--------------+
|   columns    |
+--------------+
| ["1","test"] |
+--------------+
1 row selected (0.137 seconds)
{code}
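As a quick diagnostic, the longest line in a file can be compared against the limit before querying. This is a hypothetical helper script, not anything shipped with Drill:
{code}
# Hypothetical helper: report each file's longest row against the 131072-byte limit.
LIMIT = 131072

def longest_row_bytes(path):
    with open(path, "rb") as f:
        return max((len(line) for line in f), default=0)

for name in ("small.csv", "giants.csv"):
    row = longest_row_bytes(name)
    print(name, row, "exceeds limit" if row > LIMIT else "within limit")
{code}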
Need to know:
1. What is the use case?
2. What error message do you see?
3. Sample data and table definition?
Thanks,
Hao
> Increase Maximum Allowed Record Size From 131072 bytes
> ------------------------------------------------------
>
> Key: DRILL-1632
> URL: https://issues.apache.org/jira/browse/DRILL-1632
> Project: Apache Drill
> Issue Type: Improvement
> Components: Client - JDBC
> Affects Versions: 0.6.0
> Reporter: MUFEED USMAN
> Priority: Blocker
> Attachments: giants.csv
>
>
> The maximum allowed record size of 131072 bytes is a little low for many use
> cases. Requesting an increase from the default value.