Hi Kunal,
Please see the dataset below, which I've provided this week. Hope it helps:
[ {
"type" : "quality-rules",
"reference" : {
"href" : "",
"name" : "Avoid unreferenced Tables",
"key" : "1634",
"critical" : false
},
"result" : {
"grade" : 2,
"violationRatio" : {
"t
> address this issue as we don't have a repro for
> this. Any chance you can provide a sample anonymized data set. The JSON data
> doesn't have to be meaningful, but we need to be able to reproduce it to
> ensure that we are indeed addressing the issue you faced.
>
> Thanks
Thanks,
Arjun
____
From: Yun Liu
Sent: Tuesday, November 7, 2017 1:46 AM
To: user@drill.apache.org
Subject: RE: Drill Capacity
Hi Arjun and Paul,
Yep, those are turned on and I am reading them from sqlline.log. The only max
allocation number I am seeing is 10,000,000,000. Posted the logs in my Dropbox.
If you can post your logs somewhere, I'll d/l them and take a look.
- Paul
> On Nov 6, 2017, at 7:27 AM, Yun Liu wrote:
>
> Hi Paul,
>
> I am using Drill v 1.11.0 so I am only seeing sqlline.log and
> sqlline_queries.log. Hopefully the same.
>
> I am following your instructions
JSON reader scan operator. The peak memory more-or-less reflects the batch
size. What is that number?
With those, we can tell if the settings and sizes we think we are using are, in
fact, correct.
Thanks,
- Paul
> On Nov 3, 2017, at 1:19 PM, Yun Liu wrote:
>
Structure 2: File has an array of json objects, like below
[ {obj1},{obj2}..,{objn}]
Structure 3: File has json objects as below
{obj1}
{obj2}
..
{objn}
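(An aside for anyone testing this: structure 3, one object per line, is usually the friendlier layout for Drill's streaming JSON reader, since the file doesn't have to be parsed as one giant array. A minimal sketch for converting between the two layouts — the function name and sample records below are mine, not from this thread:)

```python
import json

def array_json_to_ndjson(src_path, dst_path):
    # Read a "structure 2" file: a single JSON array of objects.
    # Note: json.load materializes the whole array in memory.
    with open(src_path) as src:
        records = json.load(src)
    # Write "structure 3": one compact JSON object per line.
    with open(dst_path, "w") as dst:
        for rec in records:
            dst.write(json.dumps(rec) + "\n")

# Tiny demo with anonymized records shaped like the sample in this thread.
with open("sample.json", "w") as f:
    f.write('[{"key": "1634", "critical": false}, {"key": "1635", "critical": true}]')
array_json_to_ndjson("sample.json", "sample.ndjson")
with open("sample.ndjson") as f:
    lines = f.read().splitlines()
print(len(lines))  # 2
```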
I was checking if this is the case here..
Thanks,
Arjun
________
From: Yun Liu
Sent: Saturday, November 4, 2017 2:27
Hi Arjun,
Column 4 has the most data and is a bit long here. The other 3 columns have
maybe a word or 2. Thanks for your patience.
[ {
"type" : "quality-rules",
"reference" : {
"href" : "",
"name" : "Avoid unreferenced Tables",
"key" : "1634",
"critical" : false
},
"result" :
Kunal mentioned.
That is three separate possible solutions. Try them one by one or (carefully)
together.
- Paul
>> On 11/2/17, 12:31 PM, "Yun Liu" wrote:
>>
>>Hi Kunal and Andries,
>>
>>Thanks for your reply. We need json in this case because Drill
>> only supports up to 65536 columns in a csv file.
i.e., in the drill-override.conf file:
sort: {
external: {
disable_managed: false
}
}
Please let us know if this change helped,
-- Boaz
On 11/2/17, 1:12 PM, "Yun Liu" wrote:
Please help me as to what further information I could provide.
batchGroups.size 1
spilledBatchGroups.size 0
allocated memory 42768000
allocator limit 41943040
Current settings are:
planner.memory.max_query_memory_per_node= 10GB
HEAP to 12G
Direct memory to 32G
Perm to 1024M
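(For anyone following along: heap and direct memory of this kind are normally set in conf/drill-env.sh. A hedged sketch matching the numbers above — the DRILL_HEAP and DRILL_MAX_DIRECT_MEMORY variable names are the conventional ones, but please verify against the drill-env.sh shipped with your release:)

```shell
# conf/drill-env.sh (sketch; verify variable names in your release)
export DRILL_HEAP="12G"
export DRILL_MAX_DIRECT_MEMORY="32G"
```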
What is the issue here?
Thanks,
Yun
-----Original Message-----
From: Yun Liu [mailto:y
the embedded
Drillbit?
Also did you check the larger document doesn’t have any schema changes or
corruption?
--Andries
On 11/2/17, 12:31 PM, "Yun Liu" wrote:
Hi Kunal and Andries,
Thanks for your reply. We need json in this case because Drill only
supports up to 65536 columns in a csv file.
In the short term, try to bump up planner.memory.max_query_memory_per_node in
the options and see if that works for you.
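(For reference, that option can be bumped from sqlline; a sketch — the 10 GB value is only illustrative, and the option takes a value in bytes:)

```sql
-- Session-scoped; use ALTER SYSTEM instead to apply to all sessions.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 10737418240;
```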
--Andries
On 11/2/17, 7:46 AM, "Yun Liu" wrote:
Hi,
I've been using Apache Drill actively and am just wondering: what is the
capacity of Drill? I have a json
the large json, it returns successfully for some of the fields. None of these
errors really apply to me, so I am trying to understand what size of json file
Drill supports, or whether there's something else I missed.
Thanks,
Yun Liu
Solutions Delivery Consultant
321 West 44th