If you've already set the limit at 4GB, there might be something else going
on. We'll take a look at this in more detail, but you shouldn't expect a
solution soon (probably in 1.14.0, since 1.13.0 is already on its way out for
release).
For now, bumping up the limit as you've done, reducing the
"pla
Hi Kunal,
First of all, thanks for such a good explanation; it really helped me
understand a few things. But you have mentioned that in case of failure the
"Drillbits capped at around 1.2GB", and suggested to "increase the
memory-per-query-per-node from the current 2GB to a higher level".
Are you say
Anup
If you look at the successful run's major fragment overview, you can see
the amount of memory consumed across the 5 nodes and the average per node
(shown in the table):
Hi Kunal,
Please find below link :-
https://drive.google.com/open?id=13NVDqSgDD-Pe6H0smAkvzqktgXURgZF4
SQL File contains platform details and log files contains success/failure logs
of query.
On Wed, Mar 14, 2018 7:51 PM, Kunal Khatua ku...@apache.org wrote:
Hi Anup
Can you share this as a file ? There seems to be some truncation of the
contents.
Share it using some online service like Google Drive or Dropbox, since the
mailing list might not allow for attachments.
Thanks
~ Kunal
On Tue, Mar 13, 2018 at 11:44 PM, Anup Tiwari
wrote:
JSON Profile when Succeeded :-
{"id":{"part1":2690693429455769721,"part2":6509382378722762087},"type":1,"start":1521007764471,"end":1521007906770,"query":"create
table a_games_log_visit_utm as\nselect\ndistinct\nglv.sessionid,\ncase when
(UFG('utms=', glv.url, '&') <> 'null') then UFG('utms=', gl
Hi Kunal,
Please find below cluster/platform details :-
Number of Nodes : 5
RAM/Node : 32GB
Core/Node : 8
DRILL_MAX_DIRECT_MEMORY="20G"
DRILL_HEAP="8G"
DRILL VERSION = 1.12.0
HADOOP VERSION = 2.7.3
ZOOKEEPER VERSION = 3.4.8 (Installed in Distributed Mode on 3 nodes)
planner.memory.max_query_memory_per_node
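[Editor's note: the two memory variables listed above are normally exported from conf/drill-env.sh on each node. A minimal sketch matching the values quoted, assuming the standard Drill install layout (this is a config fragment for illustration, not the poster's actual file):]

```shell
# conf/drill-env.sh (set identically on each of the 5 nodes)
export DRILL_MAX_DIRECT_MEMORY="20G"   # off-heap (direct) memory used by query execution
export DRILL_HEAP="8G"                 # JVM heap for the Drillbit process
```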
ll end up consuming unbounded
> memory.
>
>
> Thanks,
> Sorabh
>
>
> From: Anup Tiwari
> Sent: Monday, March 12, 2018 6:45:12 AM
> To: user@drill.apache.org
> Subject: Re: [Drill 1.12.0] : RESOURCE ERROR: Not enough memory for
>
Sent: Monday, March 12, 2018 6:45:12 AM
To: user@drill.apache.org
Subject: Re: [Drill 1.12.0] : RESOURCE ERROR: Not enough memory for internal
partitioning and fallback mechanism for HashAgg to use unbounded memory is
disabled
Hi Kunal,
I have executed below command and query got executed in 38.763 sec.
alter session set `drill.exec.hashagg.fallback.enabled`=TRUE;
Can you tell me what the problem is with setting this variable, since you
mentioned it will risk instability?
On Mon, Mar 12, 2018 6:27 PM, Anup T
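[Editor's note: as a side note on the fallback switch discussed above, its current value can be inspected through Drill's sys.options table, and it can be reverted once a larger memory budget is in place. A sketch; verify the option name against your Drill version:]

```sql
-- Check the current value of the HashAgg fallback option
SELECT * FROM sys.options WHERE name LIKE '%hashagg%';

-- Revert to the safer default once the memory budget has been raised,
-- so HashAgg can no longer grow without bound
ALTER SESSION SET `drill.exec.hashagg.fallback.enabled` = FALSE;
```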
Hi Kunal,
I am still getting this error for some other query. I have increased the
planner.memory.max_query_memory_per_node variable from 2 GB to 10 GB at session
level, but am still hitting this issue.
Can you tell me how this was handled in earlier Drill versions (<1.11.0)?
On Mon, Mar
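[Editor's note: the session-level increase described above is expressed in bytes; 10 GB is 10737418240 bytes. A sketch of both the session-scoped and the cluster-wide form, to be verified against your Drill version:]

```sql
-- Raise the per-query, per-node memory budget for the current session only
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 10737418240;

-- Or persist the change cluster-wide
ALTER SYSTEM SET `planner.memory.max_query_memory_per_node` = 10737418240;
```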
Hi Kunal,
Thanks for the info. I went with option 1 and increased
planner.memory.max_query_memory_per_node, and now the queries are working fine.
Will let you know in case of any issues.
On Mon, Mar 12, 2018 2:30 AM, Kunal Khatua ku...@apache.org wrote:
Here is the background of your issue:
https://drill.apache.org/docs/sort-based-and-hash-based-memory-constrained-operators/#spill-to-disk
HashAgg introduced a Spill-to-disk capability in 1.11.0 that allows for
Drill to run a query's HashAgg in a memory constrained environment. The
memory required
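[Editor's note: as background for the spill-to-disk behaviour described above, the spill location is a boot-time option set in drill-override.conf. A minimal sketch assuming the unified spill options documented for Drill 1.11+; the paths are hypothetical and the option names should be verified against your version:]

```
drill.exec: {
  spill: {
    fs: "file:///",
    directories: [ "/tmp/drill/spill" ]
  }
}
```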
Hi All,
I recently upgraded from 1.10.0 to 1.12.0, and in one of my queries I got the
below error :-
INFO o.a.d.e.p.i.aggregate.HashAggregator - User Error Occurred: Not enough
memory for internal partitioning and fallback mechanism for HashAgg to use
unbounded memory is disabled. Either enable fallback