Ashish Pancholi created DRILL-6813:
--------------------------------------
Summary: Inner join between two tar.gz files throws error: One
or more nodes ran out of memory while executing the query
Key: DRILL-6813
URL: https://issues.apache.org/jira/browse/DRILL-6813
Project: Apache Drill
Issue Type: Bug
Affects Versions: 1.14.0
Environment: [^TBL_MOREDATA1-20180924181406.tar.gz]
Reporter: Ashish Pancholi
I am using `Apache Drill` version `1.14` on a Windows system with 8 GB RAM,
and I am running Drill with the command:
{code:java}
sqlline.bat -u "jdbc:drill:zk=local"{code}
I am trying to execute a join query on two compressed, archived (`tar.gz`)
CSV files.
Query:
{code:java}
SELECT * FROM
dfs.`C:\Users\admin\Desktop\DRILL_FILES\csvFileParquet\TBL_MOREDATA-20180924181406.tar.gz`
AS Table0 INNER JOIN
dfs.`C:\Users\admin\Desktop\DRILL_FILES\csvFileParquet\TBL_MOREDATA1-20180924181406.tar.gz`
AS Table1 ON Table0.columns[0]=Table1.columns[0]{code}
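A narrower projection may also reduce memory pressure: Drill's text reader exposes each header-less CSV line as a single `columns` array, so `columns[0]` is the first field of each row, and projecting only the fields that are actually needed gives the join less data to buffer than `SELECT *`. A sketch of the same join that selects just the key column (assuming only `columns[0]` is of interest; adjust the projection to the real columns):
{code:sql}
SELECT Table0.columns[0] AS join_key
FROM
dfs.`C:\Users\admin\Desktop\DRILL_FILES\csvFileParquet\TBL_MOREDATA-20180924181406.tar.gz`
AS Table0 INNER JOIN
dfs.`C:\Users\admin\Desktop\DRILL_FILES\csvFileParquet\TBL_MOREDATA1-20180924181406.tar.gz`
AS Table1 ON Table0.columns[0]=Table1.columns[0]{code}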
But an out-of-memory error occurred:
{code:java}
org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or
more nodes ran out of memory while executing the query. Unable to allocate
buffer of size 131072 (rounded from 86104) due to memory limit (630194176).
Current allocation: 630108434 Fragment 0:0 [Error Id:
585c0644-5fd5-446e-b9b3-d48e0771eb2a on DESKTOP-SM3E3KM:31010]{code}
To resolve the issue, I tried updating the `config\drill-env.sh` file, but the
issue remains the same, and it looks like the changes to the script are not
taking effect: even when I set the direct memory beyond the system memory
(RAM), Drill starts up peacefully every time, without even complaining that
the memory limit has been exceeded.
{code:java}
export DRILLBIT_MAX_PROC_MEM=12G
export DRILL_HEAP=2G
export DRILL_MAX_DIRECT_MEMORY=10G{code}
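One possible explanation (an assumption, not confirmed): when Drill runs in embedded mode on Windows via `sqlline.bat`, the `drill-env.sh` shell script may never be sourced, which would explain why the exports above have no visible effect. As a workaround that does not depend on the startup scripts, the per-query memory budget can be raised through Drill's standard `planner.memory.max_query_memory_per_node` option, e.g.:
{code:sql}
-- Raise the per-query, per-node memory budget (value in bytes; 4 GB here).
alter system set `planner.memory.max_query_memory_per_node` = 4294967296;{code}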
My system's main memory, however, is only 8 GB.
Please help me resolve the out-of-memory error. I have also run the queries
below, following the troubleshooting instructions, but the issue remains the
same.
{code:java}
alter session set `planner.enable_hashagg` = false;
alter session set `planner.enable_hashjoin` = false;
alter session set `planner.width.max_per_node` = 3;
alter system set `planner.width.max_per_query` = 100;{code}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)