[ https://issues.apache.org/jira/browse/DRILL-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16460238#comment-16460238 ]

Dechang Gu commented on DRILL-6374:
-----------------------------------

For the OOM, here is the stack:
{code}
2018-05-01 13:40:42,457 [25172fdc-6af9-e530-dbcd-6ac47cf16b00:frag:4:41] INFO  o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes ran out of memory while executing the query. (AGGR OOM at First Phase. Partitions: 16. Estimated batch size: 26673152. values size: 1048576. Output alloc size: 1048576. Planned batches: 2 Memory limit: 856896307 so far allocated: 377618432. )
org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.

AGGR OOM at First Phase. Partitions: 16. Estimated batch size: 26673152. values size: 1048576. Output alloc size: 1048576. Planned batches: 2 Memory limit: 856896307 so far allocated: 377618432.

[Error Id: 8eb64127-e22f-4aea-aff9-db5be8ff3574 ]
        at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633) ~[drill-common-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:304) [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Caused by: org.apache.drill.exec.exception.OutOfMemoryException: AGGR OOM at First Phase. Partitions: 16. Estimated batch size: 26673152. values size: 1048576. Output alloc size: 1048576. Planned batches: 2 Memory limit: 856896307 so far allocated: 377618432.
        at org.apache.drill.exec.test.generated.HashAggregatorGen3940.spillIfNeeded(HashAggTemplate.java:1419) ~[na:na]
        at org.apache.drill.exec.test.generated.HashAggregatorGen3940.doSpill(HashAggTemplate.java:1381) ~[na:na]
        at org.apache.drill.exec.test.generated.HashAggregatorGen3940.checkGroupAndAggrValues(HashAggTemplate.java:1304) ~[na:na]
        at org.apache.drill.exec.test.generated.HashAggregatorGen3940.doWork(HashAggTemplate.java:592) ~[na:na]
        at org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:176) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:105) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at org.apache.drill.exec.physical.impl.partitionsender.PartitionSenderRootExec.innerNext(PartitionSenderRootExec.java:152) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:95) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:292) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:279) ~[drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_112]
        at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_112]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595) ~[hadoop-common-2.7.0-mapr-1707.jar:na]
        at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:279) [drill-java-exec-1.14.0-SNAPSHOT.jar:1.14.0-SNAPSHOT]
        ... 4 common frames omitted
{code}
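
Reading the numbers back out of that message (a rough back-of-the-envelope only, not the actual reservation formula in HashAggTemplate.spillIfNeeded()): if the operator has to hold the 2 planned batches of ~25 MiB for each of its 16 partitions, that alone is roughly 814 MiB, while only ~457 MiB of headroom is left under the ~817 MiB limit once the ~360 MiB already allocated is subtracted, which would explain giving up even at the first phase.
{code:java}
// Back-of-the-envelope arithmetic on the figures in the OOM message above.
// This is NOT the exact check in HashAggTemplate.spillIfNeeded(); it only
// illustrates the scale of the planned reservations vs. the remaining limit.
public class HashAggOomNumbers {
  public static void main(String[] args) {
    long partitions     = 16;
    long estBatchSize   = 26_673_152L;   // "Estimated batch size"
    long plannedBatches = 2;             // "Planned batches"
    long memoryLimit    = 856_896_307L;  // "Memory limit" (~817 MiB)
    long allocated      = 377_618_432L;  // "so far allocated" (~360 MiB)

    long headroom      = memoryLimit - allocated;          // ~457 MiB still available
    long perPartition  = estBatchSize * plannedBatches;    // ~51 MiB per partition
    long allPartitions = perPartition * partitions;        // ~814 MiB across 16 partitions

    System.out.printf("headroom            = %,d%n", headroom);
    System.out.printf("per-partition need  = %,d%n", perPartition);
    System.out.printf("all-partitions need = %,d (exceeds headroom: %b)%n",
        allPartitions, allPartitions > headroom);
  }
}
{code}
So the planned per-partition reservations by themselves come close to the whole operator limit, even though less than half of it had actually been allocated when the exception was thrown.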


> TPCH Queries regressed and OOM when run concurrency test
> --------------------------------------------------------
>
>                 Key: DRILL-6374
>                 URL: https://issues.apache.org/jira/browse/DRILL-6374
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Functions - Drill
>    Affects Versions: 1.14.0
>         Environment: RHEL 7
>            Reporter: Dechang Gu
>            Assignee: Vitalii Diravka
>            Priority: Critical
>             Fix For: 1.14.0
>
>         Attachments: TPCH_09_2_id_2517381b-1a61-3db5-40c3-4463bd421365.json, 
> TPCH_09_2_id_2517497b-d4da-dab6-6124-abde5804a25f.json
>
>
> Running the TPCH regression test on Apache Drill 1.14.0 master commit 
> 6fcaf4268eddcb09010b5d9c5dfb3b3be5c3f903 (DRILL-6173), most of the queries 
> regressed. In particular, TPC-H Query 9 takes about 4x as long (36 sec vs 
> 8.6 sec) compared to the run against the parent commit 
> (9173308710c3decf8ff745493ad3e85ccdaf7c37).
> Further, in the concurrency test for this commit, with 48 clients each 
> running 16 TPCH queries (768 queries in total) and 
> planner.width.max_per_node=5, some queries hit OOM, causing 266 queries to 
> fail, whereas on the parent commit all 768 queries completed successfully.
>  
> Profiles for TPCH_09 from the regression tests are attached:
>  * Failing commit: [^TPCH_09_2_id_2517381b-1a61-3db5-40c3-4463bd421365.json]
>  * Parent commit: [^TPCH_09_2_id_2517497b-d4da-dab6-6124-abde5804a25f.json]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
