[ https://issues.apache.org/jira/browse/HIVE-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16490475#comment-16490475 ]

zhiwen wang commented on HIVE-15097:
------------------------------------

Turning off map join should make your query run normally:

 

hive.auto.convert.join=false

hive.ignore.mapjoin.hint=true

(the second property must be true, not false, so that explicit MAPJOIN hints are ignored as well)
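A sketch of how the full workaround might look in a Hive CLI session (table and column names are taken from the reporter's query below; `hive.mapjoin.localtask.max.memory.usage` is an optional extra knob, assumed here, if you would rather keep map joins and give the local task more headroom):

```sql
-- Disable map-join conversion so the join runs as a common (shuffle) join:
SET hive.auto.convert.join=false;   -- no automatic conversion to map join
SET hive.ignore.mapjoin.hint=true;  -- also ignore explicit /*+ MAPJOIN(...) */ hints

-- Re-run the reporter's CTAS. With map join off, the side-table hash table
-- is never built in the local task, so the GC-overhead OOM cannot occur there:
CREATE TABLE user_tag_pre AS
SELECT A.userid,
       A.tag,
       A.single_tag_sumcount / B.sum_tag_sumcount_avg AS pre
FROM user_tag_sumcount A
JOIN user_tag_avg B
  ON A.userid = B.userid;
```

The trade-off is that a shuffle join is slower than a map join when one side is small, so an alternative is to leave map join enabled and raise the local task's heap (or lower hive.mapjoin.localtask.max.memory.usage) instead.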

> Error: java.lang.RuntimeException: Error in configuring object  Caused by: 
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-15097
>                 URL: https://issues.apache.org/jira/browse/HIVE-15097
>             Project: Hive
>          Issue Type: Bug
>         Environment: hive 1.2.1
>            Reporter: chen cong
>            Priority: Major
>
> hive> CREATE TABLE user_tag_pre AS
>     > SELECT A.userid , A.tag , A.single_tag_sumcount/B.sum_tag_sumcount_avg as pre
>     > FROM  user_tag_sumcount A , user_tag_avg B
>     > WHERE A.userid = B.userid;
> Query ID = ubuntu_20161101091124_753d511e-c7b8-4d31-bf97-cf0c04203ddc
> Total jobs = 1
> Execution log at: /tmp/ubuntu/ubuntu_20161101091124_753d511e-c7b8-4d31-bf97-cf0c04203ddc.log
> 2016-11-01 09:11:28     Starting to launch local task to process map join;      maximum memory = 477626368
> 2016-11-01 09:11:30     Processing rows:        200000  Hashtable size: 199999  Memory usage:   106513696       percentage:     0.223
> 2016-11-01 09:11:31     Processing rows:        300000  Hashtable size: 299999  Memory usage:   92688064        percentage:     0.194
> 2016-11-01 09:11:31     Processing rows:        400000  Hashtable size: 399999  Memory usage:   152547912       percentage:     0.319
> 2016-11-01 09:11:31     Processing rows:        500000  Hashtable size: 499999  Memory usage:   149501320       percentage:     0.313
> 2016-11-01 09:11:32     Processing rows:        600000  Hashtable size: 599999  Memory usage:   170749448       percentage:     0.357
> 2016-11-01 09:11:32     Dump the side-table for tag: 1 with group count: 675226 into file: file:/tmp/ubuntu/9b365431-aea2-41fa-bebe-602435a22612/hive_2016-11-01_09-11-24_333_5456525982690482779-1/-local-10003/HashTable-Stage-4/MapJoin-mapfile01--.hashtable
> 2016-11-01 09:11:33     Uploaded 1 File to: file:/tmp/ubuntu/9b365431-aea2-41fa-bebe-602435a22612/hive_2016-11-01_09-11-24_333_5456525982690482779-1/-local-10003/HashTable-Stage-4/MapJoin-mapfile01--.hashtable (22302233 bytes)
> 2016-11-01 09:11:33     End of local task; Time Taken: 5.334 sec.
> Execution completed successfully
> MapredLocal task succeeded
> Launching Job 1 out of 1
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_1477989556419_0006, Tracking URL = http://master:8088/proxy/application_1477989556419_0006/
> Kill Command = /home/ubuntu/cloud/hadoop-2.7.2/bin/hadoop job  -kill job_1477989556419_0006
> Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 0
> 2016-11-01 09:11:43,429 Stage-4 map = 0%,  reduce = 0%
> 2016-11-01 09:12:44,257 Stage-4 map = 0%,  reduce = 0%
> 2016-11-01 09:13:44,706 Stage-4 map = 0%,  reduce = 0%
> 2016-11-01 09:14:32,819 Stage-4 map = 100%,  reduce = 0%
> Ended Job = job_1477989556419_0006 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1477989556419_0006_m_000000 (and more) from job job_1477989556419_0006
> Task with the most failures(4):
> -----
> Task ID:
>   task_1477989556419_0006_m_000000
> URL:
>   http://master:8088/taskdetails.jsp?jobid=job_1477989556419_0006&tipid=task_1477989556419_0006_m_000000
> -----
> -----
> Diagnostic Messages for this Task:
> Error: java.lang.RuntimeException: Error in configuring object
>         at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
>         at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
>         ... 9 more
> Caused by: java.lang.RuntimeException: Error in configuring object
>         at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
>         at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
>         at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
>         ... 14 more
> Caused by: java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
>         ... 17 more
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:82)
>         at org.apache.hadoop.hive.ql.exec.mr.HashTableLoader.load(HashTableLoader.java:98)
>         at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:288)
>         at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:173)
>         at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:169)
>         at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieve(ObjectCache.java:55)
>         at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieveAsync(ObjectCache.java:63)
>         at org.apache.hadoop.hive.ql.exec.MapJoinOperator.initializeOp(MapJoinOperator.java:166)
>         at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:362)
>         at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:481)
>         at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:438)
>         at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:131)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
>         at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
>         at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
>         at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>         at java.security.AccessController.doPrivileged(Native Method)
> FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched:
> Stage-Stage-4: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> How can I solve this? I need help!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
