[ 
https://issues.apache.org/jira/browse/HIVE-7879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14110348#comment-14110348
 ] 

yhzhtk commented on HIVE-7879:
------------------------------

Does Hive translate the "group by" clause (or count(*)) into some division operation internally?

I don't use any division in the query, so why do I get the error "/ by zero"?
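
To make the question concrete: my guess (only an assumption, I have not read the failing code) is that the division is hidden in the framework rather than in my SQL, for example in the hash partitioning that assigns each group-by key to a reducer. A minimal Java sketch of that arithmetic (the class and method names are illustrative, modeled on Hadoop's HashPartitioner, not the actual Hive 0.12 code path that failed here):

{code:java}
// Illustrative sketch only: where a "hidden" integer division can live in a
// GROUP BY job even though the SQL itself contains no '/'.
// Modeled on the shape of Hadoop's HashPartitioner#getPartition; this is an
// assumption about a possible source, not the code path proven to have failed.
public class HiddenDivisionSketch {

    // Assigns a group-by key to a reducer; the modulo divides by the reducer count.
    static int getPartition(Object key, int numReduceTasks) {
        // Throws java.lang.ArithmeticException("/ by zero") if numReduceTasks == 0.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        System.out.println(getPartition("1-3年", 1)); // ok: partition 0
        System.out.println(getPartition("1-3年", 0)); // ArithmeticException: / by zero
    }
}
{code}

Note that both of my runs report one reducer, so this particular divisor was not zero here; the sketch only shows that "/ by zero" can come from framework arithmetic the query never mentions.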

> Executing the same SQL twice: the first run succeeds, but the second fails with "/ by zero"
> --------------------------------------------------------------------------------------------
>
>                 Key: HIVE-7879
>                 URL: https://issues.apache.org/jira/browse/HIVE-7879
>             Project: Hive
>          Issue Type: Bug
>         Environment: CentOS release 6.2 (Final)
> Hadoop 2.2.0
> Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
> Compiled by hortonmu on 2013-10-07T06:28Z
> Compiled with protoc 2.5.0
> From source with checksum 79e53ce7994d1628b240f09af91e1af4
> hive-0.12.0
>            Reporter: yhzhtk
>
> The Hive SQL is:
> {quote}
>     select working_experience, count(*)
>     from search
>     where
>         log_date = 20140825
>         and ua not rlike '^.*(HttpClient|Jakarta|scrapy|bot|spider|wget).*$'
>         and working_experience in ('不限', '应届毕业生', '1年以下', '1-3年', '3-5年', '5-10年', '10年以上', '')
>     group by working_experience
> {quote}
> *Executing the SQL twice within a short time on the same environment (data unchanged): the first run succeeds and prints the correct result, but the second run fails with an error:*
>     
> The error is:
> {quote}
>     Diagnostic Messages for this Task: 
>     / by zero
> {quote}
> *The first run (successful); the full output is:*
> {quote}
> > select working_experience,count(*) from search where log_date = 20140825 
> > and ua not rlike '^.*(HttpClient|Jakarta|scrapy|bot|spider|wget).*$' and 
> > working_experience in 
> > ('不限','应届毕业生','1年以下','1-3年','3-5年','5-10年','10年以上','') group by 
> > working_experience; 
> Total MapReduce jobs = 1 
> Launching Job 1 out of 1 
> Number of reduce tasks not specified. Estimated from input data size: 1 
> In order to change the average load for a reducer (in bytes): 
> set hive.exec.reducers.bytes.per.reducer=<number> 
> In order to limit the maximum number of reducers: 
> set hive.exec.reducers.max=<number> 
> In order to set a constant number of reducers: 
> set mapred.reduce.tasks=<number> 
> Starting Job = job_1404899662896_1182, Tracking URL = 
> http://rsyslog-16-3:8088/proxy/application_1404899662896_1182/ 
> Kill Command = /apps/hadoop_mapreduce/hadoop-2.2.0/bin/hadoop job -kill 
> job_1404899662896_1182 
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1 
> 2014-08-26 11:41:45,199 Stage-1 map = 0%, reduce = 0% 
> 2014-08-26 11:41:54,522 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.3 
> sec 
> 2014-08-26 11:41:55,557 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.3 
> sec 
> 2014-08-26 11:41:56,600 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.3 
> sec 
> 2014-08-26 11:41:57,639 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.3 
> sec 
> 2014-08-26 11:41:58,677 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.3 
> sec 
> 2014-08-26 11:41:59,711 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.3 
> sec 
> 2014-08-26 11:42:00,751 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.3 
> sec 
> 2014-08-26 11:42:02,018 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 
> 5.91 sec 
> 2014-08-26 11:42:03,055 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 
> 5.91 sec 
> 2014-08-26 11:42:04,099 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 
> 5.91 sec 
> MapReduce Total cumulative CPU time: 5 seconds 910 msec 
> Ended Job = job_1404899662896_1182 
> MapReduce Jobs Launched: 
> Job 0: Map: 1 Reduce: 1 Cumulative CPU: 5.91 sec HDFS Read: 22122871 HDFS 
> Write: 104 SUCCESS 
> Total MapReduce CPU Time Spent: 5 seconds 910 msec 
> OK 
> 50339 
> 1-3年 1949 
> 10年以上 60 
> 1年以下 360 
> 3-5年 689 
> 5-10年 328 
> 不限 1196 
> 应届毕业生 961 
> Time taken: 26.135 seconds, Fetched: 8 row(s)
> {quote}
> *The second run (failed); the full output is:*
> {quote}
> > select working_experience,count(*) from search where log_date = 20140825 
> > and ua not rlike '^.*(HttpClient|Jakarta|scrapy|bot|spider|wget).*$' and 
> > working_experience in 
> > ('不限','应届毕业生','1年以下','1-3年','3-5年','5-10年','10年以上','') group by 
> > working_experience 
> > ; 
> Total MapReduce jobs = 1 
> Launching Job 1 out of 1 
> Number of reduce tasks not specified. Estimated from input data size: 1 
> In order to change the average load for a reducer (in bytes): 
> set hive.exec.reducers.bytes.per.reducer=<number> 
> In order to limit the maximum number of reducers: 
> set hive.exec.reducers.max=<number> 
> In order to set a constant number of reducers: 
> set mapred.reduce.tasks=<number> 
> Starting Job = job_1404899662896_1183, Tracking URL = 
> http://rsyslog-16-3:8088/proxy/application_1404899662896_1183/ 
> Kill Command = /apps/hadoop_mapreduce/hadoop-2.2.0/bin/hadoop job -kill 
> job_1404899662896_1183 
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1 
> 2014-08-26 11:42:20,923 Stage-1 map = 0%, reduce = 0% 
> 2014-08-26 11:42:38,491 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:39,525 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:40,563 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:41,596 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:42,644 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:43,677 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:44,712 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:45,753 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:46,786 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:47,817 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:48,859 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:49,896 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:50,929 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:51,962 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:53,000 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.26 
> sec 
> 2014-08-26 11:42:54,037 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 
> 4.26 sec 
> 2014-08-26 11:42:55,073 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 
> 4.26 sec 
> MapReduce Total cumulative CPU time: 4 seconds 260 msec 
> Ended Job = job_1404899662896_1183 with errors 
> Error during job, obtaining debugging information... 
> Examining task ID: task_1404899662896_1183_m_000000 (and more) from job 
> job_1404899662896_1183 
> Task with the most failures(4): 
> ----- 
> Task ID: 
> task_1404899662896_1183_r_000000 
> URL: 
> http://rsyslog-16-3:8088/taskdetails.jsp?jobid=job_1404899662896_1183&tipid=task_1404899662896_1183_r_000000
>  
> ----- 
> Diagnostic Messages for this Task: 
> / by zero 
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask 
> MapReduce Jobs Launched: 
> Job 0: Map: 1 Reduce: 1 Cumulative CPU: 4.26 sec HDFS Read: 22122871 HDFS 
> Write: 0 FAIL 
> Total MapReduce CPU Time Spent: 4 seconds 260 msec
> {quote}
> Is this a bug? How can I resolve it?
> Thanks very much!



--
This message was sent by Atlassian JIRA
(v6.2#6252)
