Hi Users,

*My cluster (1 + 8) configuration*:
RAM : 32 GB each
HDFS : 1.5 TB SSD
CPU : 8 cores each
-----------------------------------------------

I am trying to query a 300 GB table, but only SELECT queries succeed. For every other query I get the following exception:

Total jobs = 1
Stage-1 is selected by condition resolver.
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 183
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1416831990090_0005, Tracking URL = http://master:8088/proxy/application_1416831990090_0005/
Kill Command = /root/hadoop/bin/hadoop job -kill job_1416831990090_0005
Hadoop job information for Stage-1: number of mappers: 679; number of reducers: 183
2014-11-24 19:43:01,523 Stage-1 map = 0%, reduce = 0%
2014-11-24 19:43:22,730 Stage-1 map = 53%, reduce = 0%, Cumulative CPU 625.19 sec
2014-11-24 19:43:23,778 Stage-1 map = 100%, reduce = 100%
MapReduce Total cumulative CPU time: 10 minutes 25 seconds 190 msec
Ended Job = job_1416831990090_0005 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1416831990090_0005_m_000005 (and more) from job job_1416831990090_0005
Examining task ID: task_1416831990090_0005_m_000042 (and more) from job job_1416831990090_0005
Examining task ID: task_1416831990090_0005_m_000035 (and more) from job job_1416831990090_0005
Examining task ID: task_1416831990090_0005_m_000065 (and more) from job job_1416831990090_0005
Examining task ID: task_1416831990090_0005_m_000002 (and more) from job job_1416831990090_0005
Examining task ID: task_1416831990090_0005_m_000007 (and more) from job job_1416831990090_0005
Examining task ID: task_1416831990090_0005_m_000058 (and more) from job job_1416831990090_0005
Examining task ID: task_1416831990090_0005_m_000043 (and more) from job job_1416831990090_0005
Task with the most failures(4):
-----
Task ID:
  task_1416831990090_0005_m_000005

URL:
  http://master:8088/taskdetails.jsp?jobid=job_1416831990090_0005&tipid=task_1416831990090_0005_m_000005
-----
Diagnostic Messages for this Task:
Container launch failed for container_1416831990090_0005_01_000112 : java.lang.IllegalArgumentException: java.net.UnknownHostException: slave6
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418)
        at org.apache.hadoop.security.SecurityUtil.setTokenService(SecurityUtil.java:397)
        at org.apache.hadoop.yarn.util.ConverterUtils.convertFromYarn(ConverterUtils.java:233)
        at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:211)
        at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:189)
        at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:110)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: slave6
        ... 12 more

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 679  Reduce: 183  Cumulative CPU: 625.19 sec  HDFS Read: 0  HDFS Write: 0  FAIL
Total MapReduce CPU Time Spent: 10 minutes 25 seconds 190 msec

Please help me fix this issue.

Thanks,
Amit
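P.S. The root symptom appears to be java.net.UnknownHostException: slave6, i.e. the MapReduce ApplicationMaster cannot resolve the hostname slave6 when it tries to launch a container there. Below is a minimal resolution check, to be run on the master and on every node. It is only a sketch; the IP address shown is a placeholder for illustration, not my real address:

    # Does the name from the stack trace resolve at all
    # (via /etc/hosts or DNS, per nsswitch.conf)?
    getent hosts slave6

    # Is the resolved address actually reachable?
    ping -c 1 slave6

    # If getent prints nothing, the usual fix is an /etc/hosts entry
    # on every cluster node (placeholder IP shown):
    #   192.168.1.16   slave6

If getent fails on any node, that node cannot launch containers on slave6, which would match the "Container launch failed" message above.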