[ https://issues.apache.org/jira/browse/MAPREDUCE-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308509#comment-14308509 ]
Mohammad Kamrul Islam commented on MAPREDUCE-6240:
--------------------------------------------------

[~jira.shegalov] if this message "Please check your configuration for mapreduce.framework.name and the correspond server addresses." is shown, please also include the current values of those properties. That will help users figure out whether their configuration is actually taking effect. (A rough sketch of such a message follows the quoted issue below.)

> Hadoop client displays confusing error message
> ----------------------------------------------
>
>                 Key: MAPREDUCE-6240
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6240
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: client
>            Reporter: Mohammad Kamrul Islam
>            Assignee: Mohammad Kamrul Islam
>         Attachments: MAPREDUCE-6240-gera.001.patch, MAPREDUCE-6240-gera.001.patch, MAPREDUCE-6240-gera.002.patch, MAPREDUCE-6240.1.patch
>
>
> The Hadoop client often throws an exception with "java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses".
> This is a misleading, generic message for any cluster-initialization problem, and it can take many hours of debugging to identify the root cause. A more informative error message would make such problems much faster to resolve.
> In one such instance, the Oozie log showed the following exception while the root cause was a ClassNotFoundException that the Hadoop client did not include in the exception.
> {noformat}
> JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:412)
>     at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:392)
>     at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:979)
>     at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1134)
>     at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:228)
>     at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
>     at org.apache.oozie.command.XCommand.call(XCommand.java:281)
>     at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:323)
>     at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:252)
>     at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
>     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
>     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
>     at org.apache.hadoop.mapred.JobClient.init(JobClient.java:470)
>     at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:449)
>     at org.apache.oozie.service.HadoopAccessorService$1.run(HadoopAccessorService.java:372)
>     at org.apache.oozie.service.HadoopAccessorService$1.run(HadoopAccessorService.java:370)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>     at org.apache.oozie.service.HadoopAccessorService.createJobClient(HadoopAccessorService.java:379)
>     at org.apache.oozie.action.hadoop.JavaActionExecutor.createJobClient(JavaActionExecutor.java:1185)
>     at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:927)
>     ... 10 more
> {noformat}
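For illustration, a minimal sketch (not the attached patches) of the kind of message enrichment suggested above. The class and helper names are made up for this example, and yarn.resourcemanager.address is only assumed here as one of the "corresponding server addresses"; the actual change would presumably go into Cluster#initialize, where the quoted message originates per the stack trace above.

{code:java}
// Minimal sketch only, not the attached patches: shows the suggested
// enrichment, i.e. reporting the effective values of the properties
// the generic message tells the user to check.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

public class ClusterInitMessageSketch {

  // Hypothetical helper (not part of Hadoop): builds the exception that
  // Cluster.initialize() could throw when no ClientProtocolProvider accepts
  // the configuration, with the effective property values appended.
  static IOException cannotInitializeCluster(Configuration conf) {
    String frameworkKey = "mapreduce.framework.name";
    // One example of a "corresponding server address"; an assumption for this sketch.
    String rmAddressKey = "yarn.resourcemanager.address";
    return new IOException("Cannot initialize Cluster."
        + " Please check your configuration for " + frameworkKey
        + " and the correspond server addresses."
        + " Current values: " + frameworkKey + "=" + conf.get(frameworkKey)
        + ", " + rmAddressKey + "=" + conf.get(rmAddressKey));
  }

  public static void main(String[] args) {
    // A plain Configuration may resolve these keys to null; even that already
    // tells the user the client is not seeing the expected *-site.xml files.
    Configuration conf = new Configuration();
    System.out.println(cannotInitializeCluster(conf).getMessage());
  }
}
{code}

Printing the effective values directly in the exception makes it obvious when the client is picking up an empty or unexpected configuration instead of the intended one.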