[ https://issues.apache.org/jira/browse/HIVE-5857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14029325#comment-14029325 ]

Edward Capriolo commented on HIVE-5857:
---------------------------------------

{code}
} catch (FileNotFoundException fnf) {
  // happens. e.g.: no reduce work.
  LOG.debug("No plan file found: "+path);
  return null;
} ...
{code}

Can we remove this code? It bothers me; it is not self-documenting at all. Can 
we use if statements to determine when the file should be there and when it 
should not?

Something like:
{code}
if (job.hasNoReduceWork()) {
  return null;
} else {
  throw new RuntimeException("work should be found but was not: " + expectedPathToFile);
}
{code}
> Reduce tasks do not work in uber mode in YARN
> ---------------------------------------------
>
>                 Key: HIVE-5857
>                 URL: https://issues.apache.org/jira/browse/HIVE-5857
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>    Affects Versions: 0.12.0, 0.13.0, 0.13.1
>            Reporter: Adam Kawa
>            Assignee: Adam Kawa
>            Priority: Critical
>              Labels: plan, uber-jar, uberization, yarn
>             Fix For: 0.13.0
>
>         Attachments: HIVE-5857.1.patch.txt, HIVE-5857.2.patch, 
> HIVE-5857.3.patch
>
>
> A Hive query fails when it tries to run a reduce task in uber mode in YARN.
> A NullPointerException is thrown in the ExecReducer.configure method because 
> the plan file (reduce.xml) for the reduce task is not found.
> The Utilities.getBaseWork method is expected to return a BaseWork object, but 
> it returns null due to a FileNotFoundException.
> {code}
> // org.apache.hadoop.hive.ql.exec.Utilities
> public static BaseWork getBaseWork(Configuration conf, String name) {
>   ...
>     try {
>     ...
>       if (gWork == null) {
>         Path localPath;
>         if (ShimLoader.getHadoopShims().isLocalMode(conf)) {
>           localPath = path;
>         } else {
>           localPath = new Path(name);
>         }
>         InputStream in = new FileInputStream(localPath.toUri().getPath());
>         BaseWork ret = deserializePlan(in);
>         ....
>       }
>       return gWork;
>     } catch (FileNotFoundException fnf) {
>       // happens. e.g.: no reduce work.
>       LOG.debug("No plan file found: "+path);
>       return null;
>     } ...
> }
> {code}
> It happens because the ShimLoader.getHadoopShims().isLocalMode(conf) method 
> returns true: immediately before running a reduce task, 
> org.apache.hadoop.mapred.LocalContainerLauncher changes the configuration to 
> local mode ("mapreduce.framework.name" is changed from "yarn" to "local").
> Map tasks, on the other hand, run successfully because their configuration is 
> not changed and still remains "yarn".
> {code}
> // org.apache.hadoop.mapred.LocalContainerLauncher
> private void runSubtask(..) {
>   ...
>   conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.LOCAL_FRAMEWORK_NAME);
>   conf.set(MRConfig.MASTER_ADDRESS, "local");  // bypass shuffle
>   ReduceTask reduce = (ReduceTask)task;
>   reduce.setConf(conf);          
>   reduce.run(conf, umbilical);
> }
> {code}
> A super quick fix could be just an additional if-branch, where we check 
> whether we are running a reduce task in uber mode, and then look for the plan 
> file in a different location.
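> A rough sketch of what that if-branch might look like (illustrative only: 
> detecting uber mode via "mapreduce.job.ubertask.enable" is an assumption, not 
> a verified signal, and the real fix may need a different check):
> {code}
> // Sketch only. In uber mode, LocalContainerLauncher flips
> // "mapreduce.framework.name" to "local", so isLocalMode() is misleading;
> // fall back to the localized plan path in that case.
> boolean uberMode = conf.getBoolean("mapreduce.job.ubertask.enable", false); // assumed signal
> Path localPath;
> if (ShimLoader.getHadoopShims().isLocalMode(conf) && !uberMode) {
>   localPath = path;
> } else {
>   localPath = new Path(name);
> }
> {code}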
> *Java stacktrace*
> {code}
> 2013-11-20 00:50:56,862 INFO [uber-SubtaskRunner] 
> org.apache.hadoop.hive.ql.exec.Utilities: No plan file found: 
> hdfs://namenode.c.lon.spotify.net:54310/var/tmp/kawaa/hive_2013-11-20_00-50-43_888_3938384086824086680-2/-mr-10003/e3caacf6-15d6-4987-b186-d2906791b5b0/reduce.xml
> 2013-11-20 00:50:56,862 WARN [uber-SubtaskRunner] 
> org.apache.hadoop.mapred.LocalContainerLauncher: Exception running local 
> (uberized) 'child' : java.lang.RuntimeException: Error in configuring object
>       at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
>       at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
>       at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>       at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:427)
>       at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>       at 
> org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.runSubtask(LocalContainerLauncher.java:340)
>       at 
> org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.run(LocalContainerLauncher.java:225)
>       at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
>       ... 7 more
> Caused by: java.lang.NullPointerException
>       at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:116)
>       ... 12 more
> 2013-11-20 00:50:56,862 INFO [uber-SubtaskRunner] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from 
> attempt_1384392632998_34791_r_000000_0
> 2013-11-20 00:50:56,862 INFO [uber-SubtaskRunner] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt 
> attempt_1384392632998_34791_r_000000_0 is : 0.0
> 2013-11-20 00:50:56,862 INFO [uber-SubtaskRunner] 
> org.apache.hadoop.mapred.Task: Runnning cleanup for the task
> 2013-11-20 00:50:56,863 INFO [uber-SubtaskRunner] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from 
> attempt_1384392632998_34791_r_000000_0: java.lang.RuntimeException: Error in 
> configuring object
>       at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
>       at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
>       at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>       at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:427)
>       at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>       at 
> org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.runSubtask(LocalContainerLauncher.java:340)
>       at 
> org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.run(LocalContainerLauncher.java:225)
>       at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
>       ... 7 more
> Caused by: java.lang.NullPointerException
>       at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:116)
>       ... 12 more
> 2013-11-20 00:50:56,863 INFO [uber-SubtaskRunner] 
> org.apache.hadoop.mapred.LocalContainerLauncher: Processing the event 
> EventType: CONTAINER_REMOTE_CLEANUP for container 
> container_1384392632998_34791_01_000001 taskAttempt 
> attempt_1384392632998_34791_m_000000_0
> 2013-11-20 00:50:56,863 INFO [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
> report from attempt_1384392632998_34791_r_000000_0: 
> java.lang.RuntimeException: Error in configuring object
>       at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
>       at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
>       at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>       at 
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:427)
>       at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>       at 
> org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.runSubtask(LocalContainerLauncher.java:340)
>       at 
> org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.run(LocalContainerLauncher.java:225)
>       at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
>       ... 7 more
> Caused by: java.lang.NullPointerException
>       at 
> org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:116)
>       ... 12 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)
