[ https://issues.apache.org/jira/browse/HIVE-7210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027257#comment-14027257 ]

Jason Dere commented on HIVE-7210:
----------------------------------

I suspect this problem was introduced by HIVE-6888: if one of the Driver 
instances calls CombineHiveInputFormat.getSplits() (and thus 
HiveInputFormat.getSplits()), the global Utilities.gWorkMap is cleared out, 
which removes the cached map/reduce work not only for that query but for any 
other queries being executed by other threads. If any of those other threads 
then calls getSplits(), gWorkMap no longer holds its map/reduce work, and the 
current logic doesn't appear to allow the caller to load the plan if it isn't 
already cached.
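A minimal sketch of the suspected race (simplified stand-ins, not Hive's actual 
code): one global plan cache shared by every Driver thread, where one thread's 
getSplits() wipes out the cached work of every other in-flight query.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PlanCacheRace {
    // Stand-in for Utilities.gWorkMap: one map shared across all queries.
    static final Map<String, Object> gWorkMap = new ConcurrentHashMap<>();

    // Stand-in for HiveInputFormat.getSplits(): clears the WHOLE cache,
    // not just the entries belonging to the current query.
    static void getSplits(String myPlanPath) {
        Object myWork = gWorkMap.get(myPlanPath); // fine for this thread...
        gWorkMap.clear();                         // ...but wipes every other query's work too
    }

    public static void main(String[] args) throws InterruptedException {
        gWorkMap.put("query1/map.xml", new Object());
        gWorkMap.put("query2/map.xml", new Object());

        Thread t1 = new Thread(() -> getSplits("query1/map.xml"));
        t1.start();
        t1.join();

        // Query 2's later lookup now misses -> "No plan file found" -> NPE downstream.
        System.out.println(gWorkMap.get("query2/map.xml")); // prints null
    }
}
{code}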
Possible fixes might be:
1) getSplits() should only remove the map/reduce work for the current query, 
rather than removing all cached work.
2) Utilities.getBaseWork() should be modified to allow the map/reduce work to 
be loaded if it is not already cached.
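Roughly what those two options could look like (method names and the 
deserializePlanFile() helper below are hypothetical, just to make the shapes 
concrete):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PossibleFixes {
    static final Map<String, Object> gWorkMap = new ConcurrentHashMap<>();

    // Option 1: remove only the entries belonging to the current query
    // (keyed here by its scratch dir) instead of clearing the whole map.
    static void clearWorkMapForQuery(String queryScratchDir) {
        gWorkMap.keySet().removeIf(path -> path.startsWith(queryScratchDir));
    }

    // Option 2: on a cache miss, fall back to loading the plan from its
    // plan file rather than returning null.
    static Object getBaseWork(String planPath) {
        Object work = gWorkMap.get(planPath);
        if (work == null) {
            work = deserializePlanFile(planPath); // hypothetical loader
            gWorkMap.put(planPath, work);
        }
        return work;
    }

    // Hypothetical stand-in for deserializing map.xml/reduce.xml from the scratch dir.
    static Object deserializePlanFile(String planPath) {
        return new Object();
    }
}
{code}

Either way, the invariant we want is that one query's split computation never 
invalidates work cached for another thread's query.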

> NPE with "No plan file found" when running Driver instances on multiple threads
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-7210
>                 URL: https://issues.apache.org/jira/browse/HIVE-7210
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Jason Dere
>            Assignee: Jason Dere
>
> Informatica has a multithreaded application running multiple instances of 
> CLIDriver. When running concurrent queries, it sometimes hits the following 
> error:
> {noformat}
> 2014-05-30 10:24:59 <pool-10-thread-1> INFO: Hadoop_Native_Log :INFO org.apache.hadoop.hive.ql.exec.Utilities: No plan file found: hdfs://ICRHHW21NODE1:8020/tmp/hive-qamercury/hive_2014-05-30_10-24-57_346_890014621821056491-2/-mr-10002/6169987c-3263-4737-b5cb-38daab882afb/map.xml
> 2014-05-30 10:24:59 <pool-10-thread-1> INFO: Hadoop_Native_Log :INFO org.apache.hadoop.mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/qamercury/.staging/job_1401360353644_0078
> 2014-05-30 10:24:59 <pool-10-thread-1> INFO: Hadoop_Native_Log :ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed with exception 'java.lang.NullPointerException(null)'
> java.lang.NullPointerException
>                 at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
>                 at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:271)
>                 at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
>                 at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
>                 at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
>                 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>                 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>                 at java.security.AccessController.doPrivileged(Native Method)
>                 at javax.security.auth.Subject.doAs(Subject.java:415)
>                 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
>                 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>                 at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
>                 at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
>                 at java.security.AccessController.doPrivileged(Native Method)
>                 at javax.security.auth.Subject.doAs(Subject.java:415)
>                 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
>                 at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
>                 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
>                 at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420)
>                 at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
>                 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>                 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>                 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1504)
>                 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1271)
>                 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1089)
>                 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:912)
>                 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
>                 at com.informatica.platform.dtm.executor.hive.impl.AbstractHiveDriverBaseImpl.run(AbstractHiveDriverBaseImpl.java:86)
>                 at com.informatica.platform.dtm.executor.hive.MHiveDriver.executeQuery(MHiveDriver.java:126)
>                 at com.informatica.platform.dtm.executor.hive.task.impl.HiveTaskHandlerImpl.executeQuery(HiveTaskHandlerImpl.java:358)
>                 at com.informatica.platform.dtm.executor.hive.task.impl.HiveTaskHandlerImpl.executeScript(HiveTaskHandlerImpl.java:247)
>                 at com.informatica.platform.dtm.executor.hive.task.impl.HiveTaskHandlerImpl.executeMainScript(HiveTaskHandlerImpl.java:194)
>                 at com.informatica.platform.ldtm.executor.common.workflow.taskhandler.impl.BaseTaskHandlerImpl.run(BaseTaskHandlerImpl.java:126)
>                 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>                 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>                 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>                 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>                 at java.lang.Thread.run(Thread.java:744)
> {noformat}


