[ https://issues.apache.org/jira/browse/HIVE-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139057#comment-16139057 ]

Peter Vary commented on HIVE-17370:
-----------------------------------

Another example:
https://builds.apache.org/job/PreCommit-HIVE-Build/6500/testReport/org.apache.hadoop.hive.cli/TestBeeLineDriver/testCliDriver_create_merge_compressed_/
{code}
PREHOOK: query: select count(1) from tgt_rc_merge_test
PREHOOK: type: QUERY
PREHOOK: Input: test_db_create_merge_compressed@tgt_rc_merge_test
PREHOOK: Output: file:/home/hiveptest/35.192.53.198-hiveptest-0/apache-github-source-source/itests/qtest/target/tmp/local_base/scratch/b28ee209-0c8d-43f1-954b-c18175e0b87b/hive_2017-08-23_03-41-54_963_3864419413885540377-21/-mr-10001
Query ID = hiveptest_20170823034154_62ee5e77-f752-4aa9-b09e-dd2edd778f89
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
ExitCodeException exitCode=1: chmod: cannot access ‘/tmp/hadoop/mapred/staging/user1047316188/.staging/job_local1047316188_0068/job.jar’: No such file or directory

        at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
        at org.apache.hadoop.util.Shell.run(Shell.java:869)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:771)
        at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:506)
        at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:487)
        at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:503)
        at org.apache.hadoop.mapreduce.JobResourceUploader.copyJar(JobResourceUploader.java:212)
        at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:166)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:97)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:192)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
        at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:415)
        at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2172)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1831)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1548)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1303)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1298)
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:252)
        at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91)
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:344)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Job Submission failed with exception 'org.apache.hadoop.util.Shell$ExitCodeException(chmod: cannot access ‘/tmp/hadoop/mapred/staging/user1047316188/.staging/job_local1047316188_0068/job.jar’: No such file or directory
)'
{code}
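
Both traces point at files vanishing from the shared /tmp/hadoop/mapred/staging root while a local job is being submitted. A minimal sketch of one possible mitigation, assuming the cause is concurrently running LocalJobRunner jobs colliding in that shared staging root: the Hadoop property {{mapreduce.jobtracker.staging.root.dir}} (default {{${hadoop.tmp.dir}/mapred/staging}}) is real, but the helper class below and where it gets wired into the test setup are only illustrative, not the actual fix in this patch.
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.hadoop.conf.Configuration;

// Hypothetical helper for illustration only: give each test JVM its own
// local MR staging root so parallel LocalJobRunner jobs cannot delete each
// other's job.jar / job.splitmetainfo files.
public class IsolatedStagingRootSketch {
  public static void configure(Configuration conf) throws Exception {
    // Unique scratch root per JVM instead of the shared /tmp/hadoop.
    Path stagingRoot = Files.createTempDirectory("mapred-staging-");
    // Standard Hadoop key; defaults to ${hadoop.tmp.dir}/mapred/staging,
    // which is the /tmp/hadoop/mapred/staging path seen in the traces above.
    conf.set("mapreduce.jobtracker.staging.root.dir", stagingRoot.toString());
  }
}
{code}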

> Some tests are failing with java.io.FileNotFoundException: File file:/tmp/hadoop/mapred/...
> -------------------------------------------------------------------------------------------
>
>                 Key: HIVE-17370
>                 URL: https://issues.apache.org/jira/browse/HIVE-17370
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0
>            Reporter: Peter Vary
>            Assignee: Peter Vary
>
> Mostly independent of the specific test being run, I found this flakiness in several tests:
> {code}
> 2017-08-21T06:34:26,916  WARN [Thread-7815] mapred.LocalJobRunner: job_local245886038_0060
> java.io.FileNotFoundException: File file:/tmp/hadoop/mapred/staging/hiveptest245886038/.staging/job_local245886038_0060/job.splitmetainfo does not exist
>       at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:635) ~[hadoop-common-2.8.0.jar:?]
>       at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:861) ~[hadoop-common-2.8.0.jar:?]
>       at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:625) ~[hadoop-common-2.8.0.jar:?]
>       at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:435) ~[hadoop-common-2.8.0.jar:?]
>       at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:51) ~[hadoop-mapreduce-client-core-2.8.0.jar:?]
>       at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:534) [hadoop-mapreduce-client-common-2.8.0.jar:?]
> {code}
> One example is:
> https://builds.apache.org/job/PreCommit-HIVE-Build/6469/testReport/org.apache.hadoop.hive.cli/TestAccumuloCliDriver/testCliDriver_accumulo_queries_/
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)