[ https://issues.apache.org/jira/browse/NUTCH-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417870#comment-16417870 ]

ASF GitHub Bot commented on NUTCH-2518:
---------------------------------------

Omkar20895 commented on a change in pull request #307: NUTCH-2518 Cleaning up 
the file system after a job failure.
URL: https://github.com/apache/nutch/pull/307#discussion_r177841518
 
 

 ##########
 File path: src/java/org/apache/nutch/crawl/CrawlDb.java
 ##########
 @@ -129,17 +129,23 @@ public void update(Path crawlDb, Path[] segments, boolean normalize,
       LOG.info("CrawlDb update: Merging segment data into db.");
     }
 
+    FileSystem fs = crawlDb.getFileSystem(getConf());
+    Path outPath = FileOutputFormat.getOutputPath(job);
     try {
-      int complete = job.waitForCompletion(true)?0:1;
+      boolean success = job.waitForCompletion(true);
+      if (!success) {
+        String message = "Crawl job did not succeed, job status:"
+            + job.getStatus().getState() + ", reason: "
+            + job.getStatus().getFailureInfo();
+        LOG.error(message);
+        cleanupAfterFailure(outPath, lock, fs);
+        throw new RuntimeException(message);
+      }
     } catch (IOException | InterruptedException | ClassNotFoundException e) {
-      FileSystem fs = crawlDb.getFileSystem(getConf());
-      LockUtil.removeLockFile(fs, lock);
-      Path outPath = FileOutputFormat.getOutputPath(job);
-      if (fs.exists(outPath))
-        fs.delete(outPath, true);
+      LOG.error("Crawl job failed ", e);
 
 Review comment:
   Why not move the cleanup subroutine to the class 
nutch/src/java/org/apache/nutch/util/NutchJob.java? That is the Nutch job 
utilities class, so I think it would be a more appropriate home. I have 
cross-checked that the cleanup subroutine would also be useful outside the 
CrawlDb, e.g. in the HostDb, the indexer, etc. 
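
For illustration only, a minimal sketch of what such a shared cleanup helper 
could look like if it were hosted in a utility class. The class name, method 
name, and signature below are assumptions, not the actual patch; only 
LockUtil.removeLockFile() and the delete of the job output path come from the 
diff above.

  import java.io.IOException;

  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.nutch.util.LockUtil;

  /** Hypothetical shared helper; name and placement are assumptions only. */
  public class JobCleanupUtil {

    /**
     * Removes the temporary job output and releases the lock file after a
     * failed job, so that later runs are not blocked by stale state.
     */
    public static void cleanupAfterFailure(Path outPath, Path lock, FileSystem fs)
        throws IOException {
      try {
        if (fs.exists(outPath)) {
          fs.delete(outPath, true); // recursive delete of the partial output
        }
      } finally {
        LockUtil.removeLockFile(fs, lock); // always release the lock file
      }
    }
  }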

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Must check return value of job.waitForCompletion()
> --------------------------------------------------
>
>                 Key: NUTCH-2518
>                 URL: https://issues.apache.org/jira/browse/NUTCH-2518
>             Project: Nutch
>          Issue Type: Bug
>          Components: crawldb, fetcher, generator, hostdb, linkdb
>    Affects Versions: 1.15
>            Reporter: Sebastian Nagel
>            Assignee: Kenneth McFarland
>            Priority: Blocker
>             Fix For: 1.15
>
>
> The return value of job.waitForCompletion() of the new MapReduce API 
> (NUTCH-2375) must always be checked. If it is not true, the job has failed 
> or has been killed. Accordingly, the program
> - should not proceed with further jobs/steps,
> - must clean up temporary data, unlock the CrawlDb, etc., and
> - must exit with a non-zero exit value, so that scripts running the crawl 
> workflow can handle the failure.
> Cf. NUTCH-2076, NUTCH-2442, [NUTCH-2375 PR 
> #221|https://github.com/apache/nutch/pull/221#issuecomment-332941883].
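
As a rough sketch of the pattern described above (illustrative only; the 
wrapper method, class, and variable names are placeholders, not the actual 
Nutch code): check the boolean returned by waitForCompletion(), clean up the 
temporary output and the lock file on failure, and fail loudly so the caller 
can exit non-zero.

  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
  import org.apache.nutch.util.LockUtil;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  /** Illustrative only: guarding job.waitForCompletion() in a Nutch tool. */
  public class WaitForCompletionExample {

    private static final Logger LOG =
        LoggerFactory.getLogger(WaitForCompletionExample.class);

    /** Runs the job and fails (after cleanup) if it did not succeed. */
    public static void runChecked(Job job, Path lock, Configuration conf)
        throws IOException, InterruptedException, ClassNotFoundException {
      Path outPath = FileOutputFormat.getOutputPath(job);
      FileSystem fs = outPath.getFileSystem(conf);
      boolean success = job.waitForCompletion(true);
      if (!success) {
        String message = "Job did not succeed, job status: "
            + job.getStatus().getState() + ", reason: "
            + job.getStatus().getFailureInfo();
        LOG.error(message);
        // Clean up temporary output and release the lock before bailing out.
        if (fs.exists(outPath)) {
          fs.delete(outPath, true);
        }
        LockUtil.removeLockFile(fs, lock);
        // The exception propagates to ToolRunner/main, which can translate it
        // into a non-zero exit code for the crawl script.
        throw new RuntimeException(message);
      }
    }
  }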



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
