[jira] [Commented] (NUTCH-2442) Injector to stop if job fails to avoid loss of CrawlDb

2017-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NUTCH-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239222#comment-16239222
 ] 

ASF GitHub Bot commented on NUTCH-2442:
---

sebastian-nagel commented on a change in pull request #239: NUTCH-2442 Injector 
to stop if job fails to avoid loss of CrawlDb
URL: https://github.com/apache/nutch/pull/239#discussion_r148940951
 
 

 ##
 File path: src/java/org/apache/nutch/util/ProtocolStatusStatistics.java
 ##
 @@ -122,8 +122,17 @@ public int run(String[] args) throws Exception {
 job.setNumReduceTasks(numOfReducers);
 
 try {
-  job.waitForCompletion(true);
-} catch (Exception e) {
+  boolean success = job.waitForCompletion(true);
+  if(!success){
 
 Review comment:
   Please use consistent and uniform formatting.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Injector to stop if job fails to avoid loss of CrawlDb
> --
>
> Key: NUTCH-2442
> URL: https://issues.apache.org/jira/browse/NUTCH-2442
> Project: Nutch
>  Issue Type: Bug
>  Components: injector
>Affects Versions: 1.13
>Reporter: Sebastian Nagel
>Priority: Critical
> Fix For: 1.14
>
>
> Injector does not check whether the MapReduce job is successful. Even if the 
> job fails, it
> - installs the CrawlDb
> -- moves current/ to old/
> -- replaces current/ with an empty or potentially incomplete version
> - exits with code 0, so that scripts running the crawl workflow cannot detect 
> the failure -- if Injector is run a second time, the CrawlDb is lost (both 
> current/ and old/ are empty or corrupted)
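The fail-fast behavior the issue asks for can be sketched as follows. This is an illustrative stand-in, not the actual Injector code: `runJob`, `tempDir`, and the class name are hypothetical, and `runJob` merely simulates `job.waitForCompletion(true)`.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Illustrative fail-fast sketch (hypothetical names, not real Nutch code). */
public class FailFastSketch {

    /** Stand-in for job.waitForCompletion(true). */
    static boolean runJob(boolean simulateSuccess) {
        return simulateSuccess;
    }

    /**
     * Install the new CrawlDb only when the job succeeded; otherwise clean up
     * the temporary output and throw, so callers exit with a non-zero code
     * instead of silently replacing current/ with incomplete data.
     */
    static void inject(Path tempDir, boolean simulateSuccess) {
        if (!runJob(simulateSuccess)) {
            try {
                Files.deleteIfExists(tempDir);  // drop incomplete temporary output
            } catch (IOException e) {
                // cleanup is best-effort; the job failure is the real error
            }
            throw new RuntimeException("Injector job did not succeed");
        }
        // ... on success: install CrawlDb (move current/ to old/, promote tempDir) ...
    }

    public static void main(String[] args) {
        boolean failed = false;
        try {
            inject(Path.of("tmp_crawldb"), false);  // simulate a failed job
        } catch (RuntimeException expected) {
            failed = true;
        }
        assert failed;                              // run with -ea
        inject(Path.of("tmp_crawldb"), true);       // succeeds quietly
    }
}
```

The key point is that the existing CrawlDb is never touched on the failure path, and the error surfaces to the calling script.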



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NUTCH-2442) Injector to stop if job fails to avoid loss of CrawlDb

2017-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NUTCH-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239121#comment-16239121
 ] 

ASF GitHub Bot commented on NUTCH-2442:
---

Omkar20895 opened a new pull request #239: NUTCH-2442 Injector to stop if job 
fails to avoid loss of CrawlDb
URL: https://github.com/apache/nutch/pull/239
 
 
   - Added Job status checks in the classes: Injector, ReadHostDb, 
CrawlCompletionStats, ProtocolStatusStatistics, SitemapProcessor and 
DomainStatistics. 









[jira] [Commented] (NUTCH-2242) lastModified not always set

2017-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NUTCH-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239113#comment-16239113
 ] 

ASF GitHub Bot commented on NUTCH-2242:
---

Omkar20895 closed pull request #238: NUTCH-2242 Injector to stop if job fails 
to avoid loss of CrawlDb
URL: https://github.com/apache/nutch/pull/238
 
 
   

This is a PR merged from a forked repository. As GitHub hides the original
diff on merge, it is displayed below for the sake of provenance:

diff --git a/src/java/org/apache/nutch/crawl/Injector.java b/src/java/org/apache/nutch/crawl/Injector.java
index 5f5fd15ff..0603eb4d1 100644
--- a/src/java/org/apache/nutch/crawl/Injector.java
+++ b/src/java/org/apache/nutch/crawl/Injector.java
@@ -414,7 +414,16 @@ public void inject(Path crawlDb, Path urlDir, boolean overwrite,
 
 try {
   // run the job
-  job.waitForCompletion(true);
+  boolean success = job.waitForCompletion(true);
+  if (!success) {
+String message = "Injector job did not succeed, job status: "
++ job.getStatus().getState() + ", reason: "
++ job.getStatus().getFailureInfo();
+LOG.error(message);
+cleanupAfterFailure(tempCrawlDb, lock, fs);
+// throw exception so that calling routine can exit with error
+throw new RuntimeException(message);
+  }
 
   // save output and perform cleanup
   CrawlDb.install(job, crawlDb);
@@ -452,11 +461,21 @@ public void inject(Path crawlDb, Path urlDir, boolean overwrite,
 LOG.info("Injector: finished at " + sdf.format(end) + ", elapsed: "
 + TimingUtil.elapsedTime(start, end));
   }
-} catch (IOException e) {
+} catch (IOException | InterruptedException | ClassNotFoundException e) {
+  LOG.error("Injector job failed", e);
+  cleanupAfterFailure(tempCrawlDb, lock, fs);
+  throw e;
+}
+  }
+
+  public void cleanupAfterFailure(Path tempCrawlDb, Path lock, FileSystem fs)
+ throws IOException {
+try{
   if (fs.exists(tempCrawlDb)) {
-fs.delete(tempCrawlDb, true);
+  fs.delete(tempCrawlDb, true);
   }
-  LockUtil.removeLockFile(conf, lock);
+  LockUtil.removeLockFile(fs, lock);
+} catch(IOException e) {
   throw e;
 }
   }
diff --git a/src/java/org/apache/nutch/hostdb/ReadHostDb.java b/src/java/org/apache/nutch/hostdb/ReadHostDb.java
index 28a7eb709..257dd6a75 100644
--- a/src/java/org/apache/nutch/hostdb/ReadHostDb.java
+++ b/src/java/org/apache/nutch/hostdb/ReadHostDb.java
@@ -202,8 +202,17 @@ private void readHostDb(Path hostDb, Path output, boolean dumpHomepages, boolean
 job.setNumReduceTasks(0);
 
 try {
-  job.waitForCompletion(true);
-} catch (Exception e) {
+  boolean success = job.waitForCompletion(true);
+  if(!success){
+String message = "ReadHostDb job did not succeed, job status: "
++ job.getStatus().getState() + ", reason: "
++ job.getStatus().getFailureInfo();
+LOG.error(message);
+// throw exception so that calling routine can exit with error
+throw new RuntimeException(message);
+  }
+} catch (IOException | InterruptedException | ClassNotFoundException e) {
+  LOG.error("ReadHostDb job failed", e);
   throw e;
 }
 
diff --git a/src/java/org/apache/nutch/util/CrawlCompletionStats.java b/src/java/org/apache/nutch/util/CrawlCompletionStats.java
index 4920fbf32..4b8e1871f 100644
--- a/src/java/org/apache/nutch/util/CrawlCompletionStats.java
+++ b/src/java/org/apache/nutch/util/CrawlCompletionStats.java
@@ -171,8 +171,17 @@ public int run(String[] args) throws Exception {
 job.setNumReduceTasks(numOfReducers);
 
 try {
-  job.waitForCompletion(true);
-} catch (Exception e) {
+  boolean success = job.waitForCompletion(true);
+  if(!success){
+String message = jobName + " job did not succeed, job status: "
++ job.getStatus().getState() + ", reason: "
++ job.getStatus().getFailureInfo();
+LOG.error(message);
+// throw exception so that calling routine can exit with error
+throw new RuntimeException(message);
+  }
+} catch (IOException | InterruptedException | ClassNotFoundException e) {
+  LOG.error(jobName + " job failed");
   throw e;
 }
 
diff --git a/src/java/org/apache/nutch/util/ProtocolStatusStatistics.java b/src/java/org/apache/nutch/util/ProtocolStatusStatistics.java
index a18860634..84d892251 100644
--- a/src/java/org/apache/nutch/util/ProtocolStatusStatistics.java
+++ b/src/java/org/apache/nutch/util/ProtocolStatusStatistics.java
@@ -122,8 +122,17 @@ public int run(String[] args) throws Exception {
 job.setNumReduceTasks(numOfReducers);
 
 try {
-  

[jira] [Commented] (NUTCH-2242) lastModified not always set

2017-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NUTCH-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239112#comment-16239112
 ] 

ASF GitHub Bot commented on NUTCH-2242:
---

Omkar20895 commented on issue #238: NUTCH-2242 Injector to stop if job fails to 
avoid loss of CrawlDb
URL: https://github.com/apache/nutch/pull/238#issuecomment-341913968
 
 
   Closing the PR as there was a typo in the commit message and it was assigned 
to NUTCH-2242 rather than NUTCH-2442. Apologies. 




> lastModified not always set
> ---
>
> Key: NUTCH-2242
> URL: https://issues.apache.org/jira/browse/NUTCH-2242
> Project: Nutch
>  Issue Type: Bug
>  Components: crawldb
>Affects Versions: 1.11
>Reporter: Jurian Broertjes
>Priority: Minor
> Fix For: 1.13
>
> Attachments: NUTCH-2242.patch
>
>
> I observed two issues:
> - When using the DefaultFetchSchedule, CrawlDatum's modifiedTime field is not 
> updated on the first successful fetch. 
> - When a document modification is detected (protocol- or signature-wise), the 
> modifiedTime isn't updated
> I can provide a patch later today.
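A minimal sketch of the behavior the report asks for, using a plain holder class rather than the real CrawlDatum (the class, field, and method names here are illustrative assumptions, not Nutch's API):

```java
/** Illustrative stand-in for CrawlDatum's time fields (not the real class). */
public class ModifiedTimeSketch {
    long fetchTime;
    long modifiedTime;  // 0 means "never set"

    /**
     * On a successful fetch, set modifiedTime on the first fetch and
     * whenever a modification was detected (protocol- or signature-wise);
     * otherwise leave it untouched.
     */
    void onFetchSuccess(long now, boolean modified) {
        fetchTime = now;
        if (modifiedTime == 0 || modified) {
            modifiedTime = now;
        }
    }
}
```

The two reported gaps correspond to the two halves of the condition: the first-fetch case (`modifiedTime == 0`) and the detected-modification case (`modified`).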





[jira] [Commented] (NUTCH-2242) lastModified not always set

2017-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NUTCH-2242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239106#comment-16239106
 ] 

ASF GitHub Bot commented on NUTCH-2242:
---

Omkar20895 opened a new pull request #238: NUTCH-2242 Injector to stop if job 
fails to avoid loss of CrawlDb
URL: https://github.com/apache/nutch/pull/238
 
 
   - Added Job status checks in the classes: Injector, ReadHostDb, 
CrawlCompletionStats, ProtocolStatusStatistics, SitemapProcessor and 
DomainStatistics. 









[jira] [Resolved] (NUTCH-2383) Wrong FS exception in Fetcher

2017-11-04 Thread Sebastian Nagel (JIRA)

 [ 
https://issues.apache.org/jira/browse/NUTCH-2383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Nagel resolved NUTCH-2383.

Resolution: Not A Problem

Thanks [~yossi] for reporting this problem. Closing this as it can hardly be 
solved inside Nutch: the default value "local" of 
{{mapreduce.framework.name}} does not allow access to hdfs:// paths. It's 
defined in 
[mapred-default.xml|https://hadoop.apache.org/docs/r2.7.2/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml]
 and should be set appropriately in mapred-site.xml, which is not controlled by 
Nutch. It needs to be configured when setting up the Hadoop cluster. Please 
reopen if you see any option to fix this inside Nutch. Thanks!
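For reference, a cluster-side mapred-site.xml would typically override the default explicitly, along these lines (a minimal sketch; the appropriate value depends on how the cluster is set up):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

With the framework set to yarn, jobs run against the cluster's configured filesystem instead of the local one, avoiding the file:/// vs. hdfs:// mismatch.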

> Wrong FS exception in Fetcher
> -
>
> Key: NUTCH-2383
> URL: https://issues.apache.org/jira/browse/NUTCH-2383
> Project: Nutch
>  Issue Type: Bug
>  Components: fetcher
>Affects Versions: 1.13
> Environment: Hadoop 2.8 and Hadoop 2.7.2
>Reporter: Yossi Tamari
>Priority: Major
> Attachments: crawl output.txt
>
>
> Running bin/crawl on either Hadoop 2.7.2 or Hadoop 2.8, the Injector and 
> Generator succeed, but the Fetcher throws: 
> {code}java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://localhost:9000/user/root/crawl/segments/20170430084337/crawl_fetch, 
> expected: file:///{code}.


