[jira] [Commented] (NUTCH-1645) Junit Test Case for Adaptive Fetch Schedule class

2013-10-06 Thread Yasin Kılınç (JIRA)

[ 
https://issues.apache.org/jira/browse/NUTCH-1645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13787571#comment-13787571
 ] 

Yasin Kılınç commented on NUTCH-1645:
-------------------------------------

If you have any ideas, will you share them with us? How should I write the unit test?

> Junit Test Case for Adaptive Fetch Schedule class
> -------------------------------------------------
>
> Key: NUTCH-1645
> URL: https://issues.apache.org/jira/browse/NUTCH-1645
> Project: Nutch
>  Issue Type: Test
>Affects Versions: 2.2.1
>Reporter: Talat UYARER
>Priority: Minor
> Fix For: 2.3
>
> Attachments: NUTCH-1645.patch
>
>
> Currently there is no test case for the Adaptive Fetch Schedule class. This
> issue adds a JUnit test for it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (NUTCH-1645) Junit Test Case for Adaptive Fetch Schedule class

2013-10-06 Thread lufeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/NUTCH-1645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufeng updated NUTCH-1645:
--------------------------

Attachment: NUTCH-1645-v2.patch

Added two test cases: one uses the default parameters and the other runs 
without sync delta enabled.

Thanks Yasin. You can add another test case with some parameter changes.
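
For reference, a minimal sketch of what such a test could look like, assuming 
the 1.x-style CrawlDatum API and JUnit 4 (the 2.x code under test uses Gora's 
WebPage instead, and the property names below are assumptions):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.nutch.crawl.AdaptiveFetchSchedule;
import org.apache.nutch.crawl.CrawlDatum;
import org.apache.nutch.crawl.FetchSchedule;
import org.apache.nutch.util.NutchConfiguration;
import org.junit.Assert;
import org.junit.Test;

public class TestAdaptiveFetchSchedule {

  @Test
  public void unmodifiedPageGrowsInterval() {
    Configuration conf = NutchConfiguration.create();
    // run without sync delta (property name is an assumption)
    conf.setBoolean("db.fetch.schedule.adaptive.sync_delta", false);

    AdaptiveFetchSchedule schedule = new AdaptiveFetchSchedule();
    schedule.setConf(conf);

    int defaultInterval = conf.getInt("db.fetch.interval.default", 2592000);
    CrawlDatum datum = new CrawlDatum();
    datum.setFetchInterval(defaultInterval);

    long now = System.currentTimeMillis();
    // an unmodified page: the adaptive schedule should back off the interval
    schedule.setFetchSchedule(new Text("http://www.example.org/"), datum,
        now, 0L, now, 0L, FetchSchedule.STATUS_NOTMODIFIED);

    Assert.assertTrue("interval should grow when the page was not modified",
        datum.getFetchInterval() > defaultInterval);
  }
}
{code}

A second case along the same lines would enable sync delta and assert the 
adjusted fetch time, matching the two cases described above.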

> Junit Test Case for Adaptive Fetch Schedule class
> -------------------------------------------------
>
> Key: NUTCH-1645
> URL: https://issues.apache.org/jira/browse/NUTCH-1645
> Project: Nutch
>  Issue Type: Test
>Affects Versions: 2.2.1
>Reporter: Talat UYARER
>Priority: Minor
> Fix For: 2.3
>
> Attachments: NUTCH-1645.patch, NUTCH-1645-v2.patch
>
>
> Currently there is no test case for the Adaptive Fetch Schedule class. This
> issue adds a JUnit test for it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (NUTCH-1650) Adaptive Fetch Scheduler interval Wrong Set

2013-10-06 Thread lufeng (JIRA)

[ 
https://issues.apache.org/jira/browse/NUTCH-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13787664#comment-13787664
 ] 

lufeng commented on NUTCH-1650:
-------------------------------

Yes, this code in Nutch 1.x is correct. +1

> Adaptive Fetch Scheduler interval Wrong Set
> -------------------------------------------
>
> Key: NUTCH-1650
> URL: https://issues.apache.org/jira/browse/NUTCH-1650
> Project: Nutch
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Talat UYARER
>Priority: Minor
>  Labels: scheduler
> Fix For: 2.3
>
> Attachments: NUTCH-1650.patch
>
>
> After the interval is calculated, it is set without being checked against the
> max and min values. Moreover, if sync_delta is true, the interval is set
> before the sync-delta changes are applied. This patch fixes both issues.
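
For illustration, a hedged sketch of the clamping the patch describes (the 
bounds and names are assumptions following the adaptive-schedule properties, 
not taken from the patch itself):

{code}
// Sketch only: clamp the adapted interval into [min, max] before storing it,
// after any sync-delta adjustment has been applied (per my reading of the report).
public class ClampSketch {
  static float clamp(float interval, float minInterval, float maxInterval) {
    return Math.max(minInterval, Math.min(interval, maxInterval));
  }

  public static void main(String[] args) {
    float max = 90f * 86400;  // e.g. db.fetch.interval.max, 90 days in seconds
    float min = 60f;          // e.g. db.fetch.schedule.adaptive.min_interval
    // an adaptive step that overshoots the maximum is pulled back to it
    System.out.println(clamp(120f * 86400, min, max) / 86400); // prints 90.0
  }
}
{code}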



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (NUTCH-1588) Port NUTCH-1245 URL gone with 404 after db.fetch.interval.max stays db_unfetched in CrawlDb and is generated over and over again to 2.x

2013-10-06 Thread Sebastian Nagel (JIRA)

[ 
https://issues.apache.org/jira/browse/NUTCH-1588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13787763#comment-13787763
 ] 

Sebastian Nagel commented on NUTCH-1588:
----------------------------------------

Ok, the patch is a port of the fix for 1.x (NUTCH-1245) and should work. Code 
style guidelines are not followed: [~talat], can you format the code 
accordingly (using 
[eclipse-codeformat.xml|http://svn.apache.org/viewvc/nutch/branches/2.x/eclipse-codeformat.xml],
 see 
[[1|http://wiki.apache.org/nutch/Becoming_A_Nutch_Developer#Step_Three:_Using_the_JIRA_and_Developing]]).
 Thanks!

> Port NUTCH-1245 URL gone with 404 after db.fetch.interval.max stays 
> db_unfetched in CrawlDb and is generated over and over again to 2.x
> ----------------------------------------------------------------------
>
> Key: NUTCH-1588
> URL: https://issues.apache.org/jira/browse/NUTCH-1588
> Project: Nutch
>  Issue Type: Bug
>Reporter: Lewis John McGibbney
>Priority: Critical
> Fix For: 2.3
>
> Attachments: NUTCH-1588.patch
>
>
> A document gone with 404 after db.fetch.interval.max (90 days) has passed
> is fetched over and over again but although fetch status is fetch_gone
> its status in CrawlDb keeps db_unfetched. Consequently, this document will
> be generated and fetched from now on in every cycle.
> To reproduce:
> # create a CrawlDatum in CrawlDb which retry interval hits 
> db.fetch.interval.max (I manipulated the shouldFetch() in 
> AbstractFetchSchedule to achieve this)
> # now this URL is fetched again
> # but when updating CrawlDb with the fetch_gone the CrawlDatum is reset to 
> db_unfetched, the retry interval is fixed to 0.9 * db.fetch.interval.max (81 
> days)
> # this does not change with every generate-fetch-update cycle, here for two 
> segments:
> {noformat}
> /tmp/testcrawl/segments/20120105161430
> SegmentReader: get 'http://localhost/page_gone'
> Crawl Generate::
> Status: 1 (db_unfetched)
> Fetch time: Thu Jan 05 16:14:21 CET 2012
> Modified time: Thu Jan 01 01:00:00 CET 1970
> Retries since fetch: 0
> Retry interval: 6998400 seconds (81 days)
> Metadata: _ngt_: 1325776461784_pst_: notfound(14), lastModified=0: 
> http://localhost/page_gone
> Crawl Fetch::
> Status: 37 (fetch_gone)
> Fetch time: Thu Jan 05 16:14:48 CET 2012
> Modified time: Thu Jan 01 01:00:00 CET 1970
> Retries since fetch: 0
> Retry interval: 6998400 seconds (81 days)
> Metadata: _ngt_: 1325776461784_pst_: notfound(14), lastModified=0: 
> http://localhost/page_gone
> /tmp/testcrawl/segments/20120105161631
> SegmentReader: get 'http://localhost/page_gone'
> Crawl Generate::
> Status: 1 (db_unfetched)
> Fetch time: Thu Jan 05 16:16:23 CET 2012
> Modified time: Thu Jan 01 01:00:00 CET 1970
> Retries since fetch: 0
> Retry interval: 6998400 seconds (81 days)
> Metadata: _ngt_: 1325776583451_pst_: notfound(14), lastModified=0: 
> http://localhost/page_gone
> Crawl Fetch::
> Status: 37 (fetch_gone)
> Fetch time: Thu Jan 05 16:20:05 CET 2012
> Modified time: Thu Jan 01 01:00:00 CET 1970
> Retries since fetch: 0
> Retry interval: 6998400 seconds (81 days)
> Metadata: _ngt_: 1325776583451_pst_: notfound(14), lastModified=0: 
> http://localhost/page_gone
> {noformat}
> As far as I can see it's caused by setPageGoneSchedule() in 
> AbstractFetchSchedule. Some pseudo-code:
> {code}
> setPageGoneSchedule (called from update / CrawlDbReducer.reduce):
>   datum.fetchInterval = 1.5 * datum.fetchInterval // now 1.5 * 0.9 * maxInterval
>   datum.fetchTime = fetchTime + datum.fetchInterval // see NUTCH-516
>   if (maxInterval < datum.fetchInterval) // necessarily true
>     forceRefetch()
>
> forceRefetch:
>   if (datum.fetchInterval > maxInterval) // true because it's 1.35 * maxInterval
>     datum.fetchInterval = 0.9 * maxInterval
>   datum.status = db_unfetched
>
> shouldFetch (called from generate / Generator.map):
>   if ((datum.fetchTime - curTime) > maxInterval)
>     // always true if the crawler is launched in short intervals
>     // (lower than 0.35 * maxInterval)
>     datum.fetchTime = curTime // forces a refetch
> {code}
> After setPageGoneSchedule is called via update the state is db_unfetched and 
> the retry interval 0.9 * db.fetch.interval.max (81 days). 
> Although the fetch time in the CrawlDb is far in the future
> {noformat}
> % nutch readdb testcrawl/crawldb -url http://localhost/page_gone
> URL: http://localhost/page_gone
> Version: 7
> Status: 1 (db_unfetched)
> Fetch time: Sun May 06 05:20:05 CEST 2012
> Modified time: Thu Jan 01 01:00:00 CET 1970
> Retries since fetch: 0
> Retry interval: 6998400 seconds (81 days)
> Score: 1.0
> Signature: null
> Metadata: _pst_: notfound(14), lastModified=0: http://localhost/page_gone
> {noformat}
> the URL is generated again because (fetch time - curTime) > maxInterval (see
> shouldFetch above).
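
For context, a sketch of the kind of fix NUTCH-1245 applies as I read the 
pseudo-code above: cap the backed-off interval in setPageGoneSchedule itself, 
so that forceRefetch (and with it the reset to db_unfetched) is never triggered 
from the gone-page path. Names and the exact cap are assumptions, not the 
committed patch:

{code}
// Sketch, not the committed patch: back off the interval for a gone page,
// but never beyond maxInterval, so forceRefetch() is never reached.
public class PageGoneSketch {
  static float backOff(float fetchInterval, float maxInterval) {
    float next = 1.5f * fetchInterval;   // the existing back-off step
    return Math.min(next, maxInterval);  // cap instead of forcing a refetch
  }

  public static void main(String[] args) {
    float max = 90f * 86400;                        // db.fetch.interval.max
    float interval = 0.9f * max;                    // the 81-day state reported
    interval = backOff(interval, max);              // capped at 90 days
    System.out.println(interval / 86400 + " days"); // prints 90.0 days
  }
}
{code}

With the interval capped this way, the status can stay gone and, per the 
shouldFetch pseudo-code, the URL is no longer regenerated in every cycle.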