Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Nutch Wiki" for change 
notification.

The "NutchTutorial" page has been changed by RichardLloyd:
http://wiki.apache.org/nutch/NutchTutorial?action=diff&rev1=42&rev2=43

  ## page was renamed from Running Nutch 1.3 with Solr Integration
  ## page was renamed from RunningNutchAndSolr
  ## Lang: En
- 
  == Introduction ==
- 
  Apache Nutch is an open source web crawler written in Java. By using it, we 
can find web page hyperlinks in an automated manner, reduce a lot of 
maintenance work (for example, checking for broken links), and create a copy 
of all the visited pages for searching over. That's where Apache Solr comes 
in: Solr is an open source full-text search framework, and with Solr we can 
search the pages visited by Nutch. Luckily, integration between Nutch and 
Solr is pretty straightforward, as explained below.
  
  Apache Nutch release 1.3 has Solr integration embedded, greatly simplifying 
Nutch-Solr integration. It also removes the legacy dependence upon both Apache 
Tomcat for running the old Nutch Web Application and upon Apache Lucene for 
indexing. Just download a 1.3 binary release from 
[[http://www.apache.org/dyn/closer.cgi/nutch/|here]].
  
  == Table of Contents ==
  <<TableOfContents(3)>>
-  
+ 
  == Steps ==
- 
  == 1 Setup Nutch from binary distribution ==
- 
   * Unzip your binary Nutch package to $HOME/nutch-1.3
-  * cd $HOME/nutch-1.3/runtime/local 
+  * cd $HOME/nutch-1.3/runtime/local
  
  From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the 
current directory.
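
  For convenience, you can also capture this path in a real shell variable so 
that later commands can be pasted as-is (this export is just our own 
shorthand; the Nutch scripts do not require it):

  {{{
  export NUTCH_RUNTIME_HOME=$HOME/nutch-1.3/runtime/local
  cd ${NUTCH_RUNTIME_HOME}
  }}}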
  
  == 2. Verify your Nutch installation ==
-  
   * run "bin/nutch" - You can confirm a correct installation if you seeing the 
following:
+ 
  {{{
  Usage: nutch [-core] COMMAND
  }}}
- 
  Some troubleshooting tips:
+ 
   * Run the following command if you are seeing "Permission denied":
+ 
  {{{
  chmod +x bin/nutch
  }}}
   * Set up JAVA_HOME if you are seeing "JAVA_HOME not set". On a Mac, you can 
run the following command or add it to ~/.bashrc (a Linux example follows 
below):
+ 
  {{{
  export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
  }}}
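
  On Linux the correct value depends on which JDK package is installed; the 
path below is only a typical example for an OpenJDK 6 installation and may 
need adjusting on your system:

  {{{
  export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
  }}}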
- 
  == 3. Crawl your first website ==
- 
-  *  Add your agent name in the value field of the http.agent.name property in 
conf/nutch-site.xml, for example:
+  * Add your agent name in the value field of the http.agent.name property in 
conf/nutch-site.xml, for example:
+ 
  {{{
  <property>
   <name>http.agent.name</name>
@@ -51, +48 @@

  }}}
   * mkdir -p urls
   * create a text file named nutch under the urls/ directory with the 
following content (one URL per line for each site you want Nutch to crawl).
+ 
  {{{
  http://nutch.apache.org/
  }}}
- * Edit the file conf/regex-urlfilter.txt and replace 
+  * Edit the file conf/regex-urlfilter.txt and replace
+ 
  {{{
  # accept anything else
- +.  
+ +.
  }}}
- 
  with a regular expression matching the domain you wish to crawl. For example, 
if you wished to limit the crawl to the nutch.apache.org domain, the line 
should read:
  
  {{{
-  +^http://([a-z0-9]*\.)*nutch.apache.org/ 
+  +^http://([a-z0-9]*\.)*nutch.apache.org/
- }}} 
+ }}}
- 
  This will include any url in the domain nutch.apache.org.
  
  === 3.1 Using the Crawl Command ===
- 
  Now we are ready to initiate a crawl. Use the following parameters:
  
   * '''-dir''' ''dir'' names the directory to put the crawl in.
@@ -77, +73 @@

   * '''-depth''' ''depth'' indicates the link depth from the root page that 
should be crawled.
   * '''-topN''' ''N'' determines the maximum number of pages that will be 
retrieved at each level up to the depth.
   * Run the following command:
+ 
  {{{
  bin/nutch crawl urls -dir crawl -depth 3 -topN 5
  }}}
   * Now you should be able to see the following directories created:
+ 
  {{{
- crawl/crawldb 
+ crawl/crawldb
  crawl/linkdb
  crawl/segments
  }}}
- 
  '''NOTE''': If you have a Solr core already set up and wish to index to it, 
you are required to add the -solr <solrUrl> parameter to your crawl command, 
e.g.:
+ 
  {{{
  bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 5
  }}}
- If not then please skip to [[#4. Setup Solr for search|here]] for how to set 
up your Solr instance and index your crawl data.
+ If not then please skip to [[#A4._Setup_Solr_for_search|here]] for how to set 
up your Solr instance and index your crawl data.
  
  Typically one starts testing one's configuration by crawling at shallow 
depths, sharply limiting the number of pages fetched at each level (-topN), 
and watching the output to check that desired pages are fetched and 
undesirable pages are not. Once one is confident of the configuration, an 
appropriate depth for a full crawl is around 10. The number of pages per 
level (-topN) for a full crawl can be from tens of thousands to millions, 
depending on your resources.
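
  As a concrete illustration (the directory name and the numbers here are 
arbitrary choices for a test run, not part of the instructions above), such a 
shallow test crawl might look like:

  {{{
  bin/nutch crawl urls -dir crawl_test -depth 1 -topN 10
  }}}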
  
  === 3.2 Using Individual Commands for Whole-web Crawling ===
- 
  Whole-web crawling is designed to handle very large crawls which may take 
weeks to complete, running on multiple machines.  This also permits more 
control over the crawl process, and incremental crawling.  It is important to 
note that whole-web crawling does not necessarily mean crawling the entire 
world wide web.  We can limit a whole-web crawl to just a list of the URLs we 
want to crawl.  This is done by using a filter just like the one we used when 
we did the crawl command (above).
  
  ==== Step-by-Step: Concepts ====
  Nutch data is composed of:
  
   1. The crawl database, or crawldb. This contains information about every url 
known to Nutch, including whether it was fetched, and, if so, when.
-  2. The link database, or linkdb. This contains the list of known links to 
each url, including both the source url and anchor text of the link.
+  1. The link database, or linkdb. This contains the list of known links to 
each url, including both the source url and anchor text of the link.
-  3. A set of segments. Each segment is a set of urls that are fetched as a 
unit. Segments are directories with the following subdirectories:
+  1. A set of segments. Each segment is a set of urls that are fetched as a 
unit. Segments are directories with the following subdirectories:
    * a ''crawl_generate'' names a set of urls to be fetched
    * a ''crawl_fetch'' contains the status of fetching each url
    * a ''content'' contains the raw content retrieved from each url
@@ -128, +125 @@

  }}}
  The parser also takes a few minutes, as it must parse the full file. Finally, 
we initialize the crawl db with the selected urls.
  
- {{{ 
+ {{{
- bin/nutch inject crawldb dmoz 
+ bin/nutch inject crawldb dmoz
  }}}
- 
  Now we have a web database with around 1000 as-yet unfetched URLs in it.
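
  To confirm what has been injected, you can print crawldb statistics with 
Nutch's readdb command (an optional check; the exact counts will vary):

  {{{
  bin/nutch readdb crawldb -stats
  }}}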
  
  ===== Option 2.  Bootstrapping from an initial seed list. =====
- This option shadows the creation of the seed list as covered [[#3. Crawl your 
first website|here]].
+ This option shadows the creation of the seed list as covered 
[[#A3._Crawl_your_first_website|here]].
  
- {{{ 
+ {{{
- bin/nutch inject crawldb urls 
+ bin/nutch inject crawldb urls
  }}}
- 
  ==== Step-by-Step: Fetching ====
  To fetch, we first generate a fetch list from the database:
  
- {{{ 
+ {{{
- bin/nutch generate crawldb segments 
+ bin/nutch generate crawldb segments
  }}}
- 
  This generates a fetch list for all of the pages due to be fetched. The fetch 
list is placed in a newly created segment directory. The segment directory is 
named by the time it's created. We save the name of this segment in the shell 
variable {{{s1}}}:
  
  {{{
@@ -156, +150 @@

  }}}
  Now we run the fetcher on this segment with:
  
- {{{ 
+ {{{
- bin/nutch fetch $s1 
+ bin/nutch fetch $s1
  }}}
- 
  When this is complete, we update the database with the results of the fetch:
  
- {{{ 
+ {{{
- bin/nutch updatedb crawldb $s1 
+ bin/nutch updatedb crawldb $s1
  }}}
- 
  Now the database contains updated entries for all the initial pages, as well 
as new entries that correspond to newly discovered pages linked from the 
initial set.
  
  Then we parse the entries:
  
- {{{ 
+ {{{
- bin/nutch parse $1 
+ bin/nutch parse $s1
  }}}
- 
  Now we generate and fetch a new segment containing the top-scoring 1000 pages:
  
  {{{
@@ -201, +192 @@

  ==== Step-by-Step: Invertlinks ====
  Before indexing we first invert all of the links, so that we may index 
incoming anchor text with the pages.
  
- {{{ 
+ {{{
- bin/nutch invertlinks linkdb -dir segments 
+ bin/nutch invertlinks linkdb -dir segments
  }}}
- 
- We are now ready to search with Apache Solr. 
+ We are now ready to search with Apache Solr.
  
  == 4. Setup Solr for search ==
- 
   * download the binary Solr release from 
[[http://www.apache.org/dyn/closer.cgi/lucene/solr/|here]]
   * unzip to $HOME/apache-solr-3.X; we will now refer to this as 
${APACHE_SOLR_HOME}
   * cd ${APACHE_SOLR_HOME}/example
   * java -jar start.jar (see the combined shell sketch below)
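
  Taken together, and assuming the archive you downloaded is named 
apache-solr-3.X.zip (substitute the real version number and file name; the 
export is just our own shorthand), the steps above look roughly like this:

  {{{
  cd $HOME
  unzip apache-solr-3.X.zip
  export APACHE_SOLR_HOME=$HOME/apache-solr-3.X
  cd ${APACHE_SOLR_HOME}/example
  java -jar start.jar
  }}}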
  
  == 5. Verify Solr installation ==
- 
  After you have started Solr, you should be able to access the following 
links:
+ 
  {{{
  http://localhost:8983/solr/admin/
  http://localhost:8983/solr/admin/stats.jsp
  }}}
- 
  == 6. Integrate Solr with Nutch ==
- 
  We now have both Nutch and Solr installed and set up correctly, and Nutch 
has already created crawl data from the seed URL(s). Below are the steps to 
delegate searching to Solr so that the crawled links become searchable:
+ 
-  * cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml 
${APACHE_SOLR_HOME}/example/solr/conf/ 
+  * cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml 
${APACHE_SOLR_HOME}/example/solr/conf/
-  * restart Solr with the command “java -jar start.jar” under 
${APACHE_SOLR_HOME}/example 
+  * restart Solr with the command “java -jar start.jar” under 
${APACHE_SOLR_HOME}/example
   * run the Solr Index command:
+ 
  {{{
  bin/nutch solrindex http://127.0.0.1:8983/solr/ crawl/crawldb crawl/linkdb 
crawl/segments/*
  }}}
  This will send all crawl data to Solr for indexing. For more information, 
please see {{{bin/nutch solrindex}}}.
-  
+ 
  If all has gone to plan, we are now ready to search with 
http://localhost:8983/solr/admin/.  If you want to see the raw HTML indexed by 
Solr, change the content field definition in schema.xml to:
+ 
  {{{
  <field name="content" type="text" stored="true" indexed="true"/>
  }}}
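
  As a quick sanity check, you can also query Solr's standard select handler 
directly; the query term below is only an example, so use a word you know 
occurs in the pages you crawled:

  {{{
  http://localhost:8983/solr/select?q=nutch
  }}}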
