On Mar 19, 2009, at 12:03 PM, Sami Siren wrote:

Andrzej Bialecki wrote:
How about the following: we build just 2 packages:
* binary: this includes only base Hadoop libs in lib/ (enough to start a local job, no optional filesystems etc.), the *.job and *.war files, and scripts. Scripts would check for the presence of the plugins/ dir and offer an option to create it from *.job. The assumption here is that this should be enough to run a full cycle in local mode, and that people who want to run a distributed cluster will first install a plain Hadoop release and then just put the *.job and bin/nutch on the master.
* source: no build artifacts, no .svn (equivalent to svn export), simple tgz.
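
(A minimal sketch of that plugins/ check in bin/nutch, assuming the plugins are packed inside the job archive under plugins/ -- the exact path inside the .job and the NUTCH_HOME layout here are illustrative, not settled:)

    # Offer to unpack plugins/ from the job file (a .job is a plain jar/zip)
    NUTCH_HOME="$(cd "$(dirname "$0")/.." && pwd)"
    NUTCH_JOB="$(ls "$NUTCH_HOME"/*.job 2>/dev/null | head -1)"
    if [ ! -d "$NUTCH_HOME/plugins" ]; then
      printf "No plugins/ dir found. Extract it from %s? [y/N] " "$NUTCH_JOB"
      read answer
      if [ "$answer" = "y" ]; then
        (cd "$NUTCH_HOME" && unzip -q "$NUTCH_JOB" 'plugins/*')
      fi
    fi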


This sounds good to me. Additionally, some new documentation will need to be written too.

Distributed is a little more complicated than just dropping *.job and bin/nutch on a Hadoop install. Will this even work unless one edits config/<stuff> and builds a new .job? Anyone using distributed Nutch probably wouldn't be interested in something trivial, so a step-by-step config how-to would probably be a good idea.
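
Such a how-to might boil down to something like this (hedged: 'ant job' and conf/nutch-site.xml are the usual names in a source checkout, but the exact targets, paths, and master-side layout below are assumptions):

    # Rebuild the job file with site-specific config, then deploy it.
    cd nutch/                        # source checkout
    vi conf/nutch-site.xml           # site-specific settings baked into the job
    ant job                          # rebuilds build/nutch-*.job with the new conf
    scp build/nutch-*.job master:/opt/nutch/
    scp bin/nutch master:/opt/nutch/bin/
    # then, on the master, run jobs against the cluster, e.g.:
    #   bin/nutch inject crawl/crawldb urls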

Eric

--
Eric J. Christeson <eric.christe...@ndsu.edu>
Enterprise Computing and Infrastructure    (701) 231-8693 (Voice)
North Dakota State University
