Greetings,

There's a way to distribute files along with your MR job as part of a
"payload" (the DistributedCache), or you could rsync the file to the
same spot on every machine in your cluster and hard-code that path when
loading it.

This may be of some help:
http://hadoop.apache.org/core/docs/r0.18.2/api/org/apache/hadoop/filecache/DistributedCache.html
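
Roughly, the DistributedCache route looks like the sketch below. This is
untested and written against the old 0.18 mapred API; ParseMapper and the
input/output types are made up for illustration, and the exact
LexicalizedParser call may differ depending on your parser version, so
treat it as a starting point rather than working code:

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    import edu.stanford.nlp.parser.lexparser.LexicalizedParser;

    public class ParseMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

      private LexicalizedParser parser;

      // Runs once per task before any map() calls: find the local copy
      // of the cached model file and build the parser from it, so you
      // pay the deserialization cost once instead of per record.
      public void configure(JobConf job) {
        try {
          Path[] cached = DistributedCache.getLocalCacheFiles(job);
          // cached[0] is the task-local copy of englishPCFG.ser.gz
          parser = new LexicalizedParser(cached[0].toString());
        } catch (IOException e) {
          throw new RuntimeException("can't load parser model", e);
        }
      }

      public void map(LongWritable key, Text value,
          OutputCollector<Text, Text> output, Reporter reporter)
          throws IOException {
        // One sentence per input line; apply() returning a Tree is how
        // older parser releases worked -- check your version's API.
        String tree = parser.apply(value.toString()).toString();
        output.collect(value, new Text(tree));
      }
    }

Then in the job driver, after copying the model into HDFS (e.g. with
bin/hadoop dfs -put), register it with the cache so it gets shipped to
every task node's local disk:

    JobConf conf = new JobConf(ParseMapper.class);
    conf.setMapperClass(ParseMapper.class);
    // pulls /user/root/englishPCFG.ser.gz out of HDFS onto each node
    DistributedCache.addCacheFile(
        new URI("/user/root/englishPCFG.ser.gz"), conf);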

On Sat, Apr 18, 2009 at 5:18 AM, hari939 <hari...@gmail.com> wrote:
>
> My project, parsing material for a semantic search engine, requires
> me to use the Stanford NLP parser
> (http://nlp.stanford.edu/software/lex-parser.shtml) on a Hadoop cluster.
>
> To use the Stanford NLP parser, one must create a lexical parser object
> using an englishPCFG.ser.gz file as a constructor parameter.
> I have tried loading the file onto the Hadoop DFS in the /user/root/ folder
> and have also tried packing the file along with the jar of the Java program.
>
> I am new to the Hadoop platform and am not very familiar with some of
> its salient features.
>
> Looking forward to any form of help.
