By 'ClassName', which class are you actually referring to?
The class in which the LexicalizedParser is invoked?

In my code, the class that implements the parser is named 'parse',
and this is the code I used:
  lp = new LexicalizedParser(new ObjectInputStream(
      new GZIPInputStream(parse.class.getResourceAsStream("/englishPCFG.ser.gz"))));

The program runs to completion and the map-reduce job is declared
successful every time, even if the code is changed to

  lp = new LexicalizedParser(new ObjectInputStream(
      new GZIPInputStream(parse.class.getResourceAsStream("/englishPCF_G.ser.gz"))));

(note the deliberately misspelled filename). This suggests that
getResourceAsStream does not throw an exception when the file is missing,
I guess.
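
For what it's worth, Class.getResourceAsStream returns null for a missing
resource rather than throwing, so the failure would only surface later
(probably as a NullPointerException from GZIPInputStream), if at all.
A rough fail-fast sketch, reusing the same class and resource name as
above (imports: java.io.FileNotFoundException, java.io.InputStream,
java.util.zip.GZIPInputStream):

  InputStream in = parse.class.getResourceAsStream("/englishPCFG.ser.gz");
  if (in == null) {
      // getResourceAsStream signals a missing resource with null, not an exception
      throw new FileNotFoundException("/englishPCFG.ser.gz not found on classpath");
  }
  lp = new LexicalizedParser(new ObjectInputStream(new GZIPInputStream(in)));

With a check like that, a missing or misnamed file should fail loudly
instead of the job reporting success (assuming the exception isn't caught
and swallowed somewhere upstream).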

Any ideas?


Kevin Peterson wrote:
> 
> On Sat, Apr 18, 2009 at 5:18 AM, hari939 wrote:
> 
>>
>> My project of parsing through material for a semantic search engine
>> requires me to use the Stanford NLP parser
>> (http://nlp.stanford.edu/software/lex-parser.shtml) on a Hadoop cluster.
>>
>> To use the Stanford NLP parser, one must create a LexicalizedParser
>> object using an englishPCFG.ser.gz file as a constructor parameter.
>> I have tried loading the file onto the Hadoop DFS in the /user/root/
>> folder and have also tried packing the file along with the jar of the
>> Java program.
> 
> 
> Use getResourceAsStream to read it from the jar.
> 
> Use the ObjectInputStream constructor.
> 
> That is:
>
>   new LexicalizedParser(new ObjectInputStream(
>       new GZIPInputStream(ClassName.class.getResourceAsStream("/englishPCFG.ser.gz"))));
> 
> I'm interested to know if you have found any other open-source parsers
> written in Java, or at least ones with Java bindings.
> 
> 
