Re: How to work with Coreference resolutions

2012-06-13 Thread Jörn Kottmann

Hello,

the main method is actually a good example of how to use
it. So have a look at it.

Let me know if there are further questions. We still need to update that
part of the documentation. There should be a proper sample and explanation.

Hope that helps,
Jörn

On 06/13/2012 06:42 PM, Carlos Scheidecker wrote:

The opennlp.tools.lang.english.TreebankLinker has a main method to perform
coreference resolution.

One can call that from the command line.

I would like to know how to use that with the Java API. Can anyone provide
an example?

Thanks,

Carlos.

I've also found this explanation here:
http://mail-archives.apache.org/mod_mbox/opennlp-users/201112.mbox/%3c4ed76af3.8020...@gmail.com%3E

I now got it running with 1.5.
You need to do the following.

Use our Parser to parse an input article.

Use the TreebankNameFinder to add names to your parsed article (it's now
in the trunk, see OPENNLP-407).
I used this command:
java -cp ... TreebankNameFinder -parse ner/date.bin ner/location.bin
ner/money.bin ner/organization.bin ner/percentage.bin ner/person.bin
ner/time.bin

The names of the models do matter. When you get the models from the
website, rename them as I did.

Now you need to run the TreebankLinker. It just needs the model
directory, and you need to set WNSEARCHDIR to your WordNet directory,
e.g. like this: -DWNSEARCHDIR=wordnet/dict

Now the TreebankLinker is ready to link mentions together.

Let us know if you have issues getting this running.

Hope this helps,
Jörn





Re: How to work with Coreference resolutions

2012-06-13 Thread Carlos Scheidecker
Thanks. So for now we can only use the models from 1.4. I saw that a
training class was added recently. How do you use that?
Thanks,
Carlos.


Re: How to work with Coreference resolutions

2012-06-13 Thread Carlos Scheidecker
Jörn,

I would volunteer to write the documentation piece and samples once I
understand how it works. I have seen this script in a book and I am now
in the process of understanding it.

#!/bin/sh
#
# Usage: ./opennlp-coreference.sh < input.txt > coref.txt

OPENNLP_HOME=~gwilcock/Tools/opennlp-tools-1.3.0
export OPENNLP_HOME

WORDNET_HOME=~gwilcock/Tools/wordnet-2.0
export WORDNET_HOME

CLASSPATH=.:\
$OPENNLP_HOME/output/opennlp-tools-1.3.0.jar:\
$OPENNLP_HOME/lib/maxent-2.4.0.jar:\
$OPENNLP_HOME/lib/trove.jar:\
$OPENNLP_HOME/lib/jwnl-1.3.3.jar
export CLASSPATH

java opennlp.tools.lang.english.SentenceDetector \
  $OPENNLP_HOME/models/english/sentdetect/EnglishSD.bin.gz |
java opennlp.tools.lang.english.Tokenizer \
  $OPENNLP_HOME/models/english/tokenize/EnglishTok.bin.gz |
java -Xmx1024m opennlp.tools.lang.english.TreebankParser -d \
  $OPENNLP_HOME/models/english/parser |
java -Xmx1024m opennlp.tools.lang.english.NameFinder -parse \
  $OPENNLP_HOME/models/english/namefind/*.bin.gz |
java -Xmx1024m -DWNSEARCHDIR=$WORDNET_HOME/dict -Duser.language=en \
  opennlp.tools.lang.english.TreebankLinker \
  $OPENNLP_HOME/models/english/coref




Re: How to work with Coreference resolutions

2012-06-13 Thread Jörn Kottmann

On 06/13/2012 07:07 PM, Carlos Scheidecker wrote:

Thanks. So for now we can only use the models from 1.4. I saw that a
training class was added recently. How do you use that?


That's still work in progress; on which data do you want to train?

You need to produce data in a certain format; there should be a sample
in the test folder.

It's basically Penn Treebank style plus some nodes to label the mentions
in the tree.

The parse trees of a document are grouped and sent document-wise
to the trainer via a stream. After this is done, a new model will be trained.

The OpenNLP coreferencer currently works only on noun phrases; other mentions
like verbs will not be resolved (in case you want to train on OntoNotes).

Jörn




Re: How to work with Coreference resolutions

2012-06-13 Thread Carlos Scheidecker
Jörn,

I just want to know how it works for now. I've been following the one from
StanfordNLP as well.

Basically, I first want to know whether I just pass raw text to it or whether
I have to tag it first. It looks like I need to do POS tagging first.

I want to be able to pass in a text and get the references back as object
lists from the API, so that I can fetch the relations.

I still need to take some time here and read more of the source code, unless
you have some pointers.

Thanks,

Carlos.





Re: How to work with Coreference resolutions

2012-06-14 Thread Jörn Kottmann

Hello,

the input for the coreference component needs to be preprocessed
with the sentence detector, tokenizer, parser and name finders.

You can do this via the API, and our documentation provides sample code for
each of these steps.

The only tricky part is to get the named entities into the parse tree.
Here is a sample:

Parse parse; // returned from the parser
Span[] personEntities; // returned from the person name finder

Parse.addNames("person", personEntities, parse.getTagNodes());

After this the person names are inserted into the parse tree; you need
to repeat this step for every entity type you would like to reference.
The "person" tags are currently hard-coded; you can find a list in
TreebankNameFinder.NAME_TYPES (I believe that's a trunk-only class).
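
To tie the steps together, here is a minimal sketch of the whole preprocessing
via the 1.5 Java API, up to the point where the named entities are inserted
into the parse tree. The class name, sample text and model paths below are
only placeholders (the file names are the usual 1.5 model downloads), and
ParserTool from the cmdline package is used just to keep the parsing call short:

import java.io.FileInputStream;

import opennlp.tools.cmdline.parser.ParserTool;
import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.parser.Parse;
import opennlp.tools.parser.Parser;
import opennlp.tools.parser.ParserFactory;
import opennlp.tools.parser.ParserModel;
import opennlp.tools.sentdetect.SentenceDetectorME;
import opennlp.tools.sentdetect.SentenceModel;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;
import opennlp.tools.util.Span;

public class CorefPreprocessingSketch {

  public static void main(String[] args) throws Exception {
    // load the preprocessing models (adjust the paths to your setup)
    SentenceDetectorME sentenceDetector =
        new SentenceDetectorME(new SentenceModel(new FileInputStream("en-sent.bin")));
    TokenizerME tokenizer =
        new TokenizerME(new TokenizerModel(new FileInputStream("en-token.bin")));
    Parser parser =
        ParserFactory.create(new ParserModel(new FileInputStream("en-parser-chunking.bin")));
    NameFinderME personFinder =
        new NameFinderME(new TokenNameFinderModel(new FileInputStream("en-ner-person.bin")));

    String article = "John Smith met Mary Jones in London . He thanked her .";

    for (String sentence : sentenceDetector.sentDetect(article)) {
      String[] tokens = tokenizer.tokenize(sentence);

      // ParserTool splits on whitespace, so pass it the already tokenized sentence
      Parse parse = ParserTool.parseLine(String.join(" ", tokens), parser, 1)[0];

      // run the name finder on the same tokens and insert the names into the tree
      Span[] personEntities = personFinder.find(tokens);
      Parse.addNames("person", personEntities, parse.getTagNodes());

      // repeat find(...) + addNames(...) for the other entity types here,
      // then hand the parse to the mention finder / linker as described below
    }
  }
}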

Before you start with the rest you should download all the coreferencer
models for 1.4 into one directory, similar to the structure on the server.

Now we are coming to the coreference resolution code:
Linker treebankLinker = new TreebankLinker("/home/joern/corefmodel/", 
LinkerMode.TEST);


This will create the linker for you.
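
One thing worth repeating from the earlier mail: the coreferencer finds
WordNet through the WNSEARCHDIR system property, so it has to be set before
the linker is created, either with -DWNSEARCHDIR=... on the command line or
(a sketch, assuming the property is read when the linker is constructed)
directly in code:

System.setProperty("WNSEARCHDIR", "wordnet/dict"); // same effect as -DWNSEARCHDIR=wordnet/dict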

First all the mentions need to be recognized and afterward they are 
linked together.

For every sentence you do this:

Parse p = ...; // contains a parse of a sentence with names
Mention[] extents = treebankLinker.getMentionFinder().getMentions(new DefaultParse(p, sentenceNumber));

for (int ei = 0, en = extents.length; ei < en; ei++) {
  if (extents[ei].getParse() == null) {
    Parse snp = new Parse(p.getText(), extents[ei].getSpan(), "NML", 1.0, 0);
    p.insert(snp);
    extents[ei].setParse(new DefaultParse(snp, sentenceNumber));
  }
}
sentenceNumber++;

The result is the mentions for that sentence. All these mention objects
should be copied into a single list, e.g. via Collections.addAll(document, extents)
(where document is a List<Mention>).

Now the mentions of one document can be linked together:
DiscourseEntity[] entities = 
treebankLinker.getEntities(document.toArray(new Mention[document.size()]));


The entities array now contains the various detected and linked entities;
usually you want to filter out entities which have just a single mention.
A DiscourseEntity groups mentions together; a mention does not have to be
a named entity, other noun phrases are valid mentions as well.
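
As a rough sketch of that filtering step, continuing the snippet above and
assuming (as in the 1.4/1.5 coref code) that DiscourseEntity offers
getNumMentions() and getMentions() returning an Iterator of MentionContext:

// uses java.util.Iterator, opennlp.tools.coref.DiscourseEntity and
// opennlp.tools.coref.mention.MentionContext
for (int i = 0; i < entities.length; i++) {
  DiscourseEntity entity = entities[i];

  if (entity.getNumMentions() < 2) {
    continue; // skip entities with only a single mention
  }

  System.out.print("Entity " + i + ":");
  for (Iterator<MentionContext> mentions = entity.getMentions(); mentions.hasNext(); ) {
    System.out.print(" [" + mentions.next() + "]");
  }
  System.out.println();
}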

Hope that helps,
Jörn





Re: How to work with Coreference resolutions

2012-06-14 Thread Carlos Scheidecker
Jörn,

Great, that helps quite a lot! It is similar to the main method of that
class, but your explanations and the trick of integrating the NERs are the
real silver bullet of your post here. Thanks a bunch.

I will play with that at night.

What I want is to define a few IE algorithms from that.

Are there any references on Information Extraction using OpenNLP that you
would recommend as well?

Thanks,

Carlos.



Re: How to work with Coreference resolutions

2012-06-21 Thread Jörn Kottmann

On 06/14/2012 08:59 PM, Carlos Scheidecker wrote:

Are there any references on Information Extraction using OpenNLP that you
would recommend as well?


OpenNLP provides some base functionality like tokenization,
sentence detection, chunking, parsing, and NER.
At least some of these are necessary preprocessing steps for IE.
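
For example, a small preprocessing sketch along those lines (the model file
names are the usual 1.5 downloads and just placeholders here; the classes come
from opennlp.tools.tokenize, opennlp.tools.postag and opennlp.tools.chunker):

// tokenize, POS-tag and chunk a sentence as a typical first step for IE
TokenizerME tokenizer =
    new TokenizerME(new TokenizerModel(new FileInputStream("en-token.bin")));
POSTaggerME tagger =
    new POSTaggerME(new POSModel(new FileInputStream("en-pos-maxent.bin")));
ChunkerME chunker =
    new ChunkerME(new ChunkerModel(new FileInputStream("en-chunker.bin")));

String[] tokens = tokenizer.tokenize("Pierre Vinken will join the board as a director .");
String[] tags = tagger.tag(tokens);
String[] chunks = chunker.chunk(tokens, tags); // one B-NP/I-NP/B-VP/... label per token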

It would be nice to have IE-related components in OpenNLP.

Jörn