GitHub user kottmann opened a pull request:
https://github.com/apache/opennlp/pull/238
Revert merging of sentiment work, no consent to merge it
Github user kottmann closed the pull request at:
https://github.com/apache/opennlp/pull/238
Hi Joern,
I’m confused. Why did you revert my commit?
Every one of those checkpoints you put on the PR was checked.
We have been discussing this for months, you have seen the
code for months, Ana and I have worked diligently on the code
in plain view of everyone.
Please explain.
Chris
On
Hi everyone,
I spoke with Joern in Slack. Some of his concerns are:
1. This was done with a merge commit, and apparently they squash and rebase.
[It would be helpful to see a pointer to documentation on this; thus far I
haven’t found any.]
2. Apparently we literally need to ask others for +1 vot
Hi All,
First, let me take a share of blame for the comment Chris mentioned. I
believe I said something like the pull request was X revisions behind and Y
revisions ahead. It was not meant to be rude; it was meant to say that it is hard
to review code when it is so different from the current code
Hello Chris,
could you please point me to files I can use to train the sentiment
component? I am currently looking again through the code and would
like to train it myself.
Jörn
On Tue, Jun 27, 2017 at 4:59 PM, Dan Russ wrote:
> Hi All,
>First, let me take a share of blame for the comment C
Absolutely, you can find it here:
opennlp-tools/src/test/resources/opennlp/tools/sentiment/sample_train_categ
(for categorical/multi-class)
opennlp-tools/src/test/resources/opennlp/tools/sentiment/sample_train_categ2
(for categorical/multi-class)
We can also do similar files where instead of mu
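For readers unfamiliar with the categorical format: the doccat-style convention discussed elsewhere in this thread puts one labeled document per line, with the category as the first whitespace-separated token. The lines below are made-up illustrations in that style, not the actual contents of the sample files above:

```
1 This movie was a delight from start to finish
0 The plot made no sense and the pacing dragged
```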
Which data sets did you use to evaluate this?
I was looking for a bit more than a sample file to train it.
I noticed that you checked in Stanford and Netflix models.
The Stanford data set is probably this one:
http://ai.stanford.edu/~amaas/data/sentiment/
Do you have a link for the netflix data?
Hey Joern,
Sure, you can find the model data links here, along with our evaluation of them.
http://irds.usc.edu/SentimentAnalysisParser/datasets.html
There are other evaluations here:
http://irds.usc.edu/SentimentAnalysisParser/models.html
The HT provider review I cannot contribute at this tim
Hi Chris,
I have been interested in the new sentiment component for a while,
although truth be told, I did not follow it that closely. Today I
looked at it and tested it with some of the corpora you mentioned.
In order to do that, I checked out master to work with from this commit
onwards
ht
Hi Rodrigo,
This is very useful feedback that I wish we would have had a long time ago.
I will look into it and see if I can reproduce the CLI error. I did a full
build and mvn install (which I thought would run tests?) before committing,
and as I posted in JIRA the tests passed for me. So I wil
Hi Chris,
On Thu, Jun 29, 2017 at 7:10 PM, Chris Mattmann wrote:
> Hi Rodrigo,
>
> This is very useful feedback that I wish we would have had a long time ago.
>
> I will look into it and see if I can reproduce the CLI error. I did a full
> build and mvn
> install (which I thought would run tests?
Thanks for your feedback Rodrigo!
Cheers,
Chris
On 6/29/17, 10:14 AM, "Rodrigo Agerri" wrote:
Hi Chris,
On Thu, Jun 29, 2017 at 7:10 PM, Chris Mattmann wrote:
> Hi Rodrigo,
>
> This is very useful feedback that I wish we would have had a long time ago.
>
>
On Thu, Jun 29, 2017 at 1:10 PM, Chris Mattmann wrote:
> Hi Rodrigo,
>
> This is very useful feedback that I wish we would have had a long time ago.
>
> I will look into it and see if I can reproduce the CLI error. I did a full
> build and mvn
> install (which I thought would run tests?) before co
For 2, I would like to suggest that we implement doccat format support
to train on that data.
For 3, it would be best to think about how we want to test the doccat
component; today we don't have any tests which use lots of data to
evaluate it.
Probably the sentiment data could solve this for us and a
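The doccat format support suggested above can be sketched. Assuming the format from the OpenNLP manual (one document per line, the category as the first whitespace-separated token), a minimal stdlib-only parser might look like the following. The class and method names here are mine for illustration, not OpenNLP's:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of parsing doccat-style training data:
// one document per line, first whitespace-delimited token is the category.
public class DoccatLineParser {

    public static List<Map.Entry<String, String>> parse(List<String> lines) {
        List<Map.Entry<String, String>> samples = new ArrayList<>();
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty()) {
                continue; // skip blank lines
            }
            int split = trimmed.indexOf(' ');
            // A line with no space is a bare category with empty text.
            String category = split < 0 ? trimmed : trimmed.substring(0, split);
            String text = split < 0 ? "" : trimmed.substring(split + 1).trim();
            samples.add(new SimpleEntry<>(category, text));
        }
        return samples;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> samples = parse(List.of(
            "1 I loved this movie",
            "0 terrible acting and plot"
        ));
        for (Map.Entry<String, String> s : samples) {
            System.out.println(s.getKey() + " -> " + s.getValue());
        }
        // prints:
        // 1 -> I loved this movie
        // 0 -> terrible acting and plot
    }
}
```

In the real component one would wrap such samples in OpenNLP's DocumentSample stream types for training; this sketch only shows the line format.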
One more thing: if we check in models for unit tests, we need to
be able to train them again. We might not support those models forever,
and then it would be bad if we can't use the tests anymore or need to
repair them by hand.
Jörn
On Thu, Jun 29, 2017 at 7:18 PM, Joern Kottmann wrote:
> For
Thanks, I will investigate the below, Joern. Can someone send me some
pointers to the Doc Cat API? Thanks.
On 6/29/17, 10:18 AM, "Joern Kottmann" wrote:
For 2. I would like to suggest that we implement doccat format support
to train on that data.
3. it w
Absolutely reproducibility is important and I agree.
Thanks,
Chris
On 6/29/17, 10:38 AM, "Joern Kottmann" wrote:
One more thing, in case we check in models for unit tests we need to
be able to train them again, we might not support those models forever
and then it would be bad i
http://opennlp.apache.org/docs/1.8.0/manual/opennlp.html#tools.doccat
On Thu, Jun 29, 2017 at 1:51 PM, Chris Mattmann wrote:
> Thanks I will investigate the below thanks Joern. Can someone send me some
> pointers
> to the Doc Cat API that I can find? Thanks.
>
>
>
>
> On 6/29/17, 10:18 AM, "Joer
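The manual linked above also documents a command line trainer for the doccat component; an invocation along these lines (file names here are placeholders) trains a model from doccat-format data:

```shell
# Train a document categorizer model; en-doccat.train holds
# one labeled document per line, category first.
opennlp DoccatTrainer -model en-doccat.bin -lang en -data en-doccat.train -encoding UTF-8
```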
Thanks bro
On 6/29/17, 10:52 AM, "Suneel Marthi" wrote:
http://opennlp.apache.org/docs/1.8.0/manual/opennlp.html#tools.doccat
On Thu, Jun 29, 2017 at 1:51 PM, Chris Mattmann wrote:
> Thanks I will investigate the below thanks Joern. Can someone send me some
> pointer
Hi Daniel,
Thanks for your message. In actuality I was not really calling out
your specific comment (and FWIW, we addressed it by getting the PR
up to the same revision and not having 1000s of files modified, but
just ~32 or so). So I wasn’t looking for any flattery there per se.
Please take the