On 1/7/12 2:22 PM, Grant Ingersoll wrote:
Being able to take advantage of other classifiers seems like it would be a
really nice thing to be able to do. I'd love to put OpenNLP over Mahout or
others.
Besides, for testing purposes, if you could plug in the existing capability
versus your new rewrite (in Scala), then you could easily compare the two. I
can't imagine the abstraction layer is more than a few interfaces or abstract
classes plus a bit of configuration/injection/fill in the blank that allows one
to specify the implementation.
Yes, we need pluggable classifiers and support for extensive
modification/extension of our existing components. You are welcome to help us
with that.
One way of implementing this is to specify an (optional) factory class during
training which is used to create the model (classifier). A second type of
factory class could be specified to modify a component.
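A rough sketch of what such a factory contract could look like (the interface
and the names used here are only illustrative, not existing OpenNLP API):

import java.util.Map;

// Hypothetical contract for a pluggable classifier factory; ClassifierFactory,
// Classifier and train(...) are placeholder names for this sketch only.
public interface ClassifierFactory {

  // Placeholder for whatever model abstraction the component works with.
  interface Classifier {
    String classify(String[] context);
  }

  // Called during training; the returned Classifier would be serialized into
  // the model package together with the factory's class name.
  Classifier train(Iterable<String[]> trainingEvents, Map<String, String> trainParams);
}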
These factory class names will be stored in our zip model package and can then
be used to instantiate the extensions which are necessary to run the component.
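At load time the component could then read the stored class name and create the
factory reflectively; a minimal sketch, assuming the hypothetical
ClassifierFactory interface from above (again an illustration, not the actual
implementation):

// Reads the factory class name (as stored in the model package) and
// instantiates it via reflection, so the extension only needs to be on
// the class path.
public final class FactoryLoader {

  public static ClassifierFactory load(String factoryClassName) {
    try {
      Class<?> clazz = Class.forName(factoryClassName);
      return (ClassifierFactory) clazz.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      throw new IllegalArgumentException(
          "Could not create factory: " + factoryClassName, e);
    }
  }
}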
The disadvantage of this approach is that it might not work well with OSGi.
A big advantage is that OpenNLP itself will take care of configuring
everything, and the code needed to run an OpenNLP component is identical even
if the model uses "custom" extensions; they only need to be on the class path.
Jörn