Hi,

I think there is no common default implementation for this; everyone has
their own. For the normal use cases this can be implemented easily.
However, it gets complicated if you need fancy features, e.g., comparing
different levels of complex feature values.


Normally, you have something like the following (a rough sketch is given
after the list):

1. CasReader for providing the expected gold annotations

2. AnnotationCopier for moving the annotations to a gold view

3. Annotators of your pipeline for creating the annotations

4. AnnotationComparator for comparing the new annotation with the
annotations of the gold view

5. EvaluationWriter for aggregating and storing the evaluation result
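
Just as an illustration, here is a minimal uimaFIT sketch of steps 1-5 for
gold annotations stored as XMI files. The folder, the type name
"my.types.Diagnosis", and the descriptor path "desc/MyPipeline.xml" are
placeholders, and it assumes your type system is discoverable by uimaFIT on
the classpath; it is not how our implementation looks, just one possible
shape:

import java.io.File;
import java.io.FileInputStream;
import java.util.HashSet;
import java.util.Set;

import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.cas.CAS;
import org.apache.uima.cas.Type;
import org.apache.uima.cas.impl.XmiCasDeserializer;
import org.apache.uima.cas.text.AnnotationFS;
import org.apache.uima.fit.factory.AnalysisEngineFactory;
import org.apache.uima.fit.factory.JCasFactory;
import org.apache.uima.fit.pipeline.SimplePipeline;
import org.apache.uima.fit.util.CasUtil;
import org.apache.uima.jcas.JCas;
import org.apache.uima.util.CasCopier;

public class SimpleEvaluation {

  public static double evaluateF1(File goldDir, String typeName,
          AnalysisEngineDescription pipeline) throws Exception {
    int tp = 0, fp = 0, fn = 0;
    for (File xmi : goldDir.listFiles((dir, name) -> name.endsWith(".xmi"))) {
      // 1. CasReader: load the expected gold annotations from the XMI file
      JCas gold = JCasFactory.createJCas();
      try (FileInputStream in = new FileInputStream(xmi)) {
        XmiCasDeserializer.deserialize(in, gold.getCas(), true);
      }

      // 2. AnnotationCopier: copy the gold annotations into a "gold" view of
      //    a fresh CAS that only shares the document text with the gold CAS
      JCas system = JCasFactory.createJCas();
      system.setDocumentText(gold.getDocumentText());
      CAS goldView = system.getCas().createView("gold");
      goldView.setDocumentText(gold.getDocumentText());
      CasCopier copier = new CasCopier(gold.getCas(), goldView);
      for (AnnotationFS a : CasUtil.select(gold.getCas(),
              gold.getTypeSystem().getType(typeName))) {
        goldView.addFsToIndexes(copier.copyFs(a));
      }

      // 3. Annotators: run the pipeline under test on the default view
      SimplePipeline.runPipeline(system, pipeline);

      // 4. AnnotationComparator: exact begin/end match against the gold view
      Set<String> goldSpans = spans(goldView, typeName);
      Set<String> systemSpans = spans(system.getCas(), typeName);
      for (String s : systemSpans) {
        if (goldSpans.contains(s)) tp++; else fp++;
      }
      for (String s : goldSpans) {
        if (!systemSpans.contains(s)) fn++;
      }
    }
    // 5. EvaluationWriter: aggregate the counts (here just P/R/F1 on stdout)
    double p = tp == 0 ? 0 : tp / (double) (tp + fp);
    double r = tp == 0 ? 0 : tp / (double) (tp + fn);
    double f1 = (p + r) == 0 ? 0 : 2 * p * r / (p + r);
    System.out.printf("P=%.3f R=%.3f F1=%.3f%n", p, r, f1);
    return f1;
  }

  private static Set<String> spans(CAS cas, String typeName) {
    Set<String> result = new HashSet<>();
    Type type = cas.getTypeSystem().getType(typeName);
    for (AnnotationFS a : CasUtil.select(cas, type)) {
      result.add(a.getBegin() + "-" + a.getEnd());
    }
    return result;
  }

  public static void main(String[] args) throws Exception {
    // placeholder descriptor path and gold folder
    AnalysisEngineDescription pipeline = AnalysisEngineFactory
            .createEngineDescriptionFromPath("desc/MyPipeline.xml");
    evaluateF1(new File("src/test/resources/gold"), "my.types.Diagnosis", pipeline);
  }
}

Comparing only begin/end offsets is the simplest case; as soon as feature
values or partial matches matter, the comparator is where most of the work
goes.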


I personally do not use the Ruta evaluation anymore since it does not
provide enough features, and our evaluations are integrated into our Maven
build as integration tests, so there is no need for the Ruta GUI/Annotation
Testing View.
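
A class like the SimpleEvaluation sketch above could be wired into the
Maven build as an integration test, e.g., run by the failsafe plugin, which
picks up classes ending in IT by default. The class name and the threshold
here are just placeholders:

import static org.junit.Assert.assertTrue;

import java.io.File;

import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.fit.factory.AnalysisEngineFactory;
import org.junit.Test;

public class PipelineEvaluationIT {

  @Test
  public void f1MustNotDropBelowThreshold() throws Exception {
    AnalysisEngineDescription pipeline = AnalysisEngineFactory
            .createEngineDescriptionFromPath("desc/MyPipeline.xml");
    double f1 = SimpleEvaluation.evaluateF1(
            new File("src/test/resources/gold"), "my.types.Diagnosis", pipeline);
    // fail the build if the pipeline regresses below the chosen threshold
    assertTrue("F1 regression: " + f1, f1 >= 0.85);
  }
}

That way a drop in precision/recall fails the build instead of going
unnoticed.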


Best,


Peter


Am 17.03.2018 um 20:31 schrieb Nicolas Paris:
> Hello,
>
> The RUTA workbench Annotation Test is a great tool to evaluate the
> performance of a RUTA script based on a gold standard.
>
> Is there any existing tool to measure the performance based on
> input/output xmi folders?
>
> I guess it is feasible to hack the RUTA workbench by running uimafit
> pipelines from ruta, but since my pipeline has many annotator engines,
> this looks complicated to do.
>
> Thanks,

-- 
Peter Klügl
R&D Text Mining/Machine Learning

Averbis GmbH
Tennenbacher Str. 11
79106 Freiburg
Germany

Fon: +49 761 708 394 0
Fax: +49 761 708 394 10
Email: peter.klu...@averbis.com
Web: https://averbis.com

Headquarters: Freiburg im Breisgau
Register Court: Amtsgericht Freiburg im Breisgau, HRB 701080
Managing Directors: Dr. med. Philipp Daumke, Dr. Kornél Markó
