Hi Kameron, 

I'm not sure I understand your question correctly. I'm writing a named 
entity recognition system for text excerpts from the social/public domain: 
blogs, news, etc. I'm testing different approaches with rules and ML, and I 
need to evaluate annotation accuracy (in terms of F-score against a gold 
corpus). My plan is to use the MASC corpus or to build a custom one, but the 
first task is to find the right tools for evaluation.
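
To make the scoring concrete: what I have in mind is exact span-plus-label 
matching, roughly like the untested sketch below. The Span record and the 
class name are just illustrations, not an existing UIMA API.

// Rough, untested sketch: exact-match entity scoring against a gold set.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ExactMatchScore {

    // One entity mention: character offsets plus entity label (illustrative).
    record Span(int begin, int end, String label) {}

    // F1 = 2PR / (P + R), with P = TP/|system| and R = TP/|gold|.
    static double f1(List<Span> gold, List<Span> system) {
        Set<Span> goldSet = new HashSet<>(gold);
        long tp = system.stream().filter(goldSet::contains).count();
        double p = system.isEmpty() ? 0.0 : (double) tp / system.size();
        double r = gold.isEmpty() ? 0.0 : (double) tp / gold.size();
        return (p + r == 0.0) ? 0.0 : 2 * p * r / (p + r);
    }
}

A lenient (overlap-based) variant would only change the matching step, which 
is exactly why I'd rather reuse an existing, tested tool than reimplement 
all these counting variants myself.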

Regards,
Yasen

P.S. I come from the GATE world and I believe UIMA will give me better 
performance and more options for distribution and parallel processing.


________________________________
 From: Kameron Cole <kameronc...@us.ibm.com>
To: user@uima.apache.org 
Cc: Peggy Zagelow <a...@us.ibm.com>; William C Rollow <wcrol...@us.ibm.com> 
Sent: Monday, November 5, 2012 6:22 PM
Subject: Re: f-score evaluation tool?
 

What can you tell me about your f-score annotations? I'm assuming, of 
course, that you are writing annotators to calculate f-scores from medical 
texts.

Best Regards,



________________________________
  
KAMERON ARTHUR COLE 
Technical Solution Architect 
IBM Content and Predictive Analytics for Healthcare 
IBM Global Business Services Center of Excellence 
IBM US Federal 
Miami Beach, FL, United States 
E-mail: kameronc...@us.ibm.com 
Work (cell): +1-305-389-8512 
Fax: +1-845-491-4052 
Twitter: @kameroncoleibm 
My Blog: Enterprise Linguistics 
Buy My Book 

Yasen Kiprov wrote on 11/05/2012 09:27:49 AM:
Hello,

I'm trying to set up a test environment where I can compare collections of 
annotated documents in terms of precision, recall, and F-score. Is there any 
easy-to-use tool for comparing analysed documents in the available UIMA XML 
formats?
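
To illustrate what I mean by "comparing", here is a rough, untested sketch 
over two XMI files; the type-system path, the file names, and the 
org.example.NamedEntity type name are all placeholders for whatever the real 
pipeline uses.

// Rough, untested sketch: load gold and system XMI files and count exact
// span matches for one annotation type.
import java.io.FileInputStream;
import java.util.HashSet;
import java.util.Set;

import org.apache.uima.UIMAFramework;
import org.apache.uima.cas.CAS;
import org.apache.uima.cas.Type;
import org.apache.uima.cas.text.AnnotationFS;
import org.apache.uima.resource.metadata.TypeSystemDescription;
import org.apache.uima.util.CasCreationUtils;
import org.apache.uima.util.XMLInputSource;
import org.apache.uima.util.XmiCasDeserializer;

public class XmiCompare {

    // Deserialize one XMI file into a fresh CAS built from a type system.
    static CAS loadCas(String tsdPath, String xmiPath) throws Exception {
        TypeSystemDescription tsd = UIMAFramework.getXMLParser()
                .parseTypeSystemDescription(new XMLInputSource(tsdPath));
        CAS cas = CasCreationUtils.createCas(tsd, null, null);
        try (FileInputStream in = new FileInputStream(xmiPath)) {
            XmiCasDeserializer.deserialize(in, cas);
        }
        return cas;
    }

    // Collect "begin-end" keys for all annotations of the given type.
    static Set<String> spans(CAS cas, String typeName) {
        Type type = cas.getTypeSystem().getType(typeName);
        Set<String> result = new HashSet<>();
        for (AnnotationFS a : cas.getAnnotationIndex(type)) {
            result.add(a.getBegin() + "-" + a.getEnd());
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        CAS gold = loadCas("TypeSystem.xml", "gold.xmi");
        CAS sys = loadCas("TypeSystem.xml", "system.xmi");
        Set<String> g = spans(gold, "org.example.NamedEntity");
        Set<String> s = spans(sys, "org.example.NamedEntity");
        Set<String> tp = new HashSet<>(s);
        tp.retainAll(g);
        double p = s.isEmpty() ? 0.0 : (double) tp.size() / s.size();
        double r = g.isEmpty() ? 0.0 : (double) tp.size() / g.size();
        System.out.printf("P=%.3f R=%.3f%n", p, r);
    }
}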

I'm familiar with the GATE corpus evaluation tools, so a CAS consumer that 
outputs documents in the GATE XML format could also be a solution. Does 
anyone know of such an open-source tool? If nothing exists, I'd start from a 
skeleton like the one below.
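
This is a rough, untested skeleton only. The GATE standoff-XML element names 
(GateDocument, AnnotationSet, Annotation, StartNode/EndNode) are written 
from memory, and the sketch omits the TextWithNodes/Node bookkeeping that 
real GATE XML needs, so it would have to be checked against GATE's actual 
format before use.

// Rough, untested skeleton of a CAS consumer writing GATE-style XML.
// "out.xml" is a placeholder output path.
import java.io.PrintWriter;

import org.apache.uima.cas.CAS;
import org.apache.uima.cas.text.AnnotationFS;
import org.apache.uima.collection.CasConsumer_ImplBase;
import org.apache.uima.resource.ResourceProcessException;

public class GateXmlWriter extends CasConsumer_ImplBase {

    @Override
    public void processCas(CAS cas) throws ResourceProcessException {
        try (PrintWriter out = new PrintWriter("out.xml")) {
            out.println("<GateDocument>");
            out.println("<AnnotationSet>");
            int id = 0;
            // Emit one standoff element per annotation in the CAS.
            for (AnnotationFS a : cas.getAnnotationIndex()) {
                out.printf("<Annotation Id=\"%d\" Type=\"%s\" "
                        + "StartNode=\"%d\" EndNode=\"%d\"/>%n",
                        id++, a.getType().getShortName(),
                        a.getBegin(), a.getEnd());
            }
            out.println("</AnnotationSet>");
            out.println("</GateDocument>");
        } catch (Exception e) {
            throw new ResourceProcessException(e);
        }
    }
}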

Thank you and all the best,
Yasen
