Hi Tentri,

In IR, evaluation is primarily carried out with the traditional TREC-style
evaluation methodology (also referred to as the Cranfield paradigm).
The methodology requires three things: a document collection, a set of
information needs (called topics or queries), and a set of relevance
judgments (qrels), i.e., the right answers.

The qrels are the most labor-intensive part, since producing them requires
human assessors.
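
For reference, a qrels file is plain whitespace-separated text with one
judgment per line: topic id, an (unused) iteration field, document id, and a
relevance label. The ids below are made up:

301 0 FBIS3-10082 1
301 0 FBIS3-10169 0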

Once you have these three elements, you can use evaluation tools such as
http://trec.nist.gov/trec_eval/trec_eval_latest.tar.gz
to calculate effectiveness metrics (recall, precision, etc.).
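
trec_eval compares your system's ranked output (a "run" file) against the
qrels. A run file has one line per retrieved document: topic id, the literal
string Q0, document id, rank, score, and a run tag. Again with made-up ids:

301 Q0 FBIS3-10082 1 7.52 my_run
301 Q0 FBIS3-11345 2 6.90 my_run

Then

./trec_eval qrels.txt run.txt

prints MAP, precision at standard cutoffs, and other common measures.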

For an example, see:
http://www-personal.umich.edu/~kevynct/trec-web-2014/
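
Since you mentioned Python: below is a minimal sketch of how you might dump
Solr results into a run file for trec_eval. The core name "mycore", the unique
key field "id", and the run tag are placeholders; adjust them to your setup.

import json
from urllib.parse import quote
from urllib.request import urlopen

# Placeholder settings -- replace with your own core name and run tag.
SOLR_SELECT = "http://localhost:8983/solr/mycore/select"
RUN_TAG = "my_run"

def write_run_file(topics, out_path, rows=100):
    # topics: list of (topic_id, query_string) pairs.
    # Writes one line per retrieved document in TREC run format:
    #   topic_id Q0 doc_id rank score run_tag
    with open(out_path, "w") as out:
        for topic_id, query in topics:
            url = "%s?q=%s&fl=id,score&rows=%d&wt=json" % (
                SOLR_SELECT, quote(query), rows)
            data = json.loads(urlopen(url).read().decode("utf-8"))
            for rank, doc in enumerate(data["response"]["docs"], 1):
                out.write("%s Q0 %s %d %f %s\n" % (
                    topic_id, doc["id"], rank, doc["score"], RUN_TAG))

# One made-up topic; then evaluate with: ./trec_eval qrels.txt run.txt
write_run_file([("301", "international organized crime")], "run.txt")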
ahmet



On Wednesday, May 18, 2016 3:37 PM, Tentri Oktaviani 
<tentrioktavi...@gmail.com> wrote:
Hi solr users,

My final project in college is building a search engine. I'm using Solr to
access and retrieve data from an ontology, which will later be used as
corpora. I'm entirely new to all of these things (information retrieval,
ontologies, Python, and Solr).

There's a step in information retrieval where you evaluate the query results.
I'm planning to use precision, recall, and ROC scores for this. Is there any
way I can use a function in Solr to calculate precision, recall, and ROC?
Whether it's from the Solr interface or from the code behind it doesn't
matter.

Thank you in advance.

Regards,
Tentri
