Thank you very much, I hadn't seen this tool; I'll definitely try it.
It's clearly better to have such a purpose-built tool.



2018-05-10 18:36 GMT+02:00 Pat Ferrel <p...@occamsmachete.com>:

> You can if you want, but we have external tools for the UR that are much
> more flexible. The UR has tuning that can't really be covered by the built-in
> API: https://github.com/actionml/ur-analysis-tools. These tools compute MAP@k
> as well as a number of other metrics, and they compare different types of
> input data. They make their queries against a running UR.
>
>
> From: Marco Goldin <markomar...@gmail.com>
> Reply: user@predictionio.apache.org
> Date: May 10, 2018 at 7:52:39 AM
> To: user@predictionio.apache.org
> Subject: UR evaluation
>
> Hi all, I successfully trained a Universal Recommender, but I don't know
> how to evaluate the model.
>
> Is there a recommended way to do that?
> I saw that *predictionio-template-recommender* has
> an Evaluation.scala file which uses the *PrecisionAtK* class for its
> metrics.
> Should I use this template to implement a similar evaluation for the UR?
>
> thanks,
> Marco Goldin
> Horizons Unlimited s.r.l.
>
>

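For reference, the MAP@k metric mentioned above can be computed once you have, for each user, the ranked recommendations returned by a running UR and a held-out set of that user's actual interactions. Below is a minimal sketch in Scala; the function and input names are illustrative and are not part of ur-analysis-tools or the PredictionIO API.

    // Average precision at k for one user: compare the ranked recommendation
    // list with the held-out set of items the user actually interacted with.
    def averagePrecisionAtK(k: Int, ranked: Seq[String], heldOut: Set[String]): Double =
      if (heldOut.isEmpty) 0.0
      else {
        // Walk the top-k ranked items, accumulating (hits so far, sum of precisions at each hit).
        val (_, sumPrec) = ranked.take(k).zipWithIndex.foldLeft((0, 0.0)) {
          case ((hits, sum), (item, rank)) =>
            if (heldOut.contains(item)) (hits + 1, sum + (hits + 1).toDouble / (rank + 1))
            else (hits, sum)
        }
        sumPrec / math.min(k, heldOut.size)
      }

    // MAP@k: the mean of average precision at k over all users.
    // perUser pairs each user's ranked recommendations with that user's held-out items.
    def mapAtK(k: Int, perUser: Seq[(Seq[String], Set[String])]): Double =
      if (perUser.isEmpty) 0.0
      else perUser.map { case (ranked, heldOut) => averagePrecisionAtK(k, ranked, heldOut) }.sum / perUser.size

The normalization by min(k, heldOut.size) follows a common MAP@k convention so a user with fewer than k held-out items can still score 1.0; the exact convention used by ur-analysis-tools may differ.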