Hi Nicolas!

Thanks for the explanation. But now I have a few more questions :-)

1) How can I set the policy for what to do with the arriving instances?
2) I read it, but I cannot understand how xxx works. How can I test the
classification of instances without a classification model generated from the
training instances? Could you explain how this works, please?
3) If I merge the two datasets (training and test), how do I get the
performance metrics for just the test dataset?

Thanks,
Eduardo.

2016-08-01 12:22 GMT-03:00 Nicolas Kourtellis <[email protected]>:

> Hi Eduardo,
>
> As far as I understand, you can define a policy for what to do with the
> arriving instances.
> In the PrequentialEvaluator, for example, every instance is used first for
> testing and then for training.
> The evaluation results are output every -f instances (a parameter set on
> the command line).
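>
> For example, the interleaved test-then-train loop looks roughly like this
> (a plain Java sketch; the Instance and Learner interfaces here are
> hypothetical stand-ins, not SAMOA's actual API). Each instance is first
> used to test the model built from all earlier instances, and only then
> used to train it, so no separately pre-trained model is needed:
>
>     import java.util.List;
>
>     interface Instance { double[] features(); int label(); }
>
>     interface Learner {
>         int predict(Instance inst);  // classify with the current model
>         void train(Instance inst);   // update the model with one instance
>     }
>
>     class PrequentialSketch {
>         static void run(Learner learner, List<Instance> stream, int f) {
>             int seen = 0, correct = 0;
>             for (Instance inst : stream) {
>                 // Test first: predict before the model has seen this instance.
>                 if (learner.predict(inst) == inst.label()) correct++;
>                 // Then train on the same instance.
>                 learner.train(inst);
>                 seen++;
>                 // Report every f instances (what the -f option controls).
>                 if (seen % f == 0) {
>                     System.out.printf("after %d instances: accuracy %.3f%n",
>                             seen, (double) correct / seen);
>                 }
>             }
>         }
>     }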
>
> Regarding the different datasets, I assume you can join two datasets (one
> for training, then one for testing), as long as they have the same
> attribute structure. Then you could have the model output the test
> performance after the test instances are processed.
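>
> Adding to the sketch above, that policy would look roughly like this
> holdout-style method (same hypothetical Learner/Instance interfaces, not a
> built-in SAMOA task): train only on the first dataset, then compute the
> metric only over the appended test instances:
>
>     static double holdout(Learner learner,
>                           List<Instance> train, List<Instance> test) {
>         for (Instance inst : train) {
>             learner.train(inst);               // training instances only
>         }
>         int correct = 0;
>         for (Instance inst : test) {
>             // Evaluate without further training the model.
>             if (learner.predict(inst) == inst.label()) correct++;
>         }
>         return (double) correct / test.size(); // metric over test set only
>     }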
>
> Hope this helps,
>
> Nicolas
>
>
>
> On Mon, Aug 1, 2016 at 12:53 AM, Eduardo Costa <[email protected]>
> wrote:
>
> > Dear all,
> > Can I use two datasets, one for training and one for testing, with SAMOA,
> > as I do in Weka?
> > Another question: how does SAMOA know which instances are for testing and
> > which are for training?
> >
> > Thanks,
> > Eduardo.
> >
>
>
>
> --
> Nicolas Kourtellis
>
