We do cross-validation tests to see how well the model predicts actual 
behavior. As for finding the best data mix, cross-validation works with any 
engine tuning or data input. Typically that requires re-training between test 
runs, so make sure you use exactly the same training/test split each time. If 
you want to examine the usefulness of different events, you can compare event 
type 1 alone to event type 1 + event type 2, and so on. This is made easier by 
inputting all events and then using a test trick in the UR to mask out any 
combination of events during the cross-validation, so the single existing 
model is used and there is no need to re-train for this type of analysis. We 
have an unsupported script that does this, but I warn you that you are on your 
own using it.

https://github.com/actionml/analysis-tools 
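
To make the cross-validation part concrete, here is a rough sketch of what an 
offline MAP@k check against a deployed engine might look like. This is not the 
analysis-tools script; the localhost:8000 endpoint, the event field names, and 
the "purchase" primary event are assumptions for illustration only. The idea is 
to train on one slice of your data, hold out a later slice, and score how well 
the recommendations match the held-out primary events.

# Illustrative MAP@k check for a deployed engine (not the analysis-tools script).
# Assumptions: engine serving queries at the default PredictionIO port 8000,
# held-out events as dicts with "entityId" (user), "targetEntityId" (item),
# and "event" (event type); "purchase" stands in for your primary event.

import requests
from collections import defaultdict

ENGINE_URL = "http://localhost:8000/queries.json"
K = 10

def recommend(user, num=K):
    """Ask the deployed engine for the top-num items for a user."""
    resp = requests.post(ENGINE_URL, json={"user": user, "num": num})
    resp.raise_for_status()
    # the UR returns an "itemScores" list of {"item": ..., "score": ...}
    return [r["item"] for r in resp.json().get("itemScores", [])]

def ap_at_k(recommended, relevant, k=K):
    """Average precision at k for one user."""
    hits, score = 0, 0.0
    for rank, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), k)

def map_at_k(held_out_events, primary_event="purchase", k=K):
    """MAP@k over all users with at least one held-out primary event."""
    relevant = defaultdict(set)
    for e in held_out_events:
        if e["event"] == primary_event:
            relevant[e["entityId"]].add(e["targetEntityId"])
    scores = [ap_at_k(recommend(u, k), rel, k) for u, rel in relevant.items() if rel]
    return sum(scores) / len(scores) if scores else 0.0

Running something like this against the same deployed model is the basic 
check; the unsupported script linked above adds the event-masking comparison 
so you can score different event combinations without re-training.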


On Sep 6, 2017, at 6:15 AM, Saarthak Chandra <chandra.saart...@gmail.com> wrote:

Hi,

With the Universal Recommender,

1. How can we validate the model after we train and deploy it?

2. How can we find an appropriate method of data mixing?

Thanks
-- 
Saarthak Chandra,
Masters in Computer Science,
Cornell University.

