Hello dear group.
Kenneth proposed to me two ways of creating an evaluation for the Complementary Purchase template.

1) Create the evaluator using the Recommendation template's evaluator as a reference. The main difference is that you don't need to filter by high rating to define your actual output.

---> For that variant there is already a WIP MR. I understand that I can't count on it being reviewed ASAP, so here is the second way Kenneth proposed to me:

2) Write your own evaluation logic separately, in another language. E.g. use pio to run batch prediction, dump the result to a file, and create another file which contains the items actually bought together. Then write a script that reads both files and generates evaluation metrics.

-----> For that variant - I will try to implement it in the meantime, but before I start I would like to ask you whether I understood correctly what Kenneth proposed to me with 2):

So:
I created a plan for variant 2 (after long research I decided to use an 80/20 train/test split with MAP@K).
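For reference, my understanding of the metric: for a single query with K ranked predictions,

    AP@K = (1 / min(m, K)) * sum_{k=1}^{K} P(k) * rel(k)

where m is the number of actual items, P(k) is the precision at cut-off k, and rel(k) is 1 if the item at rank k is among the actual items, 0 otherwise. MAP@K is then the mean of AP@K over all queries. Please correct me if that is not the definition you have in mind.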

1. Shuffle the prepared_json file with the training data.
2. Split the result of step 1 into two files: one with the first 80% of the data, another with the last 20%, and save both (see the sketch after step 4).
3. Clone the Complementary Purchase template again and call the folder MyComplimentaryPurchaseEval.
4. Build the engine and train the model with the 80% file.
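For steps 1 and 2, this is roughly how I would do the shuffle and split in PHP. It is only a minimal sketch: the file names are placeholders, and I am assuming the prepared data holds one event per line (the usual import format) - please correct me if that assumption is wrong.

<?php
// split.php - sketch for steps 1-2.
// Assumes 'prepared_json' holds one JSON event per line; file names are placeholders.
$lines = file('prepared_json', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
shuffle($lines);                                 // step 1: shuffle the events
$cut = (int) floor(count($lines) * 0.8);         // step 2: 80/20 split point
file_put_contents('train_80.json', implode("\n", array_slice($lines, 0, $cut)) . "\n");
file_put_contents('test_20.json',  implode("\n", array_slice($lines, $cut)) . "\n");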

5. Create a file for batch prediction, containing lines like
{"items" : ["s2i1"], "num" : 3}
This batch prediction input file has to contain all items that are in the 80% file (sketch below).
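For step 5 I would generate that input file with a small PHP helper like the one below. It is a sketch only: I am assuming each event line carries the item id in "targetEntityId", which I would adjust to the real structure of the prepared data.

<?php
// make_batch_input.php - sketch for step 5.
// Assumes the item id sits in "targetEntityId"; adjust to the actual event format.
$items = [];
foreach (file('train_80.json', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    $event = json_decode($line, true);
    if (isset($event['targetEntityId'])) {
        $items[$event['targetEntityId']] = true;   // collect distinct item ids
    }
}
$out = fopen('batch_queries_80.json', 'w');
foreach (array_keys($items) as $item) {
    // one query per item, asking for 3 complementary items
    fwrite($out, json_encode(['items' => [$item], 'num' => 3]) . "\n");
}
fclose($out);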

6. Run batch prediction for the 80% file (which was used for training) and persist the result of that batch prediction separately.
7. Run batch prediction for the "untrained" 20% file (i.e. the data which was NOT used for training the model) and persist that prediction result as well.
8. Write a PHP script which compares the files from steps 6 and 7, calculates MAP@K, and outputs it (a rough sketch is below).
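To show what I mean for the metric part of step 8, here is a minimal PHP sketch of the MAP@K calculation. It assumes I end up with two JSON files keyed by query item: predictions.json with the ranked predicted items and actuals.json with the items actually bought together. Those names and formats are just my placeholders, not anything coming from the template.

<?php
// map_at_k.php - sketch of the metric part of step 8.
// predictions.json: {"itemId": ["p1", "p2", ...], ...}  (ranked predictions)
// actuals.json:     {"itemId": ["a1", "a2", ...], ...}  (items actually bought together)
function averagePrecisionAtK(array $predicted, array $actual, int $k): float {
    $predicted = array_slice($predicted, 0, $k);
    $hits = 0;
    $score = 0.0;
    foreach ($predicted as $rank => $item) {
        if (in_array($item, $actual, true)) {
            $hits++;
            $score += $hits / ($rank + 1);   // precision at this cut-off
        }
    }
    $denom = min(count($actual), $k);
    return $denom > 0 ? $score / $denom : 0.0;
}

$predictions = json_decode(file_get_contents('predictions.json'), true);
$actuals     = json_decode(file_get_contents('actuals.json'), true);

$k = 3;
$sum = 0.0;
$n = 0;
foreach ($actuals as $queryItem => $actualItems) {
    $predictedItems = $predictions[$queryItem] ?? [];
    $sum += averagePrecisionAtK($predictedItems, $actualItems, $k);
    $n++;
}
printf("MAP@%d = %.4f\n", $k, $n > 0 ? $sum / $n : 0.0);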

Please be so kind as to tell me whether my plan for way 2) is correct.

Thank you in advance,

Alexey
