OK,

yet another crazy idea of mine.

Generally, if we coerce to the classical SVD form with singular values,
then Tikhonov regularization can probably be optimized
post-decomposition. Indeed, I can see no reason why we can't control
the smoothing at the prediction stage by hacking the predictor as in
the doc linked below. Consequently, if we can, then we can also optimize
the degree of smoothness on hold-out data after the decomposition is done,
perhaps even cross-fold, without having to rerun the decomposition again
and again. Finally, a hack for ALS-WR is enclosed too, although in that
case it is more intuitive than derived. I guess I can try it out in R.
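
To make the prediction-stage hack concrete, here is a minimal sketch of my
reading of it in Python/NumPy (the synthetic data, the rank k, and the
filter-factor form s^2/(s^2 + lam) are my assumptions, not taken from the
enclosed doc): decompose once, then shrink the singular values by
Tikhonov-style filter factors at prediction time, so lam can be tuned on
held-out data without re-running the SVD.

import numpy as np

# Sketch: run the SVD once, then damp the singular values at prediction
# time with Tikhonov filter factors f = s^2 / (s^2 + lam). The data and
# the choice of filter factor are illustrative assumptions.

rng = np.random.default_rng(0)
truth = rng.normal(size=(100, 30)) @ rng.normal(size=(30, 80))   # low-rank "signal"
observed = truth + rng.normal(scale=0.5, size=truth.shape)       # noisy observations

U, s, Vt = np.linalg.svd(observed, full_matrices=False)          # decompose once

def predict(lam, k=30):
    # Rank-k reconstruction with singular values shrunk by filter factors.
    f = s[:k] ** 2 / (s[:k] ** 2 + lam)
    return (U[:, :k] * (s[:k] * f)) @ Vt[:k, :]

# Tune lam after the fact -- here the clean matrix stands in for a
# hold-out set; no re-decomposition is needed per candidate lam.
for lam in (0.0, 1.0, 10.0, 100.0, 1000.0):
    err = np.linalg.norm(predict(lam) - truth) / np.linalg.norm(truth)
    print("lam=%8.1f  relative error=%.4f" % (lam, err))

The point is just that lam enters only through the filter factors, so a
hold-out or cross-fold search over lam reuses the same U, s, Vt.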

It is still possible that this is complete nonsense, of course.

https://docs.google.com/open?id=0B883AxfQlYWANDllNWQ1ZDQtOTEzOS00MWM3LWI4MjItNDQ3MDg2ZWMzMmE3
