You can ignore alpha in the explicit feedback case. In the implicit
feedback version, the loss function weights the squared error for each
cell differently. The weight is "1 + alpha * r", where r is the
user-item value (it could be a rating) and is usually a positive
integer. alpha is fairly high, e.g. 40 in the paper, so it suggests
that observed user-item interactions should dominate the loss relative
to unobserved ones.
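The weighting above can be sketched in a few lines. This is just an illustration, not Mahout's implementation: the function names and toy data are made up, and it assumes the usual implicit-feedback formulation where the preference p_ui is 1 if r_ui > 0 and 0 otherwise, and each squared error is weighted by the confidence c_ui = 1 + alpha * r_ui.

```python
import numpy as np

def confidence(r, alpha=40.0):
    # Implicit-feedback confidence weight: c_ui = 1 + alpha * r_ui.
    # With alpha = 40, even a single interaction (r = 1) gets weight 41,
    # while an unobserved cell (r = 0) keeps weight 1.
    return 1.0 + alpha * r

def user_update(Y, r_u, alpha=40.0, lam=0.1):
    """Hypothetical single-user ALS step under the implicit weighting.

    Minimizes sum_i c_ui * (p_ui - x^T y_i)^2 + lam * ||x||^2 for one
    user, where Y holds the item factor vectors (one per row), r_u is
    the user's row of raw feedback values, c_ui = 1 + alpha * r_ui and
    p_ui = 1 if r_ui > 0 else 0.
    """
    c = confidence(r_u, alpha)           # per-item confidence weights
    p = (r_u > 0).astype(float)          # binarized preferences
    k = Y.shape[1]
    # Weighted normal equations: (Y^T C Y + lam I) x = Y^T C p
    A = Y.T @ (c[:, None] * Y) + lam * np.eye(k)
    b = Y.T @ (c * p)
    return np.linalg.solve(A, b)

print(confidence(np.array([0.0, 1.0, 5.0])))  # -> [  1.  41. 201.]
```

The point of the example is only the relative scale of the weights: with a high alpha, cells the user actually interacted with contribute far more to the loss than the (typically far more numerous) zero cells.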
Hi Pat,
ParallelALSFactorizationJob actually implements two different flavours
of matrix factorization: one aimed at explicit feedback data (such as
ratings), described in
"Large-scale Parallel Collaborative Filtering for the Netflix Prize" [1],
and another one aimed at implicit feedback data.
What is the intuition regarding the choice or tuning of the ALS params?
Job-Specific Options:
  --lambda lambda                        regularization parameter
  --implicitFeedback implicitFeedback