As clarification, here are the relevant papers. The approach for
explicit feedback [1] does not use unobserved cells; only the approach
for handling implicit feedback [2] does, but it down-weights them.
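In code, the distinction between the two papers shows up in the per-user least-squares solve. A minimal numpy sketch of both updates follows (function names, shapes, and the alpha default are illustrative, not taken from either paper's code):

```python
import numpy as np

def als_wr_user_update(Y, ratings, observed, lam):
    """Explicit-feedback update in the spirit of [1]: only rows of Y
    for items the user actually rated enter the normal equations;
    unobserved cells are ignored entirely. The regularizer is scaled
    by n_u, the user's rating count (the "weighted lambda" part)."""
    Y_obs = Y[observed]                 # (n_u, k) factors of rated items
    r = np.asarray(ratings)[observed]   # (n_u,) observed ratings
    n_u, k = Y_obs.shape
    A = Y_obs.T @ Y_obs + lam * n_u * np.eye(k)
    b = Y_obs.T @ r
    return np.linalg.solve(A, b)

def implicit_user_update(Y, r_u, lam, alpha=40.0):
    """Implicit-feedback update in the spirit of [2]: every cell
    participates, but unobserved ones only with the baseline
    confidence 1 (this is the down-weighting). Preference p is
    binarized; confidence is c = 1 + alpha * r."""
    k = Y.shape[1]
    p = (r_u > 0).astype(float)   # binary preference vector
    c = 1.0 + alpha * r_u         # confidence; 1 where r_u == 0
    CY = Y * c[:, None]           # rows of Y scaled by confidence
    A = Y.T @ CY + lam * np.eye(k)   # = Y^T C Y + lam I
    b = CY.T @ p
    return np.linalg.solve(A, b)
```

Note how the explicit variant never touches rows outside `observed`, while the implicit variant multiplies through the full confidence-weighted matrix, so unobserved rows still contribute, just with weight 1.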

/s

[1] "Large-scale Parallel Collaborative Filtering for the Netflix Prize"
http://www.hpl.hp.com/personal/Robert_Schreiber/papers/2008%20AAIM%20Netflix/netflix_aaim08(submitted).pdf

[2] "Collaborative Filtering for Implicit Feedback Datasets"
http://research.yahoo.com/pub/2433


On 25.03.2013 14:51, Dmitriy Lyubimov wrote:
> On Mar 25, 2013 6:44 AM, "Sean Owen" <sro...@gmail.com> wrote:
>>
>> (The unobserved entries are still in the loss function, just with low
>> weight. They are also in the system of equations you are solving for.)
> 
> Not in the classic ALS-WR paper I was specifically referring to. It
> actually uses submatrices of the observations with unobserved rows or
> columns thrown out. The implicit-feedback solution you are often
> referring to indeed does not drop them, since it uses a different
> observation-encoding technique.
> 
>>
>> On Mon, Mar 25, 2013 at 1:38 PM, Dmitriy Lyubimov <dlie...@gmail.com> wrote:
>>> Classic ALS-WR bypasses the underlearning problem by cutting unrated
>>> entries out of the linear equation system. It also has a well-defined
>>> regularization technique which in theory allows finding an optimal fit
>>> (but still not in Mahout, not without at least some additional sweat,
>>> I heard).
> 
