Ah, thank you, I had actually forgotten about this and this is indeed
probably a difference. This is from the other paper I cited:
http://www.hpl.hp.com/personal/Robert_Schreiber/papers/2008%20AAIM%20Netflix/netflix_aaim08(submitted).pdf

It's the "WR" in "ALS-WR" -- weighted regularization. The regularization
term for each user's (or item's) factor vector is scaled by the number of
ratings that user or item has. I suppose the intuition is that you penalize
complex explanations of prolific users and items proportionally more.
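To make the idea concrete, here is a minimal sketch (not the actual Oryx or Spark code) of one user-side half-step of ALS with weighted regularization, using NumPy. The function name, data layout, and parameters are all illustrative assumptions; the point is the `lam * n` term, where `n` is the user's rating count.

```python
import numpy as np

def solve_user_factors(ratings, item_factors, lam, k):
    """One user-side half-step of ALS-WR (illustrative sketch only).

    ratings: dict of user id -> {item index: rating}
    item_factors: (num_items, k) array of current item factors
    lam: base regularization strength
    k: number of latent factors

    For each user i, solves the regularized normal equations
        (Y_i^T Y_i + lam * n_i * I) x_i = Y_i^T r_i
    where n_i is the number of ratings user i has. Scaling lambda by
    n_i is the "weighted regularization" part: prolific users get a
    proportionally larger penalty on their factor vector.
    """
    user_factors = {}
    for user, items in ratings.items():
        idx = np.array(sorted(items))
        Y = item_factors[idx]                  # factors of the rated items
        r = np.array([items[j] for j in idx])  # the user's ratings
        n = len(idx)                           # rating count scales lambda
        A = Y.T @ Y + lam * n * np.eye(k)
        b = Y.T @ r
        user_factors[user] = np.linalg.solve(A, b)
    return user_factors
```

With unweighted regularization the `n` factor above would simply be absent, which is the kind of discrepancy that can make two ALS implementations produce different factorizations for the same lambda.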

The paper claims it helps and I also found it did. That could be the difference.
--
Sean Owen | Director, Data Science | London


On Thu, Mar 13, 2014 at 2:30 AM, Michael Allman <m...@allman.ms> wrote:
> Hi Sean,
>
> Digging deeper I've found another difference between Oryx's implementation
> and Spark's. Why do you adjust lambda here?
>
> https://github.com/cloudera/oryx/blob/master/als-common/src/main/java/com/cloudera/oryx/als/common/factorizer/als/AlternatingLeastSquares.java#L491
>
> Cheers,
>
> Michael
>
>
>
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/possible-bug-in-Spark-s-ALS-implementation-tp2567p2636.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
