On Sun, Jan 26, 2014 at 9:36 AM, Pat Ferrel p...@occamsmachete.com wrote:
I think I’ll leave dithering out until it goes live because it would seem
to make the eyeball test easier. I doubt all these experiments will survive.
With anti-flood, if you turn the epsilon parameter to 1 (makes …
Thanks for the answers. I actually worked on a similar issue,
increasing the diversity of top-N lists
(http://link.springer.com/article/10.1007%2Fs10844-013-0252-9).
Clustering-based approaches produce good results, and they are very
fast compared to some optimization-based techniques. Also, it …
On Sat, Jan 25, 2014 at 5:08 AM, Koobas koo...@gmail.com wrote:
A generic latent variable recommender question.
I passed the user-item matrix through a low rank approximation,
with either something like ALS or SVD, and now I have the feature
vectors for all users and all items.
Case 1:
I want to recommend items to a user.
I compute a dot product of the user's feature vector with each
item's feature vector; the dot products are formed to approximate
the rating values.
That's exactly what I was thinking.
Thanks for your reply.
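The Case 1 scoring described above can be sketched as follows (a minimal NumPy sketch; the matrix names, shapes, and random factor values are illustrative assumptions, not taken from the thread):

```python
import numpy as np

# Illustrative factor matrices from an ALS/SVD factorization.
# Shapes and random values are assumptions for this sketch.
n_users, n_items, k = 4, 6, 3
rng = np.random.default_rng(0)
U = rng.normal(size=(n_users, k))   # user feature vectors (one row per user)
V = rng.normal(size=(n_items, k))   # item feature vectors (one row per item)

user = 2
scores = V @ U[user]        # dot product of the user's vector with every item vector
top = np.argsort(-scores)   # items ranked by the approximated rating
print(top[:3])              # indices of the top-3 items for this user
```

In practice the same one-liner `V @ U[user]` scores all items at once, so ranking a single user is just a matrix-vector product.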
On Sat, Jan 25, 2014 at 4:33 PM, Pat Ferrel p...@occamsmachete.com wrote:
BTW can you explain your notation? s = log r + N(0, log \epsilon)
N?, \epsilon?

r is rank.
N is the normal distribution.
\epsilon is an arbitrary constant that drives the amount of mixing.
Typical values are \epsilon = 4.
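The dithering formula above can be sketched in a few lines (the function name and default epsilon are illustrative; note that \epsilon = 1 gives log \epsilon = 0, i.e. no mixing, which matches the anti-flood remark earlier in the thread):

```python
import math
import random

def dither(ranked_items, epsilon=4.0, seed=None):
    """Perturb ranks with s = log(r) + N(0, log(epsilon)) and re-sort.

    epsilon = 1 gives log(epsilon) = 0, so the order is unchanged;
    larger values mix items from deeper in the list toward the top.
    """
    rnd = random.Random(seed)
    sd = math.log(epsilon)
    scored = [(math.log(r) + rnd.gauss(0, sd), item)
              for r, item in enumerate(ranked_items, start=1)]
    return [item for _, item in sorted(scored)]

print(dither(list(range(10)), epsilon=4.0, seed=1))
```

Each request gets a fresh random draw, so repeated visitors see slightly shuffled lists while the head of the ranking stays mostly stable.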
Case 1 is fine as is.
For Case 2 I would suggest simply experimenting: try different
similarity measures like euclidean distance or cosine and see what
gives the best results.
--sebastian
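Comparing similarity measures on the item feature vectors, as suggested above, could look like this (a hedged sketch; the toy vectors in `V` are assumptions for illustration):

```python
import numpy as np

# Toy item feature vectors; real ones would come from the factorization.
V = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])

def cosine_sims(V, i):
    """Cosine similarity of item i to every item (1.0 = identical direction)."""
    norms = np.linalg.norm(V, axis=1)
    return (V @ V[i]) / (norms * norms[i])

def euclidean_dists(V, i):
    """Euclidean distance of item i to every item (0.0 = identical)."""
    return np.linalg.norm(V - V[i], axis=1)

print(cosine_sims(V, 0))
print(euclidean_dists(V, 0))
```

Note the two measures order neighbors the same way only when vectors have similar norms; that difference is exactly what the experiment would surface.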
On 01/25/2014 04:08 AM, Koobas wrote:
A generic latent variable recommender question. …
On Fri, Jan 24, 2014 at 7:08 PM, Koobas koo...@gmail.com wrote:
I eliminate the ones that the user already has, and find the largest value
among the others, right?
Yeah... unless you are selling razor blades, in which case you don't
eliminate repeats.
Also, you may want to pass the results …
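The filter-then-take-the-largest step discussed here can be sketched as follows (the scores and owned-item indices are made-up toy data; skip the masking line for razor-blade-style consumables where repeats are wanted):

```python
import numpy as np

# Toy scores for one user over five items; indices 0 and 2 are already owned.
scores = np.array([0.9, 0.2, 0.8, 0.5, 0.7])
already_has = {0, 2}

masked = scores.copy()
masked[list(already_has)] = -np.inf   # drop owned items (omit for repeat purchases)
top_n = np.argsort(-masked)[:2]       # largest remaining values
print(top_n.tolist())                 # → [4, 3]
```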