On Tue, Jun 18, 2013 at 3:48 AM, Ted Dunning wrote:
> I have found that in practice, don't-like is very close to like. That is,
> things that somebody doesn't like are very closely related to the things
> that they do like.
I guess it makes sense for cancellations. I guess it should become pre…
Dear colleagues,
we are pleased to announce RepSys, a workshop on Reproducibility and
Replication that will be held at ACM RecSys 2013. This workshop aims to
provide an opportunity to discuss the limitations and challenges
of experimental reproducibility and replication.
Hope you find…
Hu, Koren, Volinsky: "Collaborative Filtering for Implicit Feedback Datasets"
On Tue, Jun 18, 2013 at 8:07 AM, Pat Ferrel wrote:
> They are on a lot of papers, which are you looking at?
>
> On Jun 17, 2013, at 6:30 PM, Dmitriy Lyubimov wrote:
>
> (Kinda doing something very close. )
>
> Koren-Volinsky paper on implicit feedback…
I'm suggesting using numbers like -1 for thumbs-down ratings, and then
using these as a positive weight towards 0, just like positive values
are used as positive weighting towards 1.
Most people don't make many negative ratings. For them, what you do
with these doesn't make a lot of difference. It…
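The suggestion above — treat a thumbs-down like a positive weight toward a preference of 0, symmetric with how positive ratings weight toward 1 — can be sketched as follows. This is a toy illustration, not Mahout code; the rating matrix and the confidence scaling `alpha` are assumed values.

```python
import numpy as np

# Hypothetical raw ratings: +1 = thumbs-up, -1 = thumbs-down, 0 = no signal.
R = np.array([[ 1, 0, -1],
              [-1, 1,  0]])

alpha = 40.0                   # confidence scaling factor (assumed value)
P = (R > 0).astype(float)      # preference: 1 for liked, 0 otherwise
C = 1.0 + alpha * np.abs(R)    # confidence grows with |rating| either way,
                               # so -1 weights strongly toward preference 0

print(P)
print(C)
```

A thumbs-down thus ends up with the same high confidence as a thumbs-up, just attached to the opposite preference, while unobserved cells keep the baseline confidence of 1.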
They are on a lot of papers, which are you looking at?
On Jun 17, 2013, at 6:30 PM, Dmitriy Lyubimov wrote:
(Kinda doing something very close. )
Koren-Volinsky paper on implicit feedback can be generalized to decompose
all input into preference (0 or 1) and confidence matrices (which is
essentially…
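Given the preference/confidence decomposition described above, the implicit-feedback paper's user update is a confidence-weighted least-squares solve. A minimal sketch, with randomly generated toy matrices and an assumed regularization value:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 2

P = rng.integers(0, 2, size=(n_users, n_items)).astype(float)  # 0/1 preferences
C = 1.0 + 5.0 * rng.random((n_users, n_items))                 # confidences >= 1
Y = rng.normal(size=(n_items, k))                              # item factors
lam = 0.1                                                      # regularization (assumed)

def user_factor(u):
    # One ALS half-step: solve (Y^T C_u Y + lam I) x_u = Y^T C_u p_u,
    # where C_u is the diagonal matrix of user u's confidences.
    Cu = np.diag(C[u])
    A = Y.T @ Cu @ Y + lam * np.eye(k)
    b = Y.T @ Cu @ P[u]
    return np.linalg.solve(A, b)

X = np.vstack([user_factor(u) for u in range(n_users)])
print(X.shape)
```

Since every confidence is at least 1, the system matrix is positive definite and the solve is always well posed; negative input only changes what lands in P and C, not this update.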
To your point Ted, I was surprised to find that remove-from-cart actions
predicted sales almost as well as purchases did but it also meant filtering
from recs. We got the best scores treating them as purchases and not
recommending them again. No one pried enough to get bothered.
In this par…
Hi,
Thanks. It did help.
Regards,
Anand.C
-Original Message-
From: Suneel Marthi [mailto:suneel_mar...@yahoo.com]
Sent: Tuesday, June 18, 2013 10:55 AM
To: Chandra Mohan, Ananda Vel Murugan; user@mahout.apache.org
Subject: Re: Feature vector generation from Bag-of-Words
Check this l…
For low dimension problems with limited data, you will be much happier with
something like R for clustering and visualization.
On Tue, Jun 18, 2013 at 11:52 AM, syed kather wrote:
> Hi Team
> How do I do k-means clustering on 2 selected columns?
>
>
>
> Line No,age,income,sex,city
> 1,22,15…
I have found that in practice, don't-like is very close to like. That is,
things that somebody doesn't like are very closely related to the things
that they do like. Things that are quite distant wind up as don't-care,
not don't-like.
This makes most simple approaches to modeling polar preferences…
Hi,
I implemented something similar in the following way.
Created a class which implements
org.apache.commons.math3.ml.clustering.Clusterable with just two member
variables, double[] point and long id, and getter/setter functions.
Iterated through the data and created instances of this class. A…
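The pattern described above — wrap each row in a small object holding the point and its row id, cluster the points, then map cluster labels back to ids — can be sketched in Python as follows. The class name `IdPoint` and the tiny Lloyd's-iteration k-means are illustrative, not the commons-math3 API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class IdPoint:
    id: int                # row identifier, carried alongside the point
    point: np.ndarray      # the feature vector to cluster on

def kmeans(points, k, iters=20, seed=0):
    # Plain Lloyd's iterations: assign to nearest center, recompute means.
    rng = np.random.default_rng(seed)
    X = np.vstack(points)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

data = [IdPoint(1, np.array([22.0, 1500.0])),
        IdPoint(2, np.array([54.0, 13450.0])),
        IdPoint(3, np.array([25.0, 1700.0]))]
labels = kmeans([d.point for d in data], k=2)
by_cluster = {d.id: int(lab) for d, lab in zip(data, labels)}
print(by_cluster)
```

Keeping the id outside the feature vector is the point of the wrapper class: the clusterer only ever sees `point`, and the label-to-id mapping is recovered afterwards.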
Hi Team
How do I do k-means clustering on 2 selected columns?

Line No,age,income,sex,city
1,22,1500,1,xxx
2,54,13450,2,yyy
...

The input goes like this, but I need to do clustering on columns 2 and 3.
How do I do that?
I had tried the synthetic control k-means example, but I am not able to extract…
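One way to attack the question above, sketched in plain Python rather than Mahout: parse the CSV-style input, keep only columns 2 and 3 (age, income), and run k-means on just those two features. The extra sample rows and the seed are made up for illustration.

```python
import csv
import io
import numpy as np

# Input shaped like the example above; two extra rows added for illustration.
raw = """1,22,1500,1,xxx
2,54,13450,2,yyy
3,25,1600,1,zzz
4,50,12800,2,www"""

rows = list(csv.reader(io.StringIO(raw)))
# Keep only columns 2 and 3 (0-based indices 1 and 2): age and income.
X = np.array([[float(r[1]), float(r[2])] for r in rows])

def kmeans(X, k=2, iters=20, seed=1):
    # Plain Lloyd's iterations on the selected columns only.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X)
print(labels)
```

The column selection happens before clustering ever starts, so the sex and city columns never influence the distances; that is the whole trick, whatever clustering library is used.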
Yes, the model has no room for literally negative input. I think that
conceptually people do want negative input, and in this model,
negative numbers really are the natural thing to express that.
You could give negative input a small positive weight. Or extend the
definition of c so that it is merely…