Hi All,
I am trying to start the Spark shell and follow the instructions on the
following page:
http://mahout.apache.org/users/sparkbindings/play-with-shell.html
I installed spark and created the spark-env.sh file with the following
content.
export SPARK_LOCAL_IP=localhost
export
Hi All,
Are there any (dis)advantages of using tri-factorization (||X - USV'||) as
opposed to bi-factorization (||X - UV'||) for recommender systems? I have
been reading a lot about tri-factorization and how it can be seen as
co-clustering of rows and columns and was wondering if such as
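The two objectives can be compared directly in NumPy: a rank-k truncated SVD gives a tri-factorization X ~ USV', and absorbing the diagonal S into the left factor yields a bi-factorization with exactly the same reconstruction error. So for unconstrained factors the two are equivalent; the distinction only bites when extra constraints (e.g. non-negativity, as in co-clustering variants) are imposed. A minimal sketch, using random toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 15))  # toy ratings matrix
k = 5                     # target rank

# Tri-factorization via truncated SVD: X ~ U diag(s) V'
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]
tri_err = np.linalg.norm(X - U_k @ np.diag(s_k) @ Vt_k)

# Bi-factorization: absorb the singular values into the left factor
A = U_k @ np.diag(s_k)   # plays the role of U in ||X - UV'||
B = Vt_k                 # plays the role of V'
bi_err = np.linalg.norm(X - A @ B)

print(tri_err, bi_err)   # identical up to floating-point error
```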
Thanks Ted! Will look into it.
Rohit
On Wed, Oct 1, 2014 at 1:04 AM, Ted Dunning ted.dunn...@gmail.com wrote:
Here is a paper that includes an analysis of voting patterns using LDA.
http://arxiv.org/pdf/math/0604410.pdf
On Tue, Sep 30, 2014 at 7:04 PM, Parimi Rohit rohit.par...@gmail.com
delete singletons, it is likely
to get significantly smaller.
I think that something like LDA might work much better for you. It was
designed to work on small data like this.
On Tue, Sep 30, 2014 at 11:13 AM, Parimi Rohit rohit.par...@gmail.com
wrote:
Ted, Thanks for your response. Following
files (this
should happen before running seqdirectory).
To convert the .sgm files to text, run: $MAHOUT
org.apache.lucene.benchmark.utils.ExtractReuters ${WORK_DIR}/reuters-sgm
${WORK_DIR}/reuters-out
Then run seqdirectory on the output of the previous step.
On Mon, Jun 23, 2014 at 6:43 PM, Parimi
Hi All,
I am trying to run LDA from Mahout and as a first step I wanted to run the
SequenceFilesFromDirectory job to convert the text files into sequence
files. Following is the command I am using:
hadoop jar
and cluster.
Kind regards,
Barrie
2013/9/12 Gokhan Capan gkhn...@gmail.com
You might also need to build Mahout using the hadoop-0.23 profile, with the
hadoop.version parameter set to your Hadoop version.
Gokhan
On Tue, Sep 10, 2013 at 10:09 PM, Parimi Rohit rohit.par...@gmail.com
wrote
Hi All,
I was wondering if there is any experimental design to tune the parameters
of the ALS algorithm in Mahout, so that we can compare its recommendations
with recommendations from another algorithm.
My datasets have implicit data, and I would like to use the following design
for tuning the ALS
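One common design for this is a hold-out sweep: split each user's implicit interactions into train/test, fit a model for each parameter combination, and keep the combination with the best precision@k on the held-out items. The sketch below is only an illustration of that tuning loop under assumed data: it uses a simple popularity ranker as a stand-in where Mahout's ALS job would be run, and loops over hypothetical (lambda, numFeatures, alpha) values as labels.

```python
import itertools
import random
from collections import Counter

random.seed(0)
# Hypothetical implicit data: user -> set of interacted items.
data = {u: set(random.sample(range(50), 10)) for u in range(30)}

# Per-user hold-out split: 80% train, 20% test.
train, test = {}, {}
for u, items in data.items():
    items = sorted(items)
    random.shuffle(items)
    cut = int(0.8 * len(items))
    train[u], test[u] = set(items[:cut]), set(items[cut:])

def precision_at_k(recs, held_out, k=5):
    scores = [len(set(recs[u][:k]) & held_out[u]) / k for u in held_out]
    return sum(scores) / len(scores)

# Parameter grid mirroring typical ALS knobs (lambda, numFeatures, alpha).
grid = itertools.product([0.01, 0.1], [10, 20], [1, 40])
best = None
for lam, rank, alpha in grid:
    # Placeholder "model": rank items by global popularity in the train split.
    # In practice, this is where ALS would be trained with these parameters.
    pop = Counter(i for items in train.values() for i in items)
    ranked = [i for i, _ in pop.most_common()]
    recs = {u: [i for i in ranked if i not in train[u]] for u in train}
    p = precision_at_k(recs, test, k=5)
    if best is None or p > best[0]:
        best = (p, lam, rank, alpha)

print(best)  # (precision@5, lambda, numFeatures, alpha) of the best combo
```

The same loop also answers the comparison question: run the competing algorithm through the identical split and metric, so both are scored on the same held-out interactions.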
Hi All,
I am used to running mahout (mahout-core-0.9-SNAPSHOT-job.jar) in the
Apache Hadoop environment, however, we had to switch to Cloudera
distribution.
When I try to run the item based collaborative filtering job
(org.apache.mahout.cf.taste.hadoop.item.RecommenderJob) in the Cloudera
Congratulations Sean!!
On Tue, Jul 16, 2013 at 10:18 AM, Harshit Bapna hrba...@gmail.com wrote:
Congratulations Sean on the new job, and on finding the perfect partner in
Cloudera.
On Tue, Jul 16, 2013 at 10:06 AM, Suneel Marthi suneel_mar...@yahoo.com
wrote:
Congrats Sean!!!
Hi All,
Is there a way to compute precision and recall values given a file of
recommendations and a test file of user preferences?
I know there is GenericRecommenderIRStatsEvaluator in Mahout to compute
the IR Stats but it takes a RecommenderBuilder object among others as
parameters to build a
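If both the recommendations and the held-out preferences are already in files, the IR stats can be computed directly without going through a RecommenderBuilder. A minimal sketch, assuming a hypothetical "userID,itemID" line format (recommendations listed in rank order):

```python
from collections import defaultdict
from io import StringIO

def load_pairs(f):
    """Parse 'userID,itemID' lines into user -> list of items (order kept)."""
    out = defaultdict(list)
    for line in f:
        line = line.strip()
        if line:
            user, item = line.split(",")
            out[user].append(item)
    return out

def ir_stats(recs, prefs, k=2):
    """Mean precision@k and recall@k over users present in the test file."""
    p = r = n = 0
    for user, relevant in prefs.items():
        top = recs.get(user, [])[:k]
        hits = len(set(top) & set(relevant))
        p += hits / k
        r += hits / len(relevant)
        n += 1
    return p / n, r / n

# Example with in-memory "files"; real code would use open(path).
recs = load_pairs(StringIO("u1,a\nu1,b\nu2,c\nu2,d\n"))
prefs = load_pairs(StringIO("u1,a\nu2,x\n"))
precision, recall = ir_stats(recs, prefs, k=2)
print(precision, recall)  # 0.25 0.5
```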
response.
Rohit
On Thu, May 30, 2013 at 1:14 PM, Parimi Rohit rohit.par...@gmail.com
wrote:
Hi All,
Is there a way to compute precision and recall values given a file of
recommendations and a test file of user preferences?
I know there is GenericRecommenderIRStatsEvaluator