[
https://issues.apache.org/jira/browse/MAHOUT-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13969961#comment-13969961
]
Dmitriy Lyubimov commented on MAHOUT-1464:
------------------------------------------
bq. where -cp is what `mahout classpath` returns.
Actually, scratch that. That is generally still a bad recipe: "mahout
classpath" returns the installed HADOOP_HOME dependencies (normally one
doesn't want that, because Spark's managed libs already expose whatever Hadoop
version Spark was compiled with), and it neglects to add the Spark classpath.
So I don't think 'mahout classpath' is all that useful here.
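To illustrate what "collecting the classpath correctly" might look like by
hand -- a minimal sketch under assumptions: MAHOUT_HOME and SPARK_HOME are set,
and a Spark 0.9-era bin/compute-classpath.sh helper is present. The idea is to
take Mahout's own jars and append Spark's managed classpath, instead of
trusting `mahout classpath` to do either.

```shell
# Sketch only: assumes MAHOUT_HOME and SPARK_HOME point at local installs.
# Collect Mahout's own jars, colon-separated...
MAHOUT_JARS=$(find "$MAHOUT_HOME" -name 'mahout-*.jar' | tr '\n' ':')
# ...and append Spark's managed classpath, which already carries the Hadoop
# version Spark was compiled against (so HADOOP_HOME jars stay out of it).
SPARK_CP=$("$SPARK_HOME/bin/compute-classpath.sh")
CP="${MAHOUT_JARS}${SPARK_CP}"
echo "$CP"
```

The resulting CP would then go to `java -cp "$CP" ...` for whatever driver
class is being launched.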
BTW, that's another thing here -- you need to compile Spark against the version
of Hadoop HDFS you intend to use (at least that's what I do). By default, I
think, it does a terrible thing.
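For the record, a build invocation along these lines is what I mean -- a sketch
only: the SPARK_HADOOP_VERSION knob is the Spark 0.9-era sbt way to pin the
Hadoop version, and 2.2.0 below is just a placeholder, not a recommendation.

```shell
# Sketch: pin the Hadoop/HDFS version when assembling Spark so its managed
# libs match the cluster's HDFS (2.2.0 is a placeholder version).
SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly
```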
The main suggestion stands -- collect the classpath correctly, which IDEA
already does via Maven; the major hurdle is doing it manually, and
user-friendly methods for that are not yet present, methinks.
> Cooccurrence Analysis on Spark
> ------------------------------
>
> Key: MAHOUT-1464
> URL: https://issues.apache.org/jira/browse/MAHOUT-1464
> Project: Mahout
> Issue Type: Improvement
> Components: Collaborative Filtering
> Environment: hadoop, spark
> Reporter: Pat Ferrel
> Assignee: Sebastian Schelter
> Fix For: 1.0
>
> Attachments: MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch,
> MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch, run-spark-xrsj.sh
>
>
> Create a version of Cooccurrence Analysis (RowSimilarityJob with LLR) that
> runs on Spark. This should be compatible with Mahout Spark DRM DSL so a DRM
> can be used as input.
> Ideally this would extend to cover MAHOUT-1422. This cross-cooccurrence has
> several applications including cross-action recommendations.
--
This message was sent by Atlassian JIRA
(v6.2#6252)