[
https://issues.apache.org/jira/browse/MAHOUT-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13968783#comment-13968783
]
Dmitriy Lyubimov commented on MAHOUT-1464:
------------------------------------------
That's odd. Honestly, I don't know and have never encountered that. Maybe it is
something the program itself does, not Spark? A stack trace or a log with the
actual complaint would be helpful.
I know that with Mesos supervision, SPARK_HOME must be the same on all
nodes (including the driver). But I think this is specific to the Mesos setup;
the standalone backend should be able to handle differing locations.
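For reference, a minimal sketch of pinning the Spark home explicitly on the
driver (the master URL and the /opt/spark path are made-up placeholders; under
Mesos that same path would have to exist on every slave):
{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Placeholder values -- substitute the real Mesos master and install path.
val conf = new SparkConf()
  .setMaster("mesos://zk://zk-host:2181/mesos") // example master URL only
  .setAppName("mahout-cooccurrence")
  .setSparkHome("/opt/spark")                   // must resolve on every slave under Mesos
val sc = new SparkContext(conf)
{code}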
I think I gave an explanation for this already. Mostly, because it assumes
the jar is all it takes to run the program, but it actually takes the entire
Mahout distribution to run. And because it still doesn't pass the master URL
to the program.
IMO there's no real advantage to doing this vs. running a standalone
application (except perhaps when you are running from a remote, slowly
connected client and want to disconnect while the task is still running).
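A rough sketch of the standalone-application route, assuming the
mahoutSparkContext helper from the Spark bindings (it locates the Mahout jars
itself and takes the master URL explicitly, which sidesteps both problems
above; spark://host:7077 is just a placeholder):
{code:scala}
import org.apache.mahout.sparkbindings._

object CooccurrenceDriver extends App {
  // Master passed explicitly instead of relying on the submit script;
  // the helper also ships the Mahout jars to the cluster.
  implicit val ctx = mahoutSparkContext(
    masterUrl = "spark://host:7077",
    appName = "cooccurrence-analysis")

  // ... load DRMs and run the cooccurrence job here ...

  ctx.close()
}
{code}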
> Cooccurrence Analysis on Spark
> ------------------------------
>
> Key: MAHOUT-1464
> URL: https://issues.apache.org/jira/browse/MAHOUT-1464
> Project: Mahout
> Issue Type: Improvement
> Components: Collaborative Filtering
> Environment: hadoop, spark
> Reporter: Pat Ferrel
> Assignee: Sebastian Schelter
> Fix For: 1.0
>
> Attachments: MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch,
> MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch, run-spark-xrsj.sh
>
>
> Create a version of Cooccurrence Analysis (RowSimilarityJob with LLR) that
> runs on Spark. This should be compatible with Mahout Spark DRM DSL so a DRM
> can be used as input.
> Ideally this would extend to cover MAHOUT-1422. This cross-cooccurrence has
> several applications including cross-action recommendations.
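For illustration only (not the attached patch): a sketch of what "DRM in, DRM
out" could look like with the Spark DRM DSL, leaving the LLR scoring and
downsampling steps out:
{code:scala}
import org.apache.mahout.math.drm._
import org.apache.mahout.math.drm.RLikeDrmOps._

// Sketch: A'A gives raw item-item cooccurrence counts; LLR scoring and
// thresholding would still need to be applied on top of this result.
def cooccurrenceSketch(drmA: DrmLike[Int]): DrmLike[Int] =
  drmA.t %*% drmA
{code}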