[ 
https://issues.apache.org/jira/browse/MAHOUT-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13973288#comment-13973288
 ] 

Dmitriy Lyubimov commented on MAHOUT-1464:
------------------------------------------

Running it from IDEA will most likely end up with a screwed-up Hadoop dependency, because by default it inherits Mahout's default Hadoop dependency. I used to run this stuff (or rather our internal variant of it) from my own project, which has very strict control over dependencies (especially Hadoop dependencies). I also added a CDH4 profile to the spark module which overrides Mahout's default Hadoop dependency, and that should help -- but it is still a pain; I gave up on running it from IDEA with the Mahout Maven dependencies. Something is screwed up there in the end.
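
For illustration, such a dependency-overriding profile might look roughly like the sketch below. This is an assumption about the actual pom, not its real contents: the profile id, the `hadoop.version` property name, and the CDH4 version string are all hypothetical.

{code:title="spark/pom.xml (hypothetical sketch)"}
<!-- Hypothetical profile sketch; the id, property name, and version are assumptions -->
<profile>
  <id>cdh4</id>
  <properties>
    <hadoop.version>2.0.0-cdh4.6.0</hadoop.version>
  </properties>
  <dependencies>
    <!-- Force the CDH4 Hadoop client instead of Mahout's default -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
  </dependencies>
</profile>
{code}

A profile like this would be activated with {{mvn -Pcdh4 ...}}, keeping the default Hadoop dependency for everyone else.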

I don't experiment with RSJ yet -- I guess I will leave it to Sebastian at this point.

What I do is run the following script from my "shell" branch on GitHub via the Mahout shell:

{code:title="simple.mscala"} 
// in-core matrix with two rows
val a = dense((1, 2, 3), (3, 4, 5))

// distribute it as a DRM over 2 partitions
val drmA = drmParallelize(a, numPartitions = 2)

// distributed transpose-times-self: A' %*% A
val drmAtA = drmA.t %*% drmA

// add 1.0 to every element, block-wise
val r = drmAtA.mapBlock() {
  case (keys, block) =>
    block += 1.0
    keys -> block
}.checkpoint(/*StorageLevel.NONE*/)

// bring the result back in-core
r.collect

// local write
r.writeDRM("file:///home/dmitriy/A")

// hdfs write
r.writeDRM("hdfs://localhost:11010/A")
{code}
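
For completeness, inside the Mahout shell a script like this can be loaded with the standard Scala REPL load command (the file path below is illustrative):

{code}
:load /home/dmitriy/simple.mscala
{code}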


This actually runs totally fine in local mode, and _sometimes_ also runs OK in "standalone"/HDFS mode, but sometimes, when run on a remote cluster with "standalone", there are strange after-effects: hangs, and bailing out with OOM.

I am pretty sure it is either dependency issues again in the Mahout Maven build, or something that happened in the Spark 0.9.x release. The Spark 0.6.x -- 0.8.x releases and earlier had absolutely no trouble working with HDFS sequence files.

> Cooccurrence Analysis on Spark
> ------------------------------
>
>                 Key: MAHOUT-1464
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1464
>             Project: Mahout
>          Issue Type: Improvement
>          Components: Collaborative Filtering
>         Environment: hadoop, spark
>            Reporter: Pat Ferrel
>            Assignee: Sebastian Schelter
>             Fix For: 1.0
>
>         Attachments: MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch, 
> MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch, run-spark-xrsj.sh
>
>
> Create a version of Cooccurrence Analysis (RowSimilarityJob with LLR) that 
> runs on Spark. This should be compatible with Mahout Spark DRM DSL so a DRM 
> can be used as input. 
> Ideally this would extend to cover MAHOUT-1422. This cross-cooccurrence has 
> several applications including cross-action recommendations. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
