[ https://issues.apache.org/jira/browse/MAHOUT-884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13151074#comment-13151074 ]

Lance Norskog commented on MAHOUT-884:
--------------------------------------

bq. Then this should be a map-reduce job, not a sequential process, as these matrices could be really large.
Ah! But how are they stored? Is it an HDFS directory with part-r-00000, 00001, ... 0000n for n distinct sets of rows?

bq. Identity mapper + reduce-side join with concatenation would be the most straightforward scalable way to do it.

The trick is that we want the vectors to arrive in right-to-left order at each reducer, so that the output vector can be written sequentially. See Ricky Ho's blog post and search for "Optimized reducer-side join"; he uses a custom partitioner to achieve this.

[http://horicky.blogspot.com/2010/08/designing-algorithmis-for-map-reduce.html]
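The partitioner trick above can be sketched in plain Java with no Hadoop dependencies. This is only a minimal in-memory simulation of the shuffle, assuming (hypothetically) two input matrices tagged 0 (left) and 1 (right); the Rec type, widths, and method names are illustrative, not part of any patch. A real job would put the tag in a composite key, partition on the row alone, and supply a sort comparator ordering (row, tag):

```java
import java.util.*;

public class ReduceSideJoinSketch {
    // A tagged record: (row, tag) composite key plus its vector value.
    // tag 0 = left matrix, tag 1 = right matrix.
    record Rec(int row, int tag, double[] vec) {}

    static List<double[]> joinAndConcat(List<Rec> shuffledInput,
                                        int leftWidth, int rightWidth) {
        // Simulated shuffle/sort: order by row, then by tag. A partitioner
        // keyed on the row alone keeps both tags of a row in one reducer,
        // while the comparator delivers the left vector before the right.
        List<Rec> recs = new ArrayList<>(shuffledInput);
        recs.sort(Comparator.comparingInt(Rec::row).thenComparingInt(Rec::tag));

        List<double[]> out = new ArrayList<>();
        int i = 0;
        while (i < recs.size()) {
            int row = recs.get(i).row();
            double[] joined = new double[leftWidth + rightWidth];
            // Thanks to the sort order, tag 0 always arrives before tag 1,
            // so each output row can be filled in key order without
            // buffering; a missing side (skipped row) stays zero.
            while (i < recs.size() && recs.get(i).row() == row) {
                Rec r = recs.get(i++);
                int offset = r.tag() == 0 ? 0 : leftWidth;
                System.arraycopy(r.vec(), 0, joined, offset, r.vec().length);
            }
            out.add(joined);
        }
        return out;
    }
}
```

A production version would stream records rather than sorting a list, but the ordering guarantee exploited here is the same one Ricky Ho's partitioner provides.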
                
> Matrix Concatenate utility
> --------------------------
>
>                 Key: MAHOUT-884
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-884
>             Project: Mahout
>          Issue Type: New Feature
>          Components: Integration
>            Reporter: Lance Norskog
>            Priority: Minor
>         Attachments: MAHOUT-884.patch, MAHOUT-884.patch
>
>
> Utility to concatenate matrices stored as SequenceFiles of vectors.
> Each pair in the SequenceFile is the IntWritable row number and a 
> VectorWritable.
> The input and output files may skip rows. 
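The sequential behavior described above can be sketched as follows, assuming (hypothetically) that each SequenceFile of IntWritable/VectorWritable pairs is modeled as a sorted row-to-vector map and that rows skipped in one input are zero-padded; the class and parameter names are illustrative only:

```java
import java.util.*;

public class MatrixConcatSketch {
    // Concatenate two matrices row-by-row. Each map stands in for a
    // SequenceFile keyed by row number; aWidth/bWidth are the column
    // counts of the two inputs. Rows absent from either input are
    // padded with zeros, since either file may skip rows.
    static SortedMap<Integer, double[]> concat(SortedMap<Integer, double[]> a, int aWidth,
                                               SortedMap<Integer, double[]> b, int bWidth) {
        SortedMap<Integer, double[]> out = new TreeMap<>();
        SortedSet<Integer> rows = new TreeSet<>(a.keySet());
        rows.addAll(b.keySet());
        for (int row : rows) {
            double[] v = new double[aWidth + bWidth]; // zero-filled by default
            double[] av = a.get(row);
            double[] bv = b.get(row);
            if (av != null) System.arraycopy(av, 0, v, 0, av.length);
            if (bv != null) System.arraycopy(bv, 0, v, aWidth, bv.length);
            out.put(row, v);
        }
        return out;
    }
}
```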
