[ 
https://issues.apache.org/jira/browse/MAHOUT-372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen resolved MAHOUT-372.
------------------------------

       Resolution: Fixed
    Fix Version/s: 0.4
         Assignee: Sean Owen

Yes, sure, there's no particular limit to the number of mappers or reducers.

These are Hadoop params, which you can set on the command line with, for 
example:
-Dmapred.map.tasks=10 -Dmapred.reduce.tasks=10

Reopen if that doesn't quite answer the question. (We can also discuss on 
mahout-u...@apache.org, perhaps, if this isn't necessarily a bug or enhancement 
request.)
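For instance, the flags above can be passed straight through to RecommenderJob on the command line. (The job jar name and HDFS paths below are illustrative placeholders, not taken from this issue.)

```shell
# Pass Hadoop's task-count settings through to RecommenderJob.
# Jar name and input/output paths are hypothetical examples.
hadoop jar mahout-core-0.4-job.jar \
  org.apache.mahout.cf.taste.hadoop.item.RecommenderJob \
  -Dmapred.map.tasks=10 \
  -Dmapred.reduce.tasks=10 \
  --input /user/me/prefs.csv \
  --output /user/me/recommendations
```

Note that mapred.map.tasks is only a hint to Hadoop (the actual map count is driven mainly by the number of input splits), whereas mapred.reduce.tasks is honored as given.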

> Partitioning Collaborative Filtering Job into Maps and Reduces
> --------------------------------------------------------------
>
>                 Key: MAHOUT-372
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-372
>             Project: Mahout
>          Issue Type: Question
>          Components: Collaborative Filtering
>    Affects Versions: 0.4
>         Environment: Ubuntu Koala
>            Reporter: Kris Jack
>            Assignee: Sean Owen
>             Fix For: 0.4
>
>
> I am running the org.apache.mahout.cf.taste.hadoop.item.RecommenderJob main 
> on my hadoop cluster and it partitions the job in 2 although I have more than 
> 2 nodes available.  I was reading that the partitioning could be changed by 
> setting the JobConf's conf.setNumMapTasks(int num) and 
> conf.setNumReduceTasks(int num).
> Would I be right in assuming that this would speed up the processing by 
> increasing these (say, to 4)?  Can this code be partitioned into many 
> reducers?  If so, would setting them in the protected AbstractJob::JobConf 
> prepareJobConf() function be appropriate?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
