Partitioning Collaborative Filtering Job into Maps and Reduces
--------------------------------------------------------------

                 Key: MAHOUT-372
                 URL: https://issues.apache.org/jira/browse/MAHOUT-372
             Project: Mahout
          Issue Type: Question
          Components: Collaborative Filtering
    Affects Versions: 0.4
         Environment: Ubuntu Koala
            Reporter: Kris Jack


I am running the org.apache.mahout.cf.taste.hadoop.item.RecommenderJob main on 
my Hadoop cluster and it partitions the job into 2 tasks, although I have more 
than 2 nodes available.  I have read that the partitioning can be changed by 
calling the JobConf's conf.setNumMapTasks(int num) and 
conf.setNumReduceTasks(int num).

Would I be right in assuming that increasing these (say, to 4) would speed up 
the processing?  Can this code be partitioned across many reducers?  If so, 
would setting them in AbstractJob's protected JobConf prepareJobConf() method 
be appropriate?
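For reference, since the Mahout Hadoop jobs are driven through Hadoop's Tool/ToolRunner machinery, the reduce-task count can usually be overridden from the command line with a generic -D option rather than by editing prepareJobConf(). A minimal sketch follows; the jar name, input/output paths, and the assumption that RecommenderJob honours generic options are illustrative, not confirmed for this setup:

```shell
# Hypothetical invocation: request 4 reduce tasks via Hadoop's classic
# mapred.reduce.tasks property instead of changing code.  The jar name and
# paths below are placeholders for illustration.
hadoop jar mahout-core-0.4.job \
  org.apache.mahout.cf.taste.hadoop.item.RecommenderJob \
  -Dmapred.reduce.tasks=4 \
  --input input/ratings.csv \
  --output output
```

Note that mapred.reduce.tasks (and setNumReduceTasks) is a directive the framework follows, whereas setNumMapTasks is only a hint: the actual number of map tasks is driven by the input splits.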

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
