Wait! I thought there was only one RecommenderJob?

On Thu, Mar 15, 2012 at 3:44 AM, Sean Owen <sro...@gmail.com> wrote:
> You would still need to use the 'job' file generated by the build to
> get an artifact with all the dependencies.
> You don't need to add Guava as a dependency; it already is one. It's
> the job file that you're missing.
>
> There are two RecommenderJobs. One is what I call pseudo-distributed,
> yes. The other is a fully distributed item-based recommender.
>
> Sean
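
For illustration, a minimal sketch of launching the fully distributed, item-based
job from your own Java code. This is only a sketch: the class name, option names,
and the "mapred.jar" property are as I recall them for Mahout 0.6-era releases on
Hadoop 0.20/1.x, so check them against your versions, and all paths are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.util.ToolRunner;
    import org.apache.mahout.cf.taste.hadoop.item.RecommenderJob;

    public class LaunchDistributedRecommender {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the job at the 'job' artifact produced by the Mahout build so that
        // its bundled dependencies (Guava included) reach the task classpath.
        // Hypothetical path; "mapred.jar" is the Hadoop 0.20/1.x property name.
        conf.set("mapred.jar", "/path/to/mahout-core-0.6-job.jar");

        int exitCode = ToolRunner.run(conf, new RecommenderJob(), new String[] {
            "--input", "input.txt",          // userID,itemID[,preference] lines on HDFS
            "--output", "recommendations",   // HDFS output directory
            "--usersFile", "users.txt",      // only recommend for these users
            "--similarityClassname", "SIMILARITY_COOCCURRENCE"
        });
        System.exit(exitCode);
      }
    }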
>
> On Thu, Mar 15, 2012 at 10:01 AM, Janina <mail4jan...@googlemail.com> wrote:
>> Thanks for your fast answer.
>>
>> I didn't add the jar manually; I added the dependency to the pom.xml. I
>> tried it with and without the dependency, and with different versions of
>> it, but the error message stayed the same.
>>
>> But isn't the RecommenderJob meant to run a pseudo-distributed
>> recommender on a Hadoop cluster? Am I misunderstanding something? Or is
>> there another way to run recommendations on a Hadoop cluster? I have read
>> that only the clustering and classification parts of Mahout can really be
>> distributed on a Hadoop cluster.
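
For context, the pseudo-distributed RecommenderJob is, as I understand Sean's
description above, essentially an ordinary non-distributed Taste recommender fanned
out over users inside Hadoop mappers, while the item-based job is a genuine
multi-stage MapReduce pipeline. For comparison, the non-distributed kind of
recommender looks roughly like this (standard Taste API; the file path, user ID,
and similarity choice are just placeholders):

    import java.io.File;
    import java.util.List;
    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.TanimotoCoefficientSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.recommender.Recommender;
    import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

    public class LocalItemBasedExample {
      public static void main(String[] args) throws Exception {
        // Runs entirely in one JVM; this is the kind of Recommender the
        // pseudo-distributed job would run in parallel across mappers.
        DataModel model = new FileDataModel(new File("input.txt")); // userID,itemID,pref
        ItemSimilarity similarity = new TanimotoCoefficientSimilarity(model);
        Recommender recommender = new GenericItemBasedRecommender(model, similarity);
        List<RecommendedItem> top = recommender.recommend(123L, 10); // hypothetical user
        for (RecommendedItem item : top) {
          System.out.println(item.getItemID() + "\t" + item.getValue());
        }
      }
    }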
>>
>> 2012/3/15 Sean Owen <sro...@gmail.com>
>>
>>> You shouldn't have to add anything to your jar if you use the
>>> supplied 'job' file, which contains all transitive dependencies.
>>> If you do add your own jars, I think you need to unpack and repack
>>> them, not drop them into the overall jar as a nested jar file, even with a
>>> MANIFEST.MF entry. I am not sure that works on Hadoop.
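
One way to see which situation applies is to list the entries of the jar that is
actually submitted to the cluster and check whether the Guava classes appear as
unpacked .class entries or only as a nested guava-r09.jar. A quick check using
only the JDK (the path is hypothetical):

    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class JarContentsCheck {
      public static void main(String[] args) throws Exception {
        // Hypothetical path: the jar you actually submit to the cluster.
        JarFile jar = new JarFile("/path/to/my-recommender-job.jar");
        for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
          String name = e.nextElement().getName();
          // Unpacked Guava classes show up as .../primitives/Longs.class;
          // a nested jar shows up as a single guava-r09.jar entry instead.
          if (name.endsWith("com/google/common/primitives/Longs.class")
              || name.endsWith("guava-r09.jar")) {
            System.out.println(name);
          }
        }
        jar.close();
      }
    }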
>>>
>>> On Thu, Mar 15, 2012 at 9:42 AM, Janina <mail4jan...@googlemail.com>
>>> wrote:
>>> > Hi all,
>>> >
>>> > I am trying to run a RecommenderJob from a Java program. I have added the
>>> > files users.txt and input.txt to a Hadoop VM and use the run-method of
>>> > RecommenderJob to start the calculation. But the following error
>>> > message occurs while the MapReduce job runs:
>>> >
>>> > Error: java.lang.ClassNotFoundException: com.google.common.primitives.Longs
>>> >   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>> >   at java.security.AccessController.doPrivileged(Native Method)
>>> >   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>> >   at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>>> >   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>> >   at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>>> >   at org.apache.mahout.cf.taste.hadoop.TasteHadoopUtils.idToIndex(TasteHadoopUtils.java:61)
>>> >   at org.apache.mahout.cf.taste.hadoop.item.ItemIDIndexMapper.map(ItemIDIndexMapper.java:48)
>>> >   at org.apache.mahout.cf.taste.hadoop.item.ItemIDIndexMapper.map(ItemIDIndexMapper.java:31)
>>> >   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>>> >   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
>>> >   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>>> >   at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>> >
>>> > I have added the required guava-r09.jar explicitly to my jar, which also
>>> > lives on the Hadoop cluster.
>>> > This may be a stupid question, but does anyone know where this error
>>> > comes from? It would help me a lot.
>>> >
>>> > Thanks and greetings,
>>> > Janina
>>>
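
As an aside not raised in the thread itself: if the Mahout 'job' artifact cannot be
used for some reason, Hadoop's distributed cache is a standard way to put a single
extra jar such as Guava on every task's classpath. A rough sketch, assuming the
Hadoop 0.20/1.x DistributedCache API and a hypothetical HDFS path for the jar:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;

    public class ShipGuavaViaDistributedCache {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The jar must already sit on HDFS (hypothetical path); the distributed
        // cache then adds it to each task's classpath at runtime.
        DistributedCache.addFileToClassPath(new Path("/libs/guava-r09.jar"), conf);
        // ...then pass this conf to ToolRunner.run(conf, new RecommenderJob(), jobArgs)
      }
    }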



-- 
Lance Norskog
goks...@gmail.com
