Thanks, [1] is a very good reference for ClassNotFoundException.
As you say in [2], the mahout script does not accept hadoop job
parameters in all cases, though I hope it will in the future,
especially where a parameter to the job is a classname (seq2sparse,
for instance).
What I had to do was ...
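One workaround is to invoke the driver class through hadoop directly, so
the generic options are parsed as usual; the jar version, class name and
paths below are illustrative, not exact:

    hadoop jar mahout-core-0.6-job.jar \
        org.apache.mahout.cf.taste.hadoop.item.RecommenderJob \
        -Dmapred.reduce.tasks=10 --input input.txt --output output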
On 15.03.2012 15:51, Pat Ferrel wrote:
Can you elaborate on how to set these classpaths?
The job takes a classpath param, (from mahout --help)
-libjars comma separated jar files to include in
the classpath.
is the needed code bundled with the job or does the path need to be
available to hadoop ...
Wait! I thought there was only one RecommenderJob?
On Thu, Mar 15, 2012 at 3:44 AM, Sean Owen wrote:
> You would still need to use the 'job' file generated by the build to
> get an artifact with all the dependencies.
> You don't need to add Guava as a dependency; it already is one. It's
> the job file that you're missing.
Can you elaborate on how to set these classpaths?
The job takes a classpath param, (from mahout --help)
-libjars comma separated jar files to include in
the classpath.
is the needed code bundled with the job or does the path need to be
available to hadoop ...
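As far as I understand Hadoop's generic options, the jars you pass to
-libjars are local paths; Hadoop ships them to the cluster through the
distributed cache and puts them on the task classpath, so they do not
need to pre-exist on the worker nodes. Note that the generic options must
come before the job's own options. Something like this (names are
illustrative):

    hadoop jar mahout-core-0.6-job.jar \
        org.apache.mahout.cf.taste.hadoop.item.RecommenderJob \
        -libjars /local/path/guava-r09.jar \
        --input input.txt --output output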
On 15.03.2012 13:14, Janina wrote:
Ahhh, ok... yes, I am using this generated jar file with Hadoop, but this
does not change the error message about com.google.common.primitives.Longs.
Hello Janina,
Could you please enable bash debugging [1] and show us the command line
that submits your job?
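For example, assuming the standard launcher script (the command name and
options are illustrative):

    bash -x $MAHOUT_HOME/bin/mahout recommenditembased \
        --input input.txt --output output --usersFile users.txt

The -x trace prints every line the script executes, including the final
java/hadoop command and the classpath it builds.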
Ahhh, ok... yes, I am using this generated jar file with Hadoop, but this
does not change the error message about com.google.common.primitives.Longs.
2012/3/15 Sean Owen
> After 'mvn package' you should see a file ending in 'job.jar' under
> target/ This is the jar file to use with Hadoop.
>
After 'mvn package' you should see a file ending in 'job.jar' under
target/ This is the jar file to use with Hadoop.
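For example (module and version are illustrative; the exact file name
depends on your checkout):

    cd core
    mvn package
    ls target/*job.jar    # e.g. target/mahout-core-0.6-job.jar

That job.jar bundles the transitive dependencies, Guava included, so the
ClassNotFoundException for com.google.common.primitives.Longs should go
away once it is the jar you hand to hadoop.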
On Thu, Mar 15, 2012 at 10:56 AM, Janina wrote:
> This is great news! I was not quite sure if the item-based recommender is
> fully distributed, but this helps! Thanks!
>
> I have ...
This is great news! I was not quite sure if the item-based recommender is
fully distributed, but this helps! Thanks!
I have removed the dependency now. Though it may again be a stupid
question: what exactly do you mean by "I am missing the job file"? Sorry,
but I am completely new to Mahout.
You would still need to use the 'job' file generated by the build to
get an artifact with all the dependencies.
You don't need to add Guava as a dependency; it already is one. It's
the job file that you're missing.
There are two RecommenderJobs. One is what I call pseudo-distributed,
yes. The other ...
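(If I remember the package layout of that era correctly, the
pseudo-distributed one is
org.apache.mahout.cf.taste.hadoop.pseudo.RecommenderJob, which just runs a
non-distributed recommender inside mappers, while the fully distributed
item-based one is org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.)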
Thanks for your fast answer.
I haven't added the jar manually, but by adding the dependency to the
pom.xml. I tried it with and without the dependency and with different
versions of the dependency, but the error message remained the same.
But the RecommenderJob is meant to run as a pseudo-distr ...
You shouldn't have to add anything to your jar, if you use the
supplied 'job' file which contains all transitive dependencies.
If you do add your own jars, I think you need to unpack and repack
them, not put them into the overall jar as a jar file, even with a
MANIFEST.MF entry. I am not sure that ...
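A rough sketch of the unpack-and-repack approach (jar names are
illustrative; the manifests will collide, which is why real builds use
something like the maven assembly or shade plugins instead):

    mkdir merged && cd merged
    jar xf ../my-app.jar
    jar xf ../guava-r09.jar
    jar cf ../my-app-with-deps.jar .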
Hi all,
I am trying to run a RecommenderJob from a Java program. I have added the
files users.txt and input.txt to a Hadoop VM and use the run-method of
RecommenderJob to start the calculation. But the following error
message occurs while running the MapReduce job:
Error: java.lang.ClassNotFoundException: com.google.common.primitives.Longs
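A minimal sketch of driving the distributed job from Java, assuming the
Mahout job jar is on the program's classpath (option names as of Mahout
0.6; the similarity measure and paths are illustrative placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.util.ToolRunner;
    import org.apache.mahout.cf.taste.hadoop.item.RecommenderJob;

    public class RunRecommender {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // RecommenderJob implements Tool, so ToolRunner also parses the
        // generic options (-libjars, -D...) before the job's own flags.
        int exit = ToolRunner.run(conf, new RecommenderJob(), new String[] {
            "--input", "input.txt",       // user,item[,pref] lines on HDFS
            "--usersFile", "users.txt",   // users to recommend for
            "--output", "output",
            "--similarityClassname", "SIMILARITY_COOCCURRENCE"
        });
        System.exit(exit);
      }
    }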