Hi, thanks for the reply.

Here is my output:

[hduser@vm38 ~]$ /usr/lib/hadoop/bin/hadoop version
Hadoop 2.2.0.2.0.6.0-101
Subversion g...@github.com:hortonworks/hadoop.git -r b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using /usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar

[hduser@vm38 ~]$ /usr/lib/hadoop/bin/hadoop jar mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar org.apache.mahout.classifier.df.mapreduce.BuildForest -d input/data666.noheader.data -ds input/data666.noheader.data.info -sl 5 -p -t 100 -o nsl-forest

...
14/03/04 16:22:51 INFO mapreduce.Job:  map 0% reduce 0%
14/03/04 16:23:12 INFO mapreduce.Job:  map 100% reduce 0%
14/03/04 16:23:43 INFO mapreduce.Job: Job job_1393936067845_0013 completed successfully
14/03/04 16:23:44 INFO mapreduce.Job: Counters: 27
        File System Counters
                FILE: Number of bytes read=2994
                FILE: Number of bytes written=80677
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=880103
                HDFS: Number of bytes written=2436546
                HDFS: Number of read operations=5
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=45253
                Total time spent by all reduces in occupied slots (ms)=0
        Map-Reduce Framework
                Map input records=9994
                Map output records=100
                Input split bytes=123
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=456
                CPU time spent (ms)=36010
                Physical memory (bytes) snapshot=180752384
                Virtual memory (bytes) snapshot=994275328
                Total committed heap usage (bytes)=101187584
        File Input Format Counters
                Bytes Read=879980
        File Output Format Counters
                Bytes Written=2436546
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
        at org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.processOutput(PartialBuilder.java:113)
        at org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.parseOutput(PartialBuilder.java:89)
        at org.apache.mahout.classifier.df.mapreduce.Builder.build(Builder.java:294)
        at org.apache.mahout.classifier.df.mapreduce.BuildForest.buildForest(BuildForest.java:228)
        at org.apache.mahout.classifier.df.mapreduce.BuildForest.run(BuildForest.java:188)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.mahout.classifier.df.mapreduce.BuildForest.main(BuildForest.java:252)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
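
As far as I understand it, this IncompatibleClassChangeError is a binary-compatibility problem: in Hadoop 1.x org.apache.hadoop.mapreduce.JobContext is a class, while in Hadoop 2.x it became an interface, so code compiled against the old class breaks at run time. A minimal sketch of the kind of call that trips it (the probe class below is hypothetical, only for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.JobContext;

// Compiled against Hadoop 1, the call below is emitted as invokevirtual on the
// JobContext class; run against Hadoop 2, where JobContext is an interface,
// the JVM throws java.lang.IncompatibleClassChangeError ("Found interface
// org.apache.hadoop.mapreduce.JobContext, but class was expected"), exactly
// as in the stack trace above.
public class JobContextProbe {
    static Configuration confOf(JobContext context) {
        return context.getConfiguration();
    }
}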



Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 04/03/14 16:11, Sergey Svinarchuk wrote:
Sorry, I didn't see that you are trying to use mahout-1.0-SNAPSHOT.
You used /usr/lib/hadoop-yarn/bin/yarn, but you need to use
/usr/lib/hadoop/bin/hadoop; then your example will succeed.


On Tue, Mar 4, 2014 at 3:45 PM, Sergey Svinarchuk <
ssvinarc...@hortonworks.com> wrote:

Mahout 0.9 does not support Hadoop 2 dependencies.
You can use mahout-1.0-SNAPSHOT, or apply the patch from
https://issues.apache.org/jira/browse/MAHOUT-1329 to your Mahout build to add
Hadoop 2 support.


On Tue, Mar 4, 2014 at 3:38 PM, Margusja <mar...@roo.ee> wrote:

Hi

I am running the following command:
/usr/lib/hadoop-yarn/bin/yarn jar 
mahout-distribution-0.9/mahout-examples-0.9.jar
org.apache.mahout.classifier.df.mapreduce.BuildForest -d
input/data666.noheader.data -ds input/data666.noheader.data.info -sl 5
-p -t 100 -o nsl-forest

It worked when I used Hadoop 1.x.
Now that I am using hadoop-2.2.0, it gives me:
14/03/04 15:25:58 INFO mapreduce.BuildForest: Partial Mapred
implementation
14/03/04 15:25:58 INFO mapreduce.BuildForest: Building the forest...
14/03/04 15:26:01 INFO client.RMProxy: Connecting to ResourceManager at /
0.0.0.0:8032
14/03/04 15:26:05 INFO input.FileInputFormat: Total input paths to
process : 1
14/03/04 15:26:05 INFO mapreduce.JobSubmitter: number of splits:1
14/03/04 15:26:05 INFO Configuration.deprecation: user.name is
deprecated. Instead, use mapreduce.job.user.name
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.jar is
deprecated. Instead, use mapreduce.job.jar
14/03/04 15:26:05 INFO Configuration.deprecation:
mapred.cache.files.filesizes is deprecated. Instead, use
mapreduce.job.cache.files.filesizes
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.cache.files is
deprecated. Instead, use mapreduce.job.cache.files
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.reduce.tasks is
deprecated. Instead, use mapreduce.job.reduces
14/03/04 15:26:05 INFO Configuration.deprecation:
mapred.output.value.class is deprecated. Instead, use
mapreduce.job.output.value.class
14/03/04 15:26:05 INFO Configuration.deprecation: mapreduce.map.class is
deprecated. Instead, use mapreduce.job.map.class
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.job.name is
deprecated. Instead, use mapreduce.job.name
14/03/04 15:26:05 INFO Configuration.deprecation:
mapreduce.inputformat.class is deprecated. Instead, use
mapreduce.job.inputformat.class
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.input.dir is
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.output.dir is
deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/04 15:26:05 INFO Configuration.deprecation:
mapreduce.outputformat.class is deprecated. Instead, use
mapreduce.job.outputformat.class
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.map.tasks is
deprecated. Instead, use mapreduce.job.maps
14/03/04 15:26:05 INFO Configuration.deprecation:
mapred.cache.files.timestamps is deprecated. Instead, use
mapreduce.job.cache.files.timestamps
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.output.key.class
is deprecated. Instead, use mapreduce.job.output.key.class
14/03/04 15:26:05 INFO Configuration.deprecation: mapred.working.dir is
deprecated. Instead, use mapreduce.job.working.dir
14/03/04 15:26:06 INFO mapreduce.JobSubmitter: Submitting tokens for job:
job_1393936067845_0011
14/03/04 15:26:07 INFO impl.YarnClientImpl: Submitted application
application_1393936067845_0011 to ResourceManager at /0.0.0.0:8032
14/03/04 15:26:07 INFO mapreduce.Job: The url to track the job:
http://vm38.dbweb.ee:8088/proxy/application_1393936067845_0011/
14/03/04 15:26:07 INFO mapreduce.Job: Running job: job_1393936067845_0011
14/03/04 15:26:36 INFO mapreduce.Job: Job job_1393936067845_0011 running
in uber mode : false
14/03/04 15:26:36 INFO mapreduce.Job:  map 0% reduce 0%
14/03/04 15:27:00 INFO mapreduce.Job:  map 100% reduce 0%
14/03/04 15:27:26 INFO mapreduce.Job: Job job_1393936067845_0011
completed successfully
14/03/04 15:27:26 INFO mapreduce.Job: Counters: 27
         File System Counters
                 FILE: Number of bytes read=2994
                 FILE: Number of bytes written=80677
                 FILE: Number of read operations=0
                 FILE: Number of large read operations=0
                 FILE: Number of write operations=0
                 HDFS: Number of bytes read=880103
                 HDFS: Number of bytes written=2483042
                 HDFS: Number of read operations=5
                 HDFS: Number of large read operations=0
                 HDFS: Number of write operations=2
         Job Counters
                 Launched map tasks=1
                 Data-local map tasks=1
                 Total time spent by all maps in occupied slots (ms)=46056
                 Total time spent by all reduces in occupied slots (ms)=0
         Map-Reduce Framework
                 Map input records=9994
                 Map output records=100
                 Input split bytes=123
                 Spilled Records=0
                 Failed Shuffles=0
                 Merged Map outputs=0
                 GC time elapsed (ms)=425
                 CPU time spent (ms)=32890
                 Physical memory (bytes) snapshot=189755392
                 Virtual memory (bytes) snapshot=992145408
                 Total committed heap usage (bytes)=111673344
         File Input Format Counters
                 Bytes Read=879980
         File Output Format Counters
                 Bytes Written=2483042
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
        at org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.processOutput(PartialBuilder.java:113)
        at org.apache.mahout.classifier.df.mapreduce.partial.PartialBuilder.parseOutput(PartialBuilder.java:89)
        at org.apache.mahout.classifier.df.mapreduce.Builder.build(Builder.java:294)
        at org.apache.mahout.classifier.df.mapreduce.BuildForest.buildForest(BuildForest.java:228)
        at org.apache.mahout.classifier.df.mapreduce.BuildForest.run(BuildForest.java:188)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.mahout.classifier.df.mapreduce.BuildForest.main(BuildForest.java:252)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

I even downloaded the source from https://github.com/apache/mahout.git and
built it with:
mvn -DskipTests -Dhadoop2.version=2.2.0 clean install
and then used the command line:
/usr/lib/hadoop-yarn/bin/yarn jar 
mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
org.apache.mahout.classifier.df.mapreduce.BuildForest -d
input/data666.noheader.data -ds input/data666.noheader.data.info -sl 5
-p -t 100 -o nsl-forest

and got the same error as above.

Is there something wrong on my side, or can hadoop-2.2.0 and Mahout no longer
work together?

The typical example:
/usr/lib/hadoop-yarn/bin/yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.2.0.2.0.6.0-101.jar pi 2 5
works.

--
Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----


