Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-17 Thread Margusja

Hi, thanks for your reply.

What I did:
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean install -Dhadoop2.profile=hadoop2
(is "hadoop2" the right string? I found it in the pom's profiles section, so I used it)


...
it compiled:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Mahout Build Tools ................................ SUCCESS [  1.751 s]
[INFO] Apache Mahout ..................................... SUCCESS [  0.484 s]
[INFO] Mahout Math ....................................... SUCCESS [ 12.946 s]
[INFO] Mahout Core ....................................... SUCCESS [ 14.192 s]
[INFO] Mahout Integration ................................ SUCCESS [  1.857 s]
[INFO] Mahout Examples ................................... SUCCESS [ 10.762 s]
[INFO] Mahout Release Package ............................ SUCCESS [  0.012 s]
[INFO] Mahout Math/Scala wrappers ........................ SUCCESS [ 25.431 s]
[INFO] Mahout Spark bindings ............................. SUCCESS [ 40.376 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:48 min
[INFO] Finished at: 2014-03-17T12:06:31+02:00
[INFO] Final Memory: 79M/2947M
[INFO] ------------------------------------------------------------------------

How can I check whether the hadoop2 libs are actually in use?

But unfortunately, the same error again:
[speech@h14 ~]$ mahout/bin/mahout seqdirectory -c UTF-8 -i 
/user/speech/demo -o demo-seqfiles

MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/bin/hadoop and 
HADOOP_CONF_DIR=/etc/hadoop/conf
MAHOUT-JOB: 
/home/speech/mahout/examples/target/mahout-examples-1.0-SNAPSHOT-job.jar
14/03/17 12:07:21 INFO common.AbstractJob: Command line arguments: 
{--charset=[UTF-8], --chunkSize=[64], --endPhase=[2147483647], 
--fileFilterClass=[org.apache.mahout.text.PrefixAdditionFilter], 
--input=[/user/speech/demo], --keyPrefix=[], --method=[mapreduce], 
--output=[demo-seqfiles], --startPhase=[0], --tempDir=[temp]}
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.input.dir is 
deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/17 12:07:22 INFO Configuration.deprecation: 
mapred.compress.map.output is deprecated. Instead, use 
mapreduce.map.output.compress
14/03/17 12:07:22 INFO Configuration.deprecation: mapred.output.dir is 
deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/17 12:07:22 INFO Configuration.deprecation: session.id is 
deprecated. Instead, use dfs.metrics.session-id
14/03/17 12:07:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
processName=JobTracker, sessionId=
14/03/17 12:07:23 INFO input.FileInputFormat: Total input paths to 
process : 10
14/03/17 12:07:23 INFO input.CombineFileInputFormat: DEBUG: Terminated 
node allocation with : CompletedNodes: 4, size left: 29775

14/03/17 12:07:23 INFO mapreduce.JobSubmitter: number of splits:1
14/03/17 12:07:23 INFO Configuration.deprecation: user.name is 
deprecated. Instead, use mapreduce.job.user.name
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.output.compress 
is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.jar is 
deprecated. Instead, use mapreduce.job.jar
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.reduce.tasks is 
deprecated. Instead, use mapreduce.job.reduces
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.value.class is deprecated. Instead, use 
mapreduce.job.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.value.class is deprecated. Instead, use 
mapreduce.map.output.value.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapreduce.map.class is 
deprecated. Instead, use mapreduce.job.map.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.job.name is 
deprecated. Instead, use mapreduce.job.name
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.inputformat.class is deprecated. Instead, use 
mapreduce.job.inputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.max.split.size 
is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapreduce.outputformat.class is deprecated. Instead, use 
mapreduce.job.outputformat.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.map.tasks is 
deprecated. Instead, use mapreduce.job.maps
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.output.key.class is deprecated. Instead, use 
mapreduce.job.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: 
mapred.mapoutput.key.class is deprecated. Instead, use 
mapreduce.map.output.key.class
14/03/17 12:07:23 INFO Configuration.deprecation: mapred.working.dir is 
deprecated. Instead, 

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-17 Thread Margusja

Okay, sorry for the mess.

[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean install -Dhadoop2.version=2.2.0
That did the trick.


Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 17/03/14 12:16, Margusja wrote:

...

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-04 Thread Margusja

Thank you for the reply, I got it working.

[hduser@vm38 ~]$ /usr/lib/hadoop-yarn/bin/yarn version
Hadoop 2.2.0.2.0.6.0-101
Subversion g...@github.com:hortonworks/hadoop.git -r 
b07b2906c36defd389c8b5bd22bebc1bead8115b

Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using 
/usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar

[hduser@vm38 ~]$

The main problem, I think, was that I had the yarn binary in two places and I was 
using the wrong one, which didn't pick up my yarn-site.xml.
Every time I looked into .staging/job.../job.xml, the values came from 
<source>yarn-default.xml</source> even though I had set them in yarn-site.xml.
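
That job.xml check can be scripted. A minimal sketch with Python's stdlib XML parser, assuming the <property><name/><value/><source/></property> layout the staged job.xml shows (the sample property names in the comment are illustrative):

```python
import xml.etree.ElementTree as ET

def property_sources(job_xml_text):
    """Map each property name in a Hadoop job.xml dump to the file it came from.

    Returns name -> source string (or None if no <source> element is recorded),
    so properties still resolved from yarn-default.xml stand out.
    """
    root = ET.fromstring(job_xml_text)
    out = {}
    for prop in root.iter("property"):
        out[prop.findtext("name")] = prop.findtext("source")
    return out

# Usage idea: read .staging/job_.../job.xml and print every entry whose
# source is "yarn-default.xml" even though you set it in yarn-site.xml.
```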


Typical mess up :)

Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 04/03/14 05:14, Rohith Sharma K S wrote:

Hi

   The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto 
overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that 
Hadoop is compiled with protoc 2.5.0, but a lower version of 
protobuf is present in the classpath.

1. Check the MRAppMaster classpath to see which version of protobuf is on it. 
It is expected to be 2.5.0.
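
Rohith's classpath check can be sketched as a small stdlib-only script (the jar paths in the test data are hypothetical): parse protobuf-java jar names out of a classpath string and flag anything below 2.5.0.

```python
import os
import re

def protobuf_versions(classpath):
    """Return (jar_name, version_tuple) for every protobuf-java jar found."""
    found = []
    for entry in classpath.split(os.pathsep):
        m = re.search(r"protobuf-java-(\d+)\.(\d+)\.(\d+)\.jar$", entry)
        if m:
            found.append((os.path.basename(entry), tuple(map(int, m.groups()))))
    return found

def too_old(classpath, minimum=(2, 5, 0)):
    """Jar names whose protobuf version is below the required minimum."""
    return [jar for jar, ver in protobuf_versions(classpath) if ver < minimum]
```

Feed it the expanded classpath from the MRAppMaster launch context; any hit from too_old() is a candidate culprit for the VerifyError.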



Thanks & Regards
Rohith Sharma K S



-----Original Message-----
From: Margusja [mailto:mar...@roo.ee]
Sent: 03 March 2014 22:45
To: user@hadoop.apache.org
Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto 
overrides final method getUnknownFields

...

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-03 Thread Ted Yu
Can you tell us the Hadoop release you're using?

Seems there is an inconsistency in the protobuf library versions.


On Mon, Mar 3, 2014 at 8:01 AM, Margusja mar...@roo.ee wrote:

 Hi

 I even don't know what information to provide but my container log is:

 2014-03-03 17:36:05,311 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
 java.lang.VerifyError: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
 at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.getDeclaredConstructors0(Native Method)
 at java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
 at java.lang.Class.getConstructor0(Class.java:2803)
 at java.lang.Class.getConstructor(Class.java:1718)
 at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
 at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
 at org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
 at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
 at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
 at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)


 Where to start digging?

 --
 Regards, Margus (Margusja) Roo
 +372 51 48 780
 http://margus.roo.ee
 http://ee.linkedin.com/in/margusroo
 skype: margusja
 ldapsearch -x -h ldap.sk.ee -b c=EE (serialNumber=37303140314)
 -----BEGIN PUBLIC KEY-----
 MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
 5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
 RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
 BjM8j36yJvoBVsfOHQIDAQAB
 -----END PUBLIC KEY-----




Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-03 Thread Margusja

Hi

2.2.0 and 2.3.0 gave me the same container log.

A little bit more detail.
I'm trying to submit the job with an external Java client.
Some lines from the Maven pom.xml file:
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.3.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
</dependency>

lines from external client:
...
2014-03-03 17:36:01 INFO  FileInputFormat:287 - Total input paths to 
process : 1

2014-03-03 17:36:02 INFO  JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO  JobSubmitter:479 - Submitting tokens for job: 
job_1393848686226_0018
2014-03-03 17:36:04 INFO  YarnClientImpl:166 - Submitted application 
application_1393848686226_0018
2014-03-03 17:36:04 INFO  Job:1289 - The url to track the job: 
http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/

2014-03-03 17:36:04 INFO  Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO  Job:1355 - Job job_1393848686226_0018 running 
in uber mode : false

2014-03-03 17:36:10 INFO  Job:1362 -  map 0% reduce 0%
2014-03-03 17:36:10 INFO  Job:1375 - Job job_1393848686226_0018 failed 
with state FAILED due to: Application application_1393848686226_0018 
failed 2 times due to AM Container for 
appattempt_1393848686226_0018_02 exited with  exitCode: 1 due to: 
Exception from container-launch:

org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
...

Lines from namenode:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 
Total time for transactions(ms): 69 Number of transactions batched in 
Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
90.190.106.33:50010 to delete [blk_1073742050_1226]
14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742051_1227 90.190.106.33:50010
14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/input/data666.noheader.data.info. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: 
/user/hduser/input/data666.noheader.data.info is closed by 
DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: 
/user/hduser/.staging/job_1393848686226_0019/job.jar. 
BP-802201089-90.190.106.33-1393506052071 
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 
90.190.106.33:50010 to delete [blk_1073742051_1227]
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 90.190.106.33:50010 is added to 
blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, 
primaryNodeIndex=-1, 

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-03 Thread Stanley Shi
Why do you have two Hadoop versions in the same pom file? This way you are
not going to know which Hadoop classes you are actually using.

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.3.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
</dependency>
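
Stanley's point can be illustrated with a toy checker (not a real Maven resolver; the tuples simply mirror the pom above): hadoop-client 2.3.0 and hadoop-core 1.2.1 request two different Hadoop major lines, so which class actually loads depends on classpath order.

```python
def hadoop_major_lines(deps):
    """Return the set of Hadoop major versions requested by a dependency list.

    deps is a list of (groupId, artifactId, version) tuples.
    """
    majors = set()
    for group, artifact, version in deps:
        if group == "org.apache.hadoop":
            majors.add(version.split(".")[0])
    return majors

deps = [
    ("org.apache.hadoop", "hadoop-client", "2.3.0"),
    ("org.apache.hadoop", "hadoop-core", "1.2.1"),
]
# len(hadoop_major_lines(deps)) > 1 means the build mixes Hadoop 1 and Hadoop 2.
```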



Regards,
*Stanley Shi,*



On Tue, Mar 4, 2014 at 1:15 AM, Margusja mar...@roo.ee wrote:

 ...

RE: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-03 Thread Rohith Sharma K S
Hi

  The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final 
method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that Hadoop is 
compiled with protoc 2.5.0, but a lower version of 
protobuf is present in the classpath.

1. Check the MRAppMaster classpath to see which version of protobuf is on it. 
It is expected to be 2.5.0.

Thanks & Regards
Rohith Sharma K S



-----Original Message-----
From: Margusja [mailto:mar...@roo.ee] 
Sent: 03 March 2014 22:45
To: user@hadoop.apache.org
Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto 
overrides final method getUnknownFields

...