Hey Tim,

That was my best guess - looks like you've got the right metrics config. Anyone else have an idea?

Brian

On Oct 14, 2009, at 9:37 AM, tim robertson wrote:
Hi Brian,

Thanks for replying. I don't even know what the metrics really are, so I have never knowingly touched them. I have only done the basic core and site configuration along with the masters/slaves lists, then a namenode format, and started the cluster to get this error. All of this is based on the current 0.20.1 download from Apache. Looking at the properties file, I think I am using the NullContext... could you please verify this is correct?

Many thanks for your help!
Tim

# Configuration of the "dfs" context for null
dfs.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "dfs" context for file
#dfs.class=org.apache.hadoop.metrics.file.FileContext
#dfs.period=10
#dfs.fileName=/tmp/dfsmetrics.log

# Configuration of the "dfs" context for ganglia
# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# dfs.period=10
# dfs.servers=localhost:8649

# Configuration of the "mapred" context for null
mapred.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "mapred" context for file
#mapred.class=org.apache.hadoop.metrics.file.FileContext
#mapred.period=10
#mapred.fileName=/tmp/mrmetrics.log

# Configuration of the "mapred" context for ganglia
# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# mapred.period=10
# mapred.servers=localhost:8649

# Configuration of the "jvm" context for null
jvm.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "jvm" context for file
#jvm.class=org.apache.hadoop.metrics.file.FileContext
#jvm.period=10
#jvm.fileName=/tmp/jvmmetrics.log

# Configuration of the "jvm" context for ganglia
# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# jvm.period=10
# jvm.servers=localhost:8649

On Wed, Oct 14, 2009 at 3:12 PM, Brian Bockelman <bbock...@cse.unl.edu> wrote:

Hey Tim,

Can you see if that goes away if you use the null metrics context (the error appears to be coming from metrics - have you touched them?)?

Brian

On Oct 14, 2009, at 4:33 AM, tim robertson wrote:

Hi all,

Using hadoop 0.20.1 I am seeing the following on my namenode startup:

2009-10-14 11:09:54,232 INFO org.apache.hadoop.ipc.Server: Error register getProtocolVersion
java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion

Could someone please point me in the right direction for diagnosing where I have gone wrong?
Thanks Tim

/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cluma.gbif.org/192.38.28.77
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep 1 20:55:56 UTC 2009
************************************************************/
2009-10-14 11:09:53,010 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=8020
2009-10-14 11:09:53,013 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: cluma.gbif.clu/192.168.76.254:8020
2009-10-14 11:09:53,015 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2009-10-14 11:09:53,019 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object: org.apache.hadoop.metrics.spi.NullContext
2009-10-14 11:09:53,056 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
2009-10-14 11:09:53,057 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2009-10-14 11:09:53,057 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2009-10-14 11:09:53,064 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object: org.apache.hadoop.metrics.spi.NullContext
2009-10-14 11:09:53,065 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2009-10-14 11:09:53,090 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2009-10-14 11:09:53,094 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2009-10-14 11:09:53,094 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94 loaded in 0 seconds.
2009-10-14 11:09:53,094 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /hadoop/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2009-10-14 11:09:53,098 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94 saved in 0 seconds.
2009-10-14 11:09:53,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 72 msecs
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2009-10-14 11:09:53,114 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2009-10-14 11:09:53,198 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2009-10-14 11:09:53,242 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2009-10-14 11:09:53,243 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2009-10-14 11:09:53,243 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2009-10-14 11:09:53,243 INFO org.mortbay.log: jetty-6.1.14
2009-10-14 11:09:53,433 INFO org.mortbay.log: Started selectchannelconnec...@0.0.0.0:50070
2009-10-14 11:09:53,434 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2009-10-14 11:09:53,434 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2009-10-14 11:09:53,435 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020: starting
2009-10-14 11:09:53,438 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 8020: starting
2009-10-14 11:09:53,439 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 8020: starting
2009-10-14 11:09:53,439 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020: starting
2009-10-14 11:09:53,439 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8020: starting
2009-10-14 11:09:53,441 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8020: starting
2009-10-14 11:09:53,444 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020: starting
2009-10-14 11:09:54,232 INFO org.apache.hadoop.ipc.Server: Error register getProtocolVersion
java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
    at org.apache.hadoop.metrics.util.MetricsRegistry.add(MetricsRegistry.java:53)
    at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:89)
    at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:99)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2009-10-14 11:09:54,667 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.3:50010 storage DS-551038728-192.168.76.3-50010-1255511394664
2009-10-14 11:09:54,668 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.3:50010
2009-10-14 11:09:54,673 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.2:50010 storage DS-681115827-192.168.76.2-50010-1255511394673
2009-10-14 11:09:54,674 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.2:50010
2009-10-14 11:09:54,689 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.7:50010 storage DS-1731752631-192.168.76.7-50010-1255511394688
2009-10-14 11:09:54,689 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.7:50010
2009-10-14 11:09:54,701 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.6:50010 storage DS-338163265-192.168.76.6-50010-1255511394698
2009-10-14 11:09:54,701 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.6:50010
2009-10-14 11:09:59,688 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.1:50010 storage DS-988595128-192.168.76.1-50010-1255511399688
2009-10-14 11:09:59,689 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.1:50010
2009-10-14 11:10:04,277 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.9:50010 storage DS-1096719686-192.168.76.9-50010-1255511404276
2009-10-14 11:10:04,278 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.9:50010
2009-10-14 11:10:04,289 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.5:50010 storage DS-1405051412-192.168.76.5-50010-1255511404288
2009-10-14 11:10:04,289 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.5:50010
2009-10-14 11:10:05,961 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.8:50010 storage DS-1138617755-192.168.76.8-50010-1255511405960
2009-10-14 11:10:05,961 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.8:50010
2009-10-14 11:10:06,154 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.76.4:50010 storage DS-569548170-192.168.76.4-50010-1255511406153
2009-10-14 11:10:06,154 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.76.4:50010
2009-10-14 11:10:14,480 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 0 time(s).
2009-10-14 11:10:35,483 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 1 time(s).
2009-10-14 11:10:56,486 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 2 time(s).
2009-10-14 11:11:17,489 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 3 time(s).
2009-10-14 11:11:38,492 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 4 time(s).
2009-10-14 11:11:59,495 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 5 time(s).
2009-10-14 11:12:20,498 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 6 time(s).
2009-10-14 11:12:41,501 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 7 time(s).
2009-10-14 11:13:02,504 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 8 time(s).
2009-10-14 11:13:23,506 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: cluma.gbif.clu/192.168.76.254:8020. Already tried 9 time(s).
etc.
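
For anyone hitting the same trace: the exception comes out of the metrics registry rather than the context config. MetricsRegistry.add refuses a second metric under an already-registered name, and the trace shows RPC$Server.call constructing a MetricsTimeVaryingRate per called method name, so getProtocolVersion (which every protocol client invokes) can apparently collide when more than one protocol hits the same server. That would also explain why switching metrics contexts doesn't make it go away. A minimal sketch of the failure mode, using the classes named in the stack trace (the standalone wiring below is illustrative, not how the NameNode actually sets these up):

import org.apache.hadoop.metrics.util.MetricsRegistry;
import org.apache.hadoop.metrics.util.MetricsTimeVaryingRate;

public class DuplicateMetricDemo {
  public static void main(String[] args) {
    MetricsRegistry registry = new MetricsRegistry();
    // First registration of "getProtocolVersion" goes in fine.
    new MetricsTimeVaryingRate("getProtocolVersion", registry);
    // A second registration under the same name throws
    // java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
    new MetricsTimeVaryingRate("getProtocolVersion", registry);
  }
}

Note the server logs this at INFO and keeps running, so the datanode registrations that follow it in the log are consistent with the registry collision being non-fatal.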
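And to double-check the "am I actually on the NullContext?" question from earlier in the thread, here is a short sketch that prints which MetricsContext implementation each context name resolves to. It assumes hadoop-core-0.20.1.jar and the conf directory containing hadoop-metrics.properties are on the classpath; the class name CheckMetricsContexts is just a throwaway for illustration:

import org.apache.hadoop.metrics.MetricsContext;
import org.apache.hadoop.metrics.MetricsUtil;

public class CheckMetricsContexts {
  public static void main(String[] args) {
    for (String name : new String[] {"dfs", "mapred", "jvm"}) {
      // MetricsUtil.getContext resolves the name through
      // hadoop-metrics.properties; an unconfigured or broken
      // context falls back to the NullContext.
      MetricsContext ctx = MetricsUtil.getContext(name);
      System.out.println(name + " -> " + ctx.getClass().getName());
    }
  }
}

With the properties file quoted above, all three names should print org.apache.hadoop.metrics.spi.NullContext.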