Hi,

Many thanks for the prompt response.

Yes, there are errors when tailing the logs on the NameNode.
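
For reference, this is roughly how I am tailing the NameNode log (a sketch;
the exact log file name, hadoop-hadoop-namenode-bam-n1.log, is an assumption
based on my user and host names):

tail -f $HADOOP_HOME/logs/hadoop-hadoop-namenode-bam-n1.log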

2014-08-08 17:08:08,868 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = bam-n1/10.4.128.33
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
2014-08-08 17:08:09,050 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2014-08-08 17:08:09,062 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2014-08-08 17:08:09,063 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2014-08-08 17:08:09,063 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
started
2014-08-08 17:08:09,730 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2014-08-08 17:08:09,734 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
exists!
2014-08-08 17:08:09,741 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
registered.
2014-08-08 17:08:09,750 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
NameNode registered.
2014-08-08 17:08:09,784 INFO org.apache.hadoop.hdfs.util.GSet: VM type
  = 64-bit
2014-08-08 17:08:09,784 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
memory = 19.33375 MB
2014-08-08 17:08:09,784 INFO org.apache.hadoop.hdfs.util.GSet: capacity
 = 2^21 = 2097152 entries
2014-08-08 17:08:09,784 INFO org.apache.hadoop.hdfs.util.GSet:
recommended=2097152, actual=2097152
2014-08-08 17:08:09,837 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2014-08-08 17:08:09,837 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2014-08-08 17:08:09,837 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2014-08-08 17:08:09,841 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.block.invalidate.limit=100
2014-08-08 17:08:09,841 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
accessTokenLifetime=0 min(s)
2014-08-08 17:08:10,114 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStateMBean and NameNodeMXBean
2014-08-08 17:08:10,128 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
occuring more than 10 times
2014-08-08 17:08:10,138 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files = 1
2014-08-08 17:08:10,140 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files under construction = 0
2014-08-08 17:08:10,141 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 112 loaded in 0 seconds.
2014-08-08 17:08:10,141 INFO org.apache.hadoop.hdfs.server.common.Storage:
Edits file /home/hadoop/hadoop-1.0.4/dfs/name/current/edits of size 4 edits
# 0 loaded in 0 seconds.
2014-08-08 17:08:10,146 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 112 saved in 0 seconds.
2014-08-08 17:08:10,332 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 112 saved in 0 seconds.
2014-08-08 17:08:10,498 INFO
org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
entries 0 lookups
2014-08-08 17:08:10,498 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
FSImage in 690 msecs
2014-08-08 17:08:10,513 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
= 0
2014-08-08 17:08:10,513 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
blocks = 0
2014-08-08 17:08:10,513 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
under-replicated blocks = 0
2014-08-08 17:08:10,513 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
 over-replicated blocks = 0
2014-08-08 17:08:10,513 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Safe mode termination scan for invalid, over- and under-replicated blocks
completed in 13 msec
2014-08-08 17:08:10,513 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Leaving safe mode after 0 secs.
2014-08-08 17:08:10,513 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2014-08-08 17:08:10,513 INFO org.apache.hadoop.hdfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2014-08-08 17:08:10,531 INFO org.apache.hadoop.util.HostsFileReader:
Refreshing hosts (include/exclude) list
2014-08-08 17:08:10,534 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2014-08-08 17:08:10,534 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
processing time, 1 msec clock time, 1 cycles
2014-08-08 17:08:10,534 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2014-08-08 17:08:10,534 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
processing time, 0 msec clock time, 1 cycles
2014-08-08 17:08:10,574 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
FSNamesystemMetrics registered.
2014-08-08 17:08:10,597 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
RpcDetailedActivityForPort9000 registered.
2014-08-08 17:08:10,597 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
RpcActivityForPort9000 registered.
2014-08-08 17:08:10,598 INFO org.apache.hadoop.ipc.Server: Starting
SocketReader
2014-08-08 17:08:10,605 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: node1/
10.4.128.33:9000
2014-08-08 17:08:10,807 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2014-08-08 17:08:10,887 INFO org.apache.hadoop.http.HttpServer: Added
global filtersafety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-08-08 17:08:10,897 INFO org.apache.hadoop.http.HttpServer:
dfs.webhdfs.enabled = false
2014-08-08 17:08:10,907 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is
-1. Opening the listener on 50070
2014-08-08 17:08:10,908 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50070
webServer.getConnectors()[0].getLocalPort() returned 50070
2014-08-08 17:08:10,908 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50070
2014-08-08 17:08:10,909 INFO org.mortbay.log: jetty-6.1.26
2014-08-08 17:08:11,432 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50070
2014-08-08 17:08:11,432 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
0.0.0.0:50070
2014-08-08 17:08:11,432 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2014-08-08 17:08:11,436 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 9000: starting
2014-08-08 17:08:11,440 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 9000: starting
2014-08-08 17:08:11,440 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 9000: starting
2014-08-08 17:08:11,443 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 9000: starting
2014-08-08 17:08:11,444 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 3 on 9000: starting
2014-08-08 17:08:11,445 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 4 on 9000: starting
2014-08-08 17:08:11,445 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 9000: starting
2014-08-08 17:08:11,445 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 6 on 9000: starting
2014-08-08 17:08:11,445 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 7 on 9000: starting
2014-08-08 17:08:11,446 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 8 on 9000: starting
2014-08-08 17:08:11,451 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 9000: starting
2014-08-08 17:08:15,118 ERROR
org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
as:hadoop cause:java.io.IOException: File
/home/hadoop/hadoop-1.0.4/mapred/system/jobtracker.info could only be
replicated to 0 nodes, instead of 1
2014-08-08 17:08:15,119 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 9000, call addBlock(/home/hadoop/hadoop-1.0.4/mapred/system/
jobtracker.info, DFSClient_-1865819579, null) from 10.4.128.33:47371:
error: java.io.IOException: File /home/hadoop/hadoop-1.0.4/mapred/system/
jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /home/hadoop/hadoop-1.0.4/mapred/system/
jobtracker.info could only be replicated to 0 nodes, instead of 1
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
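
The tail above also shows "Network topology has 0 racks and 0 datanodes". As a
next check (a rough sketch; the DataNode log file name is an assumption based
on my user and host names), I can run the following on the NameNode, and tail
the DataNode log on one of the slave nodes:

hadoop dfsadmin -report    # lists the DataNodes currently registered with the NameNode
hadoop fsck /              # reports overall HDFS health
tail -n 100 $HADOOP_HOME/logs/hadoop-hadoop-datanode-bam-n3.log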

What could be the issue here?

Thanks.


On Fri, Aug 8, 2014 at 10:18 PM, Inosh Goonewardena <in...@wso2.com> wrote:

> Hi Rajkumar,
>
> You can also use the Hadoop web consoles for monitoring purposes.
>
> http://<NAME_NODE_IP>:50070/ – web UI of the NameNode daemon
> http://<JOB_TRACKER_IP>:50030/ – web UI of the JobTracker daemon
> http://<TASK_TRACKER_IP>:50060/ – web UI of the TaskTracker daemon
>
> Also check the log files of all Hadoop daemons (<HADOOP_HOME>/logs) to make
> sure that there are no errors, because sometimes there can be connection
> issues due to firewall restrictions.
>
> On Fri, Aug 8, 2014 at 10:08 PM, Gihan Anuruddha <gi...@wso2.com> wrote:
>
>> Yes, this is correct. Based on the above output, you have 1 NameNode,
>> 1 SecondaryNameNode and 3 DataNodes. Please use localhost:9000 and
>> localhost:9001 to check your cluster.
>>
>>
>> On Fri, Aug 8, 2014 at 12:21 PM, Rajkumar Rajaratnam <rajkum...@wso2.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I am configuring a Fully-Distributed, High-Availability BAM Setup by
>>> following [1].
>>>
>>> [2] says that when we run jps, the output should look like the following:
>>>
>>> hadoop@ubuntu:/usr/local/hadoop$ jps
>>> 2287 TaskTracker
>>> 2149 JobTracker
>>> 1788 NameNode
>>>
>>>
>>> Which node should show the above status? I didn't get the above output on
>>> any of the nodes.
>>>
>>> After configuring the Hadoop cluster, when I run the jps command on each
>>> node, the output is as follows:
>>>
>>> Node-1
>>> hadoop@bam-n1:~$ jps
>>> 15324 Jps
>>> 14953 NameNode
>>> 15130 JobTracker
>>>
>>> Node-2
>>> hadoop@bam-n2:~$ jps
>>> 14384 Jps
>>> 14232 SecondaryNameNode
>>>
>>> Node-3
>>> hadoop@bam-n3:~/hadoop-1.0.4$ jps
>>> 12259 TaskTracker
>>> 12329 Jps
>>> 12087 DataNode
>>>
>>> Node-4
>>> hadoop@bam-n4:/home/ubuntu$ jps
>>> 11907 DataNode
>>> 12157 Jps
>>> 12081 TaskTracker
>>>
>>> Node-5
>>> hadoop@bam-node-5:~/hadoop-1.0.4$ jps
>>> 12220 Jps
>>> 11960 DataNode
>>> 12134 TaskTracker
>>>
>>> Is the Hadoop cluster set up correctly?
>>>
>>> 1.
>>> https://docs.wso2.com/display/CLUSTER420/Fully-Distributed%2C+High-Availability+BAM+Setup
>>> 2.
>>> https://docs.wso2.com/display/CLUSTER420/Known+Issues+in+Hadoop+Cluster
>>>
>>> Thanks.
>>>
>>> --
>>> Rajkumar Rajaratnam
>>> Software Engineer | WSO2, Inc.
>>> Mobile +94777568639 | +94783498120
>>>
>>> _______________________________________________
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> W.G. Gihan Anuruddha
>> Senior Software Engineer | WSO2, Inc.
>> M: +94772272595
>>
>> _______________________________________________
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Regards,
>
> Inosh Goonewardena
> Associate Technical Lead- WSO2 Inc.
> Mobile: +94779966317
>



-- 
Rajkumar Rajaratnam
Software Engineer | WSO2, Inc.
Mobile +94777568639 | +94783498120
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
