Re: Exceptions in Hadoop and Hbase log files

2013-10-20 Thread Vimal Jain
I will try that if I get them next time.
Could anyone please explain the cause of these exceptions?
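For anyone triaging the same symptom, the heartbeat-loss events can be counted straight out of the namenode log. A minimal sketch, using log lines from this thread as a stand-in input (the real log path, e.g. under $HADOOP_HOME/logs, is an assumption and not taken from the original post):

```shell
# Sketch: count heartbeat-loss events in a namenode log.
# The heredoc stands in for reading a real log file; on a live node you
# might use: log=$(cat "$HADOOP_HOME"/logs/hadoop-*-namenode-*.log)
log=$(cat <<'EOF'
2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/192.168.20.30:50010
EOF
)
# Count lines where the namenode declared a datanode dead.
lost=$(printf '%s\n' "$log" | grep -c 'lost heartbeat from')
echo "heartbeat losses: $lost"
```

A non-zero count means the namenode dropped the datanode, which in turn explains the later "could only be replicated to 0 nodes" failures.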


On Fri, Oct 18, 2013 at 4:03 PM, divye sheth divs.sh...@gmail.com wrote:

 I would recommend stopping the cluster and then starting the daemons one by
 one:
 1. stop-dfs.sh
 2. hadoop-daemon.sh start namenode
 3. hadoop-daemon.sh start datanode

 This will surface any startup errors; also verify that the datanode is able
 to communicate with the namenode.

 Thanks
 Divye Sheth


 On Fri, Oct 18, 2013 at 3:51 PM, Vimal Jain vkj...@gmail.com wrote:

  Hi,
  I am running HBase in pseudo-distributed mode (HBase 0.94.7 and Hadoop
  1.1.2). I am getting the following exceptions in Hadoop's namenode and
  datanode log files:
 
  Namenode :-
 
  2013-10-18 10:33:37,218 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
  NameSystem.heartbeatCheck: lost heartbeat from 192.168.20.30:50010
  2013-10-18 10:33:37,242 INFO org.apache.hadoop.net.NetworkTopology:
  Removing a node: /default-rack/192.168.20.30:50010
  2013-10-18 10:35:27,606 INFO
  org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
  transactions: 64 Total time for transactions(ms): 1Number
  of transactions batched in Syncs: 0 Number of syncs: 43 SyncTimes(ms): 86
  2013-10-18 10:35:27,614 ERROR
  org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
  as:hadoop cause:java.io.IOException: File
  /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
  could only be replicated to 0 nodes, instead of 1
  2013-10-18 10:35:27,895 INFO org.apache.hadoop.ipc.Server: IPC Server
  handler 9 on 9000, call
  addBlock(/hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e,
  DFSClient_hb_rs_hbase.rummycircle.com,60020,1382012725057, null) from
  192.168.20.30:44990: error: java.io.IOException: File
  /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
  could only be replicated to 0 nodes, instead of 1
  java.io.IOException: File
  /hbase/event_data/433b61f2a4ebff8f2e4b89890508a3b7/.tmp/99797a61a8f7471cb6df8f7b95f18e9e
  could only be replicated to 0 nodes, instead of 1
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
  at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
 
 
  Data node :-
 
  2013-10-18 06:13:14,499 WARN
  org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
  192.168.20.30:50010, storageID=DS-1816106352-192.168.20.30-50010-1369314076237,
  infoPort=50075, ipcPort=50020):Got exception
  while serving blk_-3215981820534544354_52215 to /192.168.20.30:
  java.net.SocketTimeoutException: 48 millis timeout while waiting for
  channel to be ready for write. ch : java.nio.channels.SocketChannel[connected
  local=/192.168.20.30:50010 remote=/192.168.20.30:36188]
  at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
  at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
  at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
  at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
  at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
  at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
  at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
  at java.lang.Thread.run(Thread.java:662)
 
  --
  Thanks and Regards,
  Vimal Jain
 




-- 
Thanks and Regards,
Vimal Jain


Re: Exceptions in Hadoop and Hbase log files

2013-10-18 Thread Vimal Jain
Some more exceptions from the datanode log:

2013-10-18 10:37:53,693 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockReceived
message from unregistered or dead node blk_-2949905629769882833_52274
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at sun.proxy.$Proxy5.blockReceived(Unknown Source)
at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1006)
at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
at java.lang.Thread.run(Thread.java:662)

2013-10-18 10:37:53,696 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Got blockReceived
message from unregistered or dead node blk_-2949905629769882833_52274
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4188)
at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1069)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)

These exceptions keep filling up my disk space.
Let me know if you need more information.
Please help.
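On the disk-space side: daemon log growth can usually be capped in Hadoop's conf/log4j.properties by routing the root logger to the rolling-file appender with a bounded size and backup count. A sketch based on the stock Hadoop 1.x log4j.properties (check the property names against your copy; the sizes are illustrative, not recommendations from this thread):

```properties
# Cap daemon log growth: roll at 64 MB, keep at most 10 old files.
hadoop.root.logger=INFO,RFA
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=64MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

This only bounds the symptom; the dead/unregistered datanode still needs to be fixed so the exceptions stop appearing.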



Re: Exceptions in Hadoop and Hbase log files

2013-10-18 Thread divye sheth
I would recommend stopping the cluster and then starting the daemons one by
one:
1. stop-dfs.sh
2. hadoop-daemon.sh start namenode
3. hadoop-daemon.sh start datanode

This will surface any startup errors; also verify that the datanode is able
to communicate with the namenode.
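A rough way to script the second check: in Hadoop 1.x, `hadoop dfsadmin -report` prints a "Datanodes available" line showing whether any datanode is live. The sketch below parses a hypothetical sample of that output; on a real node you would capture `report=$(hadoop dfsadmin -report)` instead of using the heredoc:

```shell
# Sketch: detect the zero-live-datanodes condition from a dfsadmin-style
# report. The sample text below is a stand-in, not real cluster output.
report=$(cat <<'EOF'
Datanodes available: 0 (1 total, 1 dead)
EOF
)
# Extract the live-node count from the "Datanodes available: N" line.
live=$(printf '%s\n' "$report" | sed -n 's/^Datanodes available: \([0-9][0-9]*\).*/\1/p')
if [ "${live:-0}" -eq 0 ]; then
  echo "no live datanodes: writes will fail with 'could only be replicated to 0 nodes'"
fi
```

If the count is zero, the datanode never registered (or was declared dead), which matches both the namenode's "lost heartbeat" line and the replication failure in the logs.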

Thanks
Divye Sheth

